AT89LP51ED2 SPI ISP — First attempt to enter ISP mode after reset yields wrong data back, but second attempt works

Tags: 8051, debugging, isp, spi

I have a board using an AT89LP51ED2 from Atmel — an 8051 derivative with on-board Flash program memory and an SPI ISP interface — which I am driving with my Bus Pirate (30 kHz clock, pull-ups ON and wired to the board's Vcc, default SPI settings otherwise, the AUX pin driving /RESET). The ISP protocol is documented by Atmel, both in the datasheet and in this Atmel app note — overall it is superficially similar to AVR SPI ISP, except for the use of two prefix bytes (which makes it incompatible with avrdude). Programming mode requires an entry command: the two prefix bytes 0xAA 0x55, followed by 0xAC 0x53 and a dummy 0x00. The device should return 0x53 while the dummy byte is clocked in, indicating it is in ISP mode.
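As a sanity check on the byte sequence, here is a minimal, hardware-independent sketch of that entry command (the byte values are from the sequence described above; the function names are my own, and the actual SPI transfer is left to whatever driver is in use):

```python
# Sketch only: byte values are the ISP-entry sequence described above;
# function names are made up for illustration.
ISP_PREFIX = bytes([0xAA, 0x55])  # the two AT89LP prefix bytes

def isp_entry_frame():
    # prefix + Program Enable (0xAC 0x53) + a dummy byte to clock the reply out
    return ISP_PREFIX + bytes([0xAC, 0x53, 0x00])

def entry_acknowledged(response):
    # the device answers 0x53 while the dummy byte is being shifted in
    return len(response) == 5 and response[4] == 0x53

print(isp_entry_frame().hex())  # aa55ac5300
```

A reply ending in 0x53 on the dummy byte passes this check; the faulty 0xA7 reply in the transcript below would fail it.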

So I set out to implement the protocol myself, using the Bus Pirate's binary SPI mode, its associated Python module, and a bit of custom scripting on my end. However, I noticed my script would only enter ISP mode intermittently. I was able to reproduce the fault manually using the Bus Pirate's terminal interface, starting from a cold boot of the board:

SPI>a
AUX LOW
SPI>{ 0xaa 0x55 0xac 0x53 0x00 ]
/CS ENABLED
WRITE: 0xAA READ: 0xFF 
WRITE: 0x55 READ: 0xFF 
WRITE: 0xAC READ: 0xFF 
WRITE: 0x53 READ: 0xFF 
WRITE: 0x00 READ: 0xA7 
/CS DISABLED
SPI>{ 0xaa 0x55 0xac 0x53 0x00 ]
/CS ENABLED
WRITE: 0xAA READ: 0xFF 
WRITE: 0x55 READ: 0xAA 
WRITE: 0xAC READ: 0x55 
WRITE: 0x53 READ: 0xAC 
WRITE: 0x00 READ: 0x53 
/CS DISABLED
SPI>

The fault, specifically, is that on the first command the ISP interface echoes back 0xA7 instead of the expected 0x53. Resetting the board let me reproduce it again and capture the sequence on my oscilloscope, including a half-clock glitch LOW at the end of the last 0xFF it shifts out:

Overall 2ms/div shot of the glitching ISP packet
Byte 1 -- 0xAA out, 0xFF back
Byte 2 -- 0x55 out, 0xFF back
Byte 3 -- 0xAC out, 0xFF back
Byte 4 -- 0x53 out, 0xFF back but with a glitch LOW for a half cycle at the end
Byte 5 -- 0x00 out, 0xA7 back instead of 0x53

So, what am I doing wrong? Is this a protocol issue on my end? Or should I be suspecting a faulty microcontroller, or worse yet, an erratum in the programming implementation? Should I simply work around this in my ISP script?

Best Answer

It looks like you may be getting an extra clock pulse before the last bit of the 4th byte. The slave thinks all 8 bits of that byte have been clocked, so it starts driving a 0 on the MISO line during the last bit, perceiving it as the first bit of the next byte.

From then on, everything you read back is shifted over by one bit as a result.
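The captured value supports this: 0xA7 is exactly 0x53 shifted left by one bit, with a 1 (the idle-high MISO state) filling the vacated LSB, which a quick check confirms:

```python
# If the slave's reply starts one bit early, the master's 8 samples see
# bits 6..0 of 0x53 followed by the next line state (idle high = 1).
expected = 0x53
shifted = ((expected << 1) | 1) & 0xFF
print(hex(shifted))  # 0xa7 -- the value actually read back
```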

You probably can't see this extra clock pulse because your clock rate is so slow that you have to be zoomed way out on the oscilloscope to capture a whole byte. Try zooming in on the area between the 7th and 8th bits, and adjust the scope so the narrower pulse is visible.

Whether this is caused by hardware or software is hard to say, but I would guess software -- look for something that could generate a runt clock pulse.
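To illustrate the mechanism, here is a small simulation (my own model, not from any datasheet): the slave advances its shift register once per clock edge it sees, the master's sampling logic never counts the spurious edge, and every byte assembled afterwards comes out shifted by one bit. The exact placement of the spurious edge is an assumption on my part; putting it right at the byte 4/5 boundary, where the end-of-byte glitch appears in the scope capture, reproduces the failing transcript:

```python
def bits(byte_seq):
    # MSB-first bit stream, as SPI mode 0 shifts it out
    return [(b >> (7 - i)) & 1 for b in byte_seq for i in range(8)]

def master_bytes(slave_bits, glitch_index):
    # The slave emits one bit per clock edge it sees; the master never
    # samples on the spurious edge, so that one bit is lost.
    sampled = slave_bits[:glitch_index] + slave_bits[glitch_index + 1:]
    return [int("".join(map(str, sampled[i:i + 8])), 2)
            for i in range(0, len(sampled) - 7, 8)]

# Slave intends to answer 0x53 on the 5th byte; MISO idles high afterwards.
# The bit dropped at index 32 is the MSB of 0x53 -- a 0, matching the
# observed half-cycle glitch LOW.
stream = bits([0xFF, 0xFF, 0xFF, 0xFF, 0x53]) + [1]
print([hex(b) for b in master_bytes(stream, glitch_index=32)])
# ['0xff', '0xff', '0xff', '0xff', '0xa7'] -- matches the failing transcript
```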