Electronic – how long should an I2C slave wait for a STOP bit (if at all)

i2c interrupts pic

I2C frames each group of message bytes between a START and a STOP condition, defined as SDA changing state 1>0 or 0>1 (respectively) while SCL is high, as described here.

I am writing interrupt-driven handlers for the PIC32MX170, and I got quite far using the STOP bit as the signal to the software that the message is done. This then allows for things like checking the rx/tx byte count and so on. I found testing the STOP flag in software to be quite reliable with the combination of hardware and clock that I used.
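To make that concrete, here is a simplified sketch of the approach (not my actual code). It assumes the XC32 toolchain and the PIC32MX I2C SFR names (I2C1STAT, I2C1RCV); the vector and interrupt-flag names should be checked against the device header, and msg_buf / i2c_msg_done are just placeholder application hooks.

```c
#include <xc.h>
#include <sys/attribs.h>
#include <stdint.h>

#define MSG_MAX 32
static volatile uint8_t  msg_buf[MSG_MAX];
static volatile unsigned msg_len;

void i2c_msg_done(const volatile uint8_t *buf, unsigned len);   /* app hook */

void __ISR(_I2C_1_VECTOR, IPL3SOFT) I2C1SlaveISR(void)
{
    IFS1bits.I2C1SIF = 0;                  /* clear the slave event flag */

    if (I2C1STATbits.RBF) {                /* a byte has been received   */
        uint8_t b = I2C1RCV;               /* reading RCV clears RBF     */
        if (!I2C1STATbits.D_A)             /* address byte: new message  */
            msg_len = 0;
        else if (msg_len < MSG_MAX)        /* data byte: store it        */
            msg_buf[msg_len++] = b;
    }

    /* The fragile part: P may or may not be set yet by the time the
       interrupt for the last byte is serviced.                         */
    if (I2C1STATbits.P)
        i2c_msg_done(msg_buf, msg_len);
}
```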

However, I now discover that it is not reliable at all: with a faster clock or a slower driver, the ISR can exit and miss the STOP bit completely. Worse, since the next byte will be the START of a new message, there is no way to know whether the STOP ever arrived (unless you sit there polling the bus, which is not really the kind of thing I want in an ISR, even with a timeout).

Code that depends on flaky combinations of hardware and clocks is also pretty bad, though, so I am facing a redesign that assumes STOP bits are unreliable (luckily I don't need variable byte counts).

(Some uC's always raise an interrupt on STOP, but unfortunately not this one, as far as I can tell.)

But this raises the question: should an ISR wait on the STOP bit? If so, for how long? Is there anything in the spec about this?

EDIT:
I am adding some information, as perhaps I did not express myself completely clearly in my original. My question was really about the means of detecting the message start and stop, which of course is essential. (We have some discussion below about related matters such as where in the code the decoding is done – also very valuable but not what I was asking.)

The issue is basically that the START and STOP (S and P) conditions (actually the status bits that signal them on the device) are not always set yet when the ISR runs, even when that run is the last one for the message. (There is also the question of whether the ISR needs to look at those bits at all, which I think is more about system design, but is also interesting and relevant.)

As well as the S/P flags, there are status bits which tell you what kind of byte you just received: address vs. data, and read vs. write. An address byte with the Write direction always signals the start of a message. After that point a certain structure must be observed, which may involve repeated START conditions and so on. Depending on how you design your message protocol (especially whether or not you support variable length), these bits can also be used to work out the structure of the message. This is what the question is about.
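As an illustration of what I mean (again only a sketch, using the PIC32MX D_A and R_W status bits and the same placeholder buffer as above; EXPECTED_LEN is a hypothetical fixed message length):

```c
#include <xc.h>
#include <stdint.h>

#define MSG_MAX      32     /* as in the sketch above                    */
#define EXPECTED_LEN  4     /* hypothetical fixed message length         */

extern volatile uint8_t  msg_buf[MSG_MAX];
extern volatile unsigned msg_len;
void i2c_msg_done(const volatile uint8_t *buf, unsigned len);   /* app hook */

typedef enum { EV_ADDR_WRITE, EV_ADDR_READ, EV_DATA_RX, EV_DATA_TX } i2c_event_t;

/* Classify the byte just received from the D_A and R_W status bits. */
static i2c_event_t classify_event(void)
{
    if (!I2C1STATbits.D_A)                 /* last byte was an address   */
        return I2C1STATbits.R_W ? EV_ADDR_READ : EV_ADDR_WRITE;
    return I2C1STATbits.R_W ? EV_DATA_TX : EV_DATA_RX;
}

/* Called from the slave ISR for each event. */
void handle_slave_event(void)
{
    switch (classify_event()) {
    case EV_ADDR_WRITE:                    /* master starts writing to us */
        msg_len = 0;                       /* start of a new message      */
        (void)I2C1RCV;                     /* discard the address byte    */
        break;
    case EV_DATA_RX:                       /* payload byte from master    */
        if (msg_len < MSG_MAX)
            msg_buf[msg_len++] = I2C1RCV;
        if (msg_len == EXPECTED_LEN)       /* fixed length: done here,    */
            i2c_msg_done(msg_buf, msg_len);/* no STOP bit needed          */
        break;
    case EV_ADDR_READ:                     /* (repeated) START for a read */
    case EV_DATA_TX:
        /* load I2C1TRN and set I2C1CONbits.SCLREL = 1 to send a byte */
        break;
    }
}
```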

Best Answer

You really need a proper firmware architecture for something like this, where you react to external asynchronous events that are not under your control.

Interrupt routines should service the immediate hardware event, then get out of the way. This is NOT where dealing with arbitrary timing between events should take place. It is also NOT where you should be trying to understand the individual events at a higher level, like a whole IIC message.

Last time I had to implement an IIC slave on a dsPIC, I used the hardware to receive events in an interrupt routine. However, that interrupt routine mostly just pushed events onto a FIFO. That FIFO was then drained by a separate dedicated task that interpreted the events as IIC sequences and acted upon them. This worked quite well.
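A stripped-down sketch of that split (the names here are made up for illustration, and the hardware-specific code that builds each event is omitted): the interrupt routine only pushes small event records into a single-producer, single-consumer ring buffer, and a foreground task drains it.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint8_t type; uint8_t data; } i2c_evt_t;

#define EVT_FIFO_SIZE 16u                 /* power of two for cheap wrap  */
static volatile i2c_evt_t evt_fifo[EVT_FIFO_SIZE];
static volatile uint8_t   evt_head;       /* written only by the ISR      */
static volatile uint8_t   evt_tail;       /* written only by the task     */

static bool evt_push(i2c_evt_t e)         /* called from the ISR          */
{
    uint8_t next = (evt_head + 1u) & (EVT_FIFO_SIZE - 1u);
    if (next == evt_tail)
        return false;                     /* FIFO full: drop or flag it   */
    evt_fifo[evt_head] = e;
    evt_head = next;
    return true;
}

static bool evt_pop(i2c_evt_t *e)         /* called from the foreground   */
{
    if (evt_tail == evt_head)
        return false;                     /* empty                        */
    *e = evt_fifo[evt_tail];
    evt_tail = (evt_tail + 1u) & (EVT_FIFO_SIZE - 1u);
    return true;
}

/* Foreground: drain events and feed the IIC message interpreter. */
void i2c_task(void)
{
    i2c_evt_t e;
    while (evt_pop(&e)) {
        /* interpret_iic_event(&e);  application-specific state machine  */
    }
}
```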

Response to comments

"Foreground" means running a task from your main loop, right?

It means running from not-interrupt code. Whether that is from the main event loop or a different task is up to your firmware design.

the I2c ISR is at a higher or lower priority?

Higher, obviously. That's part of the point of interrupts. If they weren't at a higher priority, they wouldn't be able to interrupt anything.

if it is clock stretching while it waits for the message - doesn't that actually make it longer running, not shorter?

No. The interrupt routine isn't running at all during the clock stretch time. The interrupt routine gets the address byte. It pushes that on a FIFO and exits. The foreground code interprets the start of the IIC message, realizes that it must respond, fills in a buffer of response bytes, and enables the IIC byte-sending interrupt. That interrupt happens immediately. The interrupt routine fetches the first byte from the buffer and writes it to the IIC hardware. That ends the clock stretch and starts the first data byte getting sent. The interrupt routine exits and is run again when the IIC hardware is ready to accept the next data byte.
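In code, the transmit side of that flow might look roughly like this (illustrative names; I2C1TRN and SCLREL are the PIC32MX slave transmit register and clock-release bit, and as a simplification the foreground loads the first byte itself instead of re-arming the interrupt):

```c
#include <xc.h>
#include <stdint.h>

#define REPLY_MAX 8
static volatile uint8_t  reply_buf[REPLY_MAX];
static volatile unsigned reply_len, reply_idx;

/* Foreground: called once the drained events show a read request
   (the hardware is clock-stretching while we prepare the reply).     */
void start_reply(const uint8_t *data, unsigned len)
{
    if (len > REPLY_MAX)
        len = REPLY_MAX;
    for (unsigned i = 0; i < len; i++)
        reply_buf[i] = data[i];
    reply_len = len;
    reply_idx = 0;

    if (reply_idx < reply_len) {
        I2C1TRN = reply_buf[reply_idx++]; /* first byte to the hardware  */
        I2C1CONbits.SCLREL = 1;           /* end the clock stretch       */
    }
}

/* ISR fragment: runs again each time the hardware is ready for the
   next byte of the reply.                                             */
void i2c_tx_isr_fragment(void)
{
    if (reply_idx < reply_len) {
        I2C1TRN = reply_buf[reply_idx++];
        I2C1CONbits.SCLREL = 1;           /* let the master clock it out */
    }
    /* past the end: the master NACKs and ends the transfer             */
}
```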