Is a 'timeout' of 0.2 ms for UART communication realistic?

Tags: communication, overhead, speed, timeout, uart

In most examples (from Arduino, STM32, whatever) I see timeouts of 10 to 100 ms for different kinds of communication.

However, I am wondering how much 'overhead' there really is. Assume I want to send 8 bytes over RS-485 at 2.5 Mbit/s and get a response back (let's say also 8 bytes). With a start and a stop bit per byte, one 8-byte message is 80 bits on the wire: 80 bits / 2,500,000 bit/s = 32 µs, so roughly 64 µs for both messages.

Now there needs to be some processing (interrupt handling) on both sides (sending and receiving). I'm using HAL, which probably adds some overhead, so let's assume a generous 10,000 instructions. On a 72 MHz CPU (roughly one instruction per cycle) that takes 10,000 / 72,000,000 s ≈ 139 µs.
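
A quick sanity check of that arithmetic (a throwaway sketch; the 10-bits-per-byte framing and the one-instruction-per-cycle figure are my own assumptions):

    /* Back-of-envelope check of the figures above.  Assumptions: 10 bits per
       byte on the wire (start + 8 data + stop) and roughly 1 instruction per
       cycle for the assumed 10,000 instructions of HAL/interrupt overhead.  */
    #include <stdio.h>

    int main(void)
    {
        const double bit_rate    = 2.5e6;   /* RS-485 / UART bit rate         */
        const double cpu_hz      = 72e6;    /* STM32F103 core clock           */
        const double frame_bytes = 8.0;     /* request size == response size  */
        const double instr       = 10000.0; /* assumed HAL/interrupt overhead */

        double t_msg = frame_bytes * 10.0 / bit_rate; /* one message on the wire */
        double t_cpu = instr / cpu_hz;                /* assumed processing time */

        printf("wire time per message: %5.1f us\n", t_msg * 1e6);       /*  32.0 */
        printf("wire time, both ways : %5.1f us\n", 2.0 * t_msg * 1e6); /*  64.0 */
        printf("assumed CPU overhead : %5.1f us\n", t_cpu * 1e6);       /* 138.9 */
        printf("total                : %5.1f us\n",
               (2.0 * t_msg + t_cpu) * 1e6);                            /* ~203  */
        return 0;
    }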

Together that adds up to roughly 200 µs.
Am I missing something?

(Note that I intend to use interrupts, and probably DMA later on, since that will most likely have even less overhead.)
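
For reference, this is roughly the interrupt-driven shape I have in mind (a minimal HAL sketch; huart1 and the fixed 8-byte frame size are placeholders for my actual setup, and the RS-485 driver-enable handling is not shown):

    /* Minimal sketch of the interrupt-driven request/response exchange
       (STM32 HAL).  huart1 and the fixed 8-byte frame size are placeholders;
       RS-485 driver enable (DE) handling is deliberately left out here.     */
    #include "stm32f1xx_hal.h"

    extern UART_HandleTypeDef huart1;

    static uint8_t tx_frame[8];
    static uint8_t rx_frame[8];
    static volatile uint8_t response_received = 0;

    void send_request(void)
    {
        response_received = 0;
        /* Arm reception for the expected 8-byte reply, then start sending. */
        HAL_UART_Receive_IT(&huart1, rx_frame, sizeof rx_frame);
        HAL_UART_Transmit_IT(&huart1, tx_frame, sizeof tx_frame);
    }

    /* Called from the HAL UART ISR once all 8 response bytes are in rx_frame. */
    void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
    {
        if (huart == &huart1)
            response_received = 1;
    }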

EDIT

  • Communication from an STM32F103C8T6 to another STM32F103C8T6
  • Both running at 72 MHz
  • RS-485 via UART at 2.5 Mbit/s (UART init sketch below, after this list)
  • No other higher-priority interrupts (nothing that can block the UART)
  • Requirement: not a hard one, but preferably multiple request/response exchanges within 1 ms
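
For completeness, the UART setup I'm assuming is essentially the stock HAL configuration at 2.5 Mbit/s (a sketch; USART1 on the 72 MHz APB2 bus, 8N1, names in CubeMX style):

    /* UART configuration sketch for the 2.5 Mbit/s link (STM32F103, HAL, 8N1).
       USART1 is clocked from the 72 MHz APB2 bus, so 2.5 Mbit/s is within its
       range (max is fPCLK/16 = 4.5 Mbit/s with 16x oversampling).            */
    #include "stm32f1xx_hal.h"

    UART_HandleTypeDef huart1;

    static void uart_init(void)
    {
        huart1.Instance        = USART1;
        huart1.Init.BaudRate   = 2500000;
        huart1.Init.WordLength = UART_WORDLENGTH_8B;
        huart1.Init.StopBits   = UART_STOPBITS_1;
        huart1.Init.Parity     = UART_PARITY_NONE;
        huart1.Init.Mode       = UART_MODE_TX_RX;
        huart1.Init.HwFlowCtl  = UART_HWCONTROL_NONE;

        if (HAL_UART_Init(&huart1) != HAL_OK)
        {
            /* Error_Handler() or similar in the real project. */
        }
    }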

Best Answer

The answer is, unfortunately, "it depends".

What are you talking to? What kind of processing do the nodes have to do before they can respond? How long does it actually take for the signal to travel along the cable and be received (this only really matters for REALLY long RS485 links)? What are the limits (driver enable and turnaround) on your RS485 drivers?

Timeouts are always a good idea -- they ensure that no matter what happens on the wire, your device will respond in a predictable and controlled manner. It means your protocol will be more robust and a comm glitch won't kill the link. However, 200 µs seems really tight, and I have to ask why you would want to enforce such a fast timeout. 2.5 Mbps is pretty fast, but you are talking about RS-485 and need to manage the driver enable (DE) and inter-frame timing as well: have you given these items any thought? They have a direct impact on the link's capabilities and ultimately on your timeout value.
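
If after thinking it through you still want a sub-millisecond timeout, note that HAL_GetTick() only has 1 ms resolution by default, so you'd need a hardware timer or the Cortex-M3 cycle counter to enforce it. A rough sketch of the latter (assuming a 72 MHz core clock and a response_received flag set from your RX-complete interrupt):

    /* Rough sketch of a ~200 us timeout using the Cortex-M3 DWT cycle counter,
       since HAL_GetTick() only ticks once per millisecond by default.  Assumes
       a 72 MHz core clock and a flag set from the RX-complete interrupt.      */
    #include "stm32f1xx_hal.h"

    #define TIMEOUT_CYCLES  (72UL * 200UL)      /* 200 us worth of CPU cycles  */

    extern volatile uint8_t response_received;

    void dwt_init(void)
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the DWT unit */
        DWT->CYCCNT = 0;
        DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start cycle counter */
    }

    /* Returns 1 if the response arrived within the timeout, 0 otherwise. */
    int wait_for_response(void)
    {
        uint32_t start = DWT->CYCCNT;

        while (!response_received)
        {
            if ((DWT->CYCCNT - start) > TIMEOUT_CYCLES)
                return 0;                                /* timed out */
        }
        return 1;
    }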

If you're using UARTs with hardware driver-enable assist, that's a bonus, especially at such high data rates; otherwise you're going to be either spinning while you wait for the transmit shift register to empty (NOT the holding register!) or incurring extra interrupt processing to turn the driver off once the last bit has been put on the wire. Some transceivers also need a bit of extra time before the driver can be disabled, which adds more to the equation.
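
To make the driver-enable point concrete: without hardware assist you have to keep DE asserted until the last bit has actually left the transmit shift register, i.e. wait on the TC (transmission complete) flag rather than TXE. Something along these lines (a sketch only; the DE pin names are placeholders):

    /* Illustration of the driver-enable turnaround without hardware assist:
       DE must stay asserted until the last bit has left the TX *shift*
       register (TC flag), not just the holding register (TXE flag).
       The DE pin names below are placeholders.                              */
    #include "stm32f1xx_hal.h"

    #define RS485_DE_GPIO_Port  GPIOA        /* placeholder: your DE/RE pin */
    #define RS485_DE_Pin        GPIO_PIN_8

    void rs485_send_blocking(UART_HandleTypeDef *huart,
                             const uint8_t *data, uint16_t len)
    {
        HAL_GPIO_WritePin(RS485_DE_GPIO_Port, RS485_DE_Pin, GPIO_PIN_SET);

        HAL_UART_Transmit(huart, (uint8_t *)data, len, HAL_MAX_DELAY);

        /* Blocking HAL_UART_Transmit() already waits for TC, but with
           interrupt- or DMA-driven transmission you must wait for (or take
           an interrupt on) TC yourself before dropping DE:                  */
        while (__HAL_UART_GET_FLAG(huart, UART_FLAG_TC) == RESET)
            ;

        HAL_GPIO_WritePin(RS485_DE_GPIO_Port, RS485_DE_Pin, GPIO_PIN_RESET);
    }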

What provisions does the protocol you're using have for data integrity? Do you have to calculate CRCs or enforce actual parameter integrity? These will eat up reception and packet-processing time as well, especially if you don't have hardware assistance. Does the protocol allow for packet fragmentation or out-of-order transmission and reception? What happens in the case of a collision? You haven't given nearly enough information, and I suspect from the style of your question that you haven't given these important things the thought they require for us to answer your question properly.
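
To give one concrete example of that integrity cost: a plain software CRC-16 (the Modbus polynomial, shown purely as an illustration since I don't know your protocol) runs 64 inner-loop iterations for an 8-byte frame, i.e. a few hundred cycles, which is already a visible slice of a 200 µs budget on a 72 MHz part:

    /* Illustration of the per-frame integrity cost: a bitwise CRC-16 using
       the Modbus polynomial (purely as an example -- your protocol may use
       something else entirely).  For an 8-byte frame this is 64 inner-loop
       iterations, i.e. on the order of a few hundred CPU cycles.            */
    #include <stdint.h>
    #include <stddef.h>

    uint16_t crc16_modbus(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0xFFFF;

        for (size_t i = 0; i < len; i++)
        {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)
            {
                if (crc & 0x0001)
                    crc = (crc >> 1) ^ 0xA001;   /* reflected 0x8005 polynomial */
                else
                    crc >>= 1;
            }
        }
        return crc;
    }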

Take some time and think through the problem. At 2.5 Mbps, a system with a 72 MHz clock doesn't have a lot of time to spend on these things, particularly if you're doing other data processing in addition to running the comm link. The processing power involved isn't a lot, but you seem to be intentionally tying one hand behind your own back and giving yourself a much more difficult problem without a good reason.

And finally -- if at the end of the day you want to wing it, why not just pick a number for the timeout and experiment? Load the systems down, give yourself the longest physical link, make it kind of shitty, add some noise and uncertainty, elevate the temperature, and see where the system starts to break. Ideally you'd be doing this anyway.