Electronic – How does UART know the difference between data bits and start/stop bits?


I understand that it is common for a UART scheme to use 8N1, meaning 1 start bit, 8 data bits, and 1 stop bit. Something like this:

0xxxxxxxx1

Where 0 is the start bit, the x's are the data, and 1 is the stop bit. In the case of multiple frames being sent back to back continuously, you'd have something like this:

0xxxxxxxx10xxxxxxxx10xxxxxxxx1...

My question is: how can the receiver tell the difference between the start/stop bits and the data bits? To illustrate this point, suppose that the data byte 0xAA is sent over and over. This would look like:

**0**10101010**1** **0**10101010**1** **0**10101010**1**

I made the start/stop bits bold for emphasis, but it seems to me there really is no way to distinguish these from the data bits.

Sure, if the receiver has had a solid error-free connection to the transmitter since eternity past, then I can see how this wouldn't be a problem. Or if the bytes are not sent back to back then it wouldn't be a problem. But I've worked with 8N1 circuits that were continuously transmitting bytes one after another, and I could disconnect/reconnect the wires mid-transmission and the receiver would always jump right back into receiving correctly. How is this possible?

Best Answer

This sounds like a question coming from someone trying to emulate a UART receiver in software or an FPGA. RS-232 uses the terms mark and space, which usually correspond, once digitized, to a '1' and a '0', respectively.

A UART receiver typically divides each bit time (which must be known a priori) into at least 4, but often 16 or more, sub-periods. Upon power-up/reset, it starts out in a state where it expects the serial receive line to be in a mark state. If the line is NOT in a mark state at that moment, it knows it is in the middle of a transmission frame and must wait until it can synchronize. If the line IS in a mark state, it may or may not be in the middle of something and will have to wait and see. This is a problem with RS-232 if you plug into another device while serial communications are already happening, or if you are tapping the asynchronous serial traffic between two other players and have just been reset.

To be absolutely sure when coming out of reset, the UART would need to observe at least N bit times of mark (where N is the number of bits per word, often 7 or 8, and assuming no parity option here) followed by one bit time of space before it can re-synchronize (or else N+1 bit times of space). Many UARTs don't carry that much state around, so they can synchronize incorrectly if started in the middle of a stream. You will then get framing errors and occasional data bytes until the receiver happens to accidentally re-synchronize correctly again. That's often been an acceptable price: normally, cables are connected and devices are powered up in a particular order, so there's rarely an issue.
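As a sketch of that conservative resynchronization rule (the function name and structure are my own, assuming 8 data bits and no parity): a receiver that refuses to treat a space as a start bit until it has seen at least N bit times of mark could look like this, working on a line already sampled once per bit time.

```python
def find_sync(bits, n=8):
    """Return the index of the first space we can trust as a start bit.

    bits: the line sampled once per bit time (1 = mark, 0 = space).
    n:    data bits per word; a run of >= n marks must end in a stop
          bit or idle line, so the next space has to be a start bit.
    """
    marks = 0
    for i, b in enumerate(bits):
        if b == 1:
            marks += 1
        elif marks >= n:
            return i          # a space right after a long run of mark
        else:
            marks = 0         # mid-frame space: keep hunting
    return None               # never saw enough idle to be sure
```

Started mid-stream, such a receiver simply discards bits until a long enough run of mark comes along; that is the price of certainty described above, and why many real UARTs skip it and accept a few framing errors instead.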

Once synchronized, though, the UART knows what to expect. Every frame starts with the receive line going from mark to space (the start bit, which lasts a full bit time), followed by the data bits, and then by at least one bit time of mark (or longer) as the stop bit. If the receiver stays synchronized, it will see that pattern repeated over and over again.
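Checking that repeating pattern is straightforward once you are aligned to a frame boundary. Here is a minimal sketch (my own code, not any particular UART) that parses an already-aligned 8N1 bit stream, reassembling each byte LSB first as it arrives on the wire:

```python
def decode_8n1(bits):
    """bits: 0/1 values sampled once per bit time, aligned to a frame."""
    out = []
    i = 0
    while i + 10 <= len(bits):
        if bits[i] != 0:           # expect the start bit (space)
            raise ValueError("framing error: missing start bit")
        data = bits[i + 1:i + 9]   # 8 data bits, LSB first on the wire
        if bits[i + 9] != 1:       # expect the stop bit (mark)
            raise ValueError("framing error: missing stop bit")
        byte = 0
        for n, b in enumerate(data):
            byte |= b << n         # LSB-first reassembly
        out.append(byte)
        i += 10
    return out

# 0xAA back to back: each frame is start(0) + 01010101 + stop(1)
frame = [0, 0, 1, 0, 1, 0, 1, 0, 1, 1]
print(decode_8n1(frame * 3))       # [170, 170, 170]
```

Feed it the same stream shifted out of alignment and it raises a framing error almost immediately, which is exactly the symptom described above for a receiver that locked on at the wrong spot.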

Part of the reason for dicing each bit time up into something like 4 or 16 sub-periods is that the clocks used by the transmitter and the receiver aren't necessarily perfectly accurate, and they are otherwise simply asynchronous to each other. So part of the synchronization that goes on in the receiver is lining up its diced-up periods with the transmitter's timing. The finer-grained that division, the better aligned the receiver can become.
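As a sketch of how those sub-periods get used (an assumed 16x structure, not any specific chip): watch for the mark-to-space edge of the start bit, step half a bit time forward so the sample point lands mid-bit, then take one sample every 16 sub-periods. Sampling at bit centers gives maximum tolerance to clock mismatch between the two ends.

```python
OVERSAMPLE = 16  # sub-periods per bit time

def receive_byte(samples, start):
    """Recover one 8N1 byte from a line sampled at 16x the bit rate.

    samples: 0/1 values at 16x the bit rate; assumes a mark-to-space
    edge exists at or after index `start`.
    """
    # 1. Find the falling edge that begins the start bit.
    i = start
    while samples[i] == 1:
        i += 1
    # 2. Step to the middle of the start bit and confirm it is still
    #    a space (a crude glitch filter).
    mid = i + OVERSAMPLE // 2
    if samples[mid] != 0:
        return None  # noise spike, not a real start bit
    # 3. Sample each of the 8 data bits at its center, LSB first.
    byte = 0
    for n in range(8):
        byte |= samples[mid + (n + 1) * OVERSAMPLE] << n
    # 4. Check the stop bit.
    if samples[mid + 9 * OVERSAMPLE] != 1:
        return None  # framing error
    return byte

# Idle line, then one 0xAA frame, each bit held for 16 samples:
frame = [0, 0, 1, 0, 1, 0, 1, 0, 1, 1]
samples = [1] * 20 + [b for bit in frame for b in [bit] * OVERSAMPLE]
print(hex(receive_byte(samples, 0)))  # 0xaa
```

Note that the mid-bit confirmation in step 2 also rejects a one-sample noise spike that a naive edge-triggered receiver would mistake for a start bit.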