Electronics – Difference between bit rate and baud rate and their origins

signal, signal-processing

Everyone seems to have different definitions everywhere I look.

According to my lecturer:

\$ R_{bit} = \frac{bits}{time} \$

\$ R_{baud} = \frac{data}{time} \$

According to manufacturers:

\$ R_{bit} = \frac{data}{time} \$

\$ R_{baud} = \frac{bits}{time} \$

Which one is correct, and why? Feel free to explain the origins of why each is defined that way, too.

Related question: link.

Best Answer

Baud rate is the rate at which individual time slots for symbols occur on the line. Not all slots necessarily carry data bits, and in some protocols a single slot can carry multiple bits. Imagine, for example, four voltage levels used to indicate two bits at a time.

Bit rate is the rate at which the actual data bits get transferred. This can be less than the baud rate because some bit time slots are used for protocol overhead. It can also be more than the baud rate in advanced protocols that carry more than one bit per symbol.
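
To make that relationship concrete, here is a minimal Python sketch (the function name and parameters are mine, purely illustrative, not from any library): the data rate is the symbol rate scaled by how many bits each symbol carries and by what fraction of the slots carry data rather than overhead.

```python
def effective_bit_rate(baud, bits_per_symbol, data_slots, total_slots):
    """Effective data bit rate, given the symbol (baud) rate, the number of
    bits each symbol carries, and how many of the slots in a frame carry
    actual data (the rest being start/stop/parity or other overhead)."""
    return baud * bits_per_symbol * data_slots / total_slots

# Overhead only, 1 bit per symbol: the bit rate ends up below the baud rate.
print(effective_bit_rate(9600, 1, 8, 10))   # 7680.0

# Multi-bit symbols: the bit rate can exceed the baud rate.
print(effective_bit_rate(9600, 2, 8, 10))   # 15360.0
```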

For example, consider the common RS-232 protocol. Let's say we're using 9600 baud, 8 data bits, one stop bit, and no parity bit.

Since the baud rate is 9600 symbol slots per second, each time slot is 1/9600 s ≈ 104 µs long. One transmitted "character" consists of a start bit, 8 data bits, and a stop bit, for a total of 10 bit time slots. The whole character therefore takes about 1.04 ms to transmit.

However, only 8 actual data bits are transmitted during that time. The effective bit rate is therefore (8 bits)/(10/9600 s) = 7680 bits/second.
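
The same arithmetic as a short Python sketch, just re-deriving the numbers above (nothing here is library code, only the calculation):

```python
# 9600 baud, 8-N-1 framing: start bit + 8 data bits + stop bit.
BAUD = 9600                       # symbol slots per second
slot_time = 1 / BAUD              # seconds per slot
slots_per_char = 1 + 8 + 1        # start + data + stop = 10 slots
char_time = slots_per_char * slot_time

data_bits_per_char = 8
bit_rate = data_bits_per_char / char_time

print(f"slot time: {slot_time * 1e6:.0f} µs")    # 104 µs
print(f"char time: {char_time * 1e3:.2f} ms")    # 1.04 ms
print(f"bit rate : {bit_rate:.0f} bits/s")       # 7680 bits/s
```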

If this were a different protocol that, for example, used four voltage levels to indicate two bits at a time, with the baud rate held the same, then 16 data bits would be transferred per character. That would make the bit rate 16/(10/9600 s) = 15,360 bits/second, actually higher than the baud rate.
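
And the hypothetical four-level variant, with the slot timing unchanged (again only an illustrative calculation, not any real protocol's parameters):

```python
# Four voltage levels -> 2 bits per symbol slot; framing and baud unchanged.
BAUD = 9600
slots_per_char = 10               # start slot, 8 data slots, stop slot
data_slots = 8
bits_per_slot = 2                 # 4 levels encode 2 bits per slot

char_time = slots_per_char / BAUD                  # still ~1.04 ms
bit_rate = data_slots * bits_per_slot / char_time  # 15360 bits/s

print(f"{bit_rate:.0f} bits/s vs. {BAUD} baud")
```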