Electronic – Effectiveness of Parity Checking

error, parity, signal processing

A parity bit lets a receiver know whether or not an input is correct: the receiver checks that the count of 1's matches the convention behind the parity bit (be it even or odd parity). This seems very ineffective to me, and even 'corrupting' (not sure what the right word would be).

What if the message is not corrupted but the parity bit itself is corrupted? Does this mean the message has to be resent, or does the parity check have a way around this? And what if 2 bits flip so that the count of 1's doesn't change: there is no parity error, but the message has changed.

Am I understanding parity checking incorrectly, am I missing something more about parity, or is it really that ineffective as an error checker? The reason this is throwing me off is that, if it really is this ineffective, why is it taught so much (at least in my program at my university) while these issues are practically ignored?

Best Answer

You seem to understand parity generation/checking, but it is only one technique out of many that are used. It is not generally used as the means of checking data within a message composed of many bytes.

Parity was one of the earliest data-checking techniques and has traditionally been used where the data is very limited, such as 8 bits over a hardware link (for example, a parity bit on a memory bus). The behavior when an error is detected depends upon the system. The advantage of parity is its simplicity and low overhead.
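To make the generation/checking mechanism concrete, here is a minimal sketch of even parity over a single byte (the function names are just illustrative):

```python
def add_even_parity(byte):
    """Return (byte, parity_bit) so that the total count of 1s is even."""
    parity = bin(byte).count("1") % 2
    return byte, parity

def check_even_parity(byte, parity_bit):
    """True if the received byte plus parity bit still has an even 1-count."""
    return (bin(byte).count("1") + parity_bit) % 2 == 0

data, p = add_even_parity(0b10110010)
assert check_even_parity(data, p)             # clean transfer passes
assert not check_even_parity(data ^ 0b01, p)  # a single flipped bit is caught
assert check_even_parity(data ^ 0b11, p)      # two flipped bits slip through, as you noted
```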

In a communication system where the data is combined into messages, the most common approach is to use a Cyclic Redundancy Check (CRC) or checksum, where a 16- to 32-bit word is appended to the message. This can detect far more than single-bit errors.
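As a rough sketch of how a CRC is used on a message, here Python's standard `zlib.crc32` stands in for whatever CRC a real protocol would specify (many links use CRC-16 or a protocol-specific polynomial instead):

```python
import zlib

message = b"example payload"
crc = zlib.crc32(message)                           # 32-bit check value
frame = message + crc.to_bytes(4, "big")            # sender appends the CRC

# Receiver recomputes the CRC over the payload and compares
payload, rx_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
assert zlib.crc32(payload) == rx_crc                # intact frame passes

corrupted = bytearray(frame)
corrupted[0] ^= 0b11                                # flip two bits in the first byte
assert zlib.crc32(bytes(corrupted[:-4])) != rx_crc  # the 2-bit error is detected
```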

More advanced techniques such as Reed-Solomon codes can not only detect but also correct errors.
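Reed-Solomon itself is too involved for a short snippet, but the much simpler Hamming(7,4) code (used here only as a stand-in to show the principle) illustrates how added redundancy can locate and repair a single flipped bit rather than merely detect it:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Pack 4 data bits into a 7-bit codeword: p1 p2 d1 p4 d2 d3 d4."""
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Return the 4 data bits, correcting a single flipped bit if present."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # 1-based position of the bad bit, 0 = clean
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode(1, 0, 1, 1)
codeword[5] ^= 1                      # corrupt one bit in transit
assert hamming74_decode(codeword) == [1, 0, 1, 1]
```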

In terms of effectiveness, even simple parity can give good value for systems that have a low error rate: if the chance of a single-bit error is 1 in 10^9 and errors are independent, the chance of a given pair of bits both flipping (an undetected 2-bit error) is about 1 in 10^18, which is minuscule and unlikely ever to occur in the life of the system.
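If you want to check that order-of-magnitude argument, the binomial formula gives the probability of exactly two bits flipping within one parity-protected byte (the 1e-9 figure is the assumed raw bit-error rate from the paragraph above):

```python
from math import comb

p = 1e-9                      # assumed probability of a single bit error
n = 8                         # bits covered by one parity bit
# Exactly two of the n bits flip -> the parity check passes silently
p_undetected = comb(n, 2) * p**2 * (1 - p)**(n - 2)
print(f"{p_undetected:.1e}")  # ~2.8e-17 per byte
```

Even after summing over all 28 possible bit pairs in the byte, the figure stays around 3 in 10^17 per byte, so the conclusion is unchanged.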

When using parity, the system designer must ensure that simple correlated errors do not cause a silent failure. For example, with 8 data bits plus one parity bit, a disconnected cable that forces all nine lines to read 1 gives a count of nine 1's, which is odd: ODD parity would accept it and the fault would go undetected, whereas EVEN parity would flag it.
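The stuck-at-1 scenario is easy to play with, again assuming 8 data lines plus one parity line (nine lines in total):

```python
def parity_ok(bits, even=True):
    """Check a 9-bit word (8 data bits + 1 parity bit) against even or odd parity."""
    ones = sum(bits)
    return ones % 2 == 0 if even else ones % 2 == 1

stuck_high = [1] * 9                      # disconnected cable: every line reads 1
print(parity_ok(stuck_high, even=False))  # True  -> odd parity accepts the fault
print(parity_ok(stuck_high, even=True))   # False -> even parity flags it
```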

However, simple parity would not be a good choice on a wireless system, where the raw error rate might be 1 in 10^3 and a double-bit error could occur within a very short time. For a wireless link, much more sophisticated error detection and correction would be used.