Electronic – Difference between hard decision and soft decision in error correction codes

error correction

I am having trouble understanding the difference between hard and soft decision. My current understanding is the following. When decoding a stream of bits using a sparse graph, we take into account the probability that each particular bit is a 1 or a 0, and then use that probability to gauge what the bit should be. In hard decision, we disregard the probability information we gain by analyzing the parity bits. This is the source of my confusion: if we are disregarding the probabilities, then how is error checking happening at all? Is hard decision only used for concepts such as interleaving? Please correct me if any of my understanding is incorrect.

Best Answer

The hard decision is what comes out after error correction; the error correction algorithm itself works on the soft decisions.
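
As a toy illustration of why the decoder wants the soft values, here is a rough Python sketch using a 3x repetition code over BPSK; the code choice, the mapping 0 -> +1 / 1 -> -1, and the sample values are all assumptions made purely for the demo.

```python
import numpy as np

# Three noisy copies of one transmitted bit 0 (sent as +1, +1, +1).
received = np.array([-0.1, -0.2, 0.9])

# Hard-decision decoding: slice each copy to a bit first, then take a majority vote.
hard_bits = (received < 0).astype(int)   # -> [1, 1, 0]
hard_decoded = int(hard_bits.sum() > 1)  # vote says 1 (wrong: per-sample reliability was thrown away)

# Soft-decision decoding: keep the raw reliabilities, combine them, decide once.
soft_decoded = int(received.sum() < 0)   # sum = 0.6 -> decide 0 (correct)

print(hard_decoded, soft_decoded)
```

The two weak negative samples win the majority vote, but the single strong positive sample dominates once the actual reliabilities are kept, which is exactly the information a hard decision throws away.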

A hard decision is a bit. It's 0 or 1 (or a symbol like 00/01/10/11 for QPSK, and so on for higher-order modulations). End of story: it's what you put in your output stream to be interpreted as "real" data.
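
For example, with BPSK over a noisy channel (an assumed setup, again just for illustration), making hard decisions is nothing more than slicing each received sample against a threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=8)             # transmitted bits
symbols = 1.0 - 2.0 * bits                    # BPSK mapping: 0 -> +1, 1 -> -1
received = symbols + rng.normal(0.0, 0.8, 8)  # add channel noise

hard = (received < 0).astype(int)             # hard decision: each sample collapses to a single bit
print(bits)
print(hard)                                   # any mismatch is a bit error the slicer can't even see
```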

A soft decision might be represented as a float: something like 0.05 (obviously a 0), 0.92226 (very, very likely a 1), or 0.50001 (good luck without ECC!).
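
For the same assumed BPSK setup with known noise variance, a soft value can be computed as a probability (or, equivalently, a log-likelihood ratio), which is what a soft-decision decoder such as an LDPC belief-propagation decoder works with; the hard decision is only taken at the very end.

```python
import numpy as np

noise_var = 0.8 ** 2                          # assumed known noise variance

def prob_bit_is_one(sample, noise_var):
    """P(bit = 1 | received sample) for BPSK (0 -> +1, 1 -> -1) in AWGN."""
    llr = 2.0 * sample / noise_var            # log-likelihood ratio log P(bit=0)/P(bit=1)
    return 1.0 / (1.0 + np.exp(llr))          # convert the LLR to a probability of a 1

samples = np.array([1.9, -1.2, 0.05])
probs = prob_bit_is_one(samples, noise_var)
print(probs)                                  # roughly [0.003, 0.977, 0.46]: clear 0, likely 1, coin flip
hard = (probs > 0.5).astype(int)              # the hard decision you finally output: [0, 1, 0]
print(hard)
```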