Electronic – Use oversampling followed by the “decimation method” to increase the ADC resolution, not normal averaging

adc, microcontroller

The resolution of an ADC can be increased from 12 bits to 14 bits using the 'oversampling and decimation' method. An Atmel application note says:

The higher the number of samples averaged is, the more selective the low-pass filter will be, and the better the interpolation. The extra samples, m, achieved by oversampling the signal are added, just as in normal averaging, but the result is not divided by m as in normal averaging. Instead the result is right shifted by n, where n is the desired number of extra bits of resolution, to scale the answer correctly. Right shifting a binary number once is equal to dividing the binary number by a factor of 2.
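For concreteness, here is a minimal C sketch of the recipe the note describes, gaining n = 2 extra bits by summing 4^2 = 16 samples and right-shifting by 2. The function read_adc12() is a hypothetical stand-in for whatever driver call returns a raw 12-bit conversion on your part:

```c
#include <stdint.h>

/* Hypothetical raw 12-bit ADC read; substitute your part's driver call. */
extern uint16_t read_adc12(void);

/* Oversample and decimate: to gain n extra bits, sum 4^n samples and
 * right-shift the sum by n. With n = 2: sum 16 samples, shift by 2,
 * yielding a 14-bit result (0..16380). */
uint16_t read_adc14(void)
{
    uint32_t sum = 0;
    for (uint8_t i = 0; i < 16; i++) {  /* 4^2 = 16 samples */
        sum += read_adc12();
    }
    return (uint16_t)(sum >> 2);        /* divide by 2^2, not by 16 */
}
```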

It is important to remember that normal averaging does not increase the resolution of the conversion. Decimation, or interpolation, is the averaging method which, combined with oversampling, increases the resolution.

This reference clearly says that for the decimation method, the result is right-shifted by the desired number of extra bits of resolution, not divided by m as in normal averaging.

  1. So, the question is, why do we need to use the decimation method instead of normal averaging after oversampling to increase the ADC resolution?

  2. It says above "Right shifting a binary number once is equal to dividing the binary number by a factor of 2", but what if we don't use a binary number? How do we use the decimation method in this case?

Best Answer

I wouldn't take that application note too seriously; it contains many errors, both conceptual¹ and typographical.

Adding up a bunch of samples and then scaling the sum by some factor, no matter what you call it, IS averaging. It's also filtering. It is, in fact, just one special case of a finite impulse response (FIR) filter, in which every sample gets its own scale factor and then they get added together to create the result.
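Here is a sketch of that point, with illustrative names (nothing here is from the application note): a general FIR sum-of-products, which reduces to plain averaging when every coefficient is 1/m, and to the oversample-and-shift trick when every coefficient is 1/2^n with m = 4^n samples.

```c
#include <stddef.h>

/* General FIR: every sample gets its own coefficient and the
 * products are summed. */
float fir(const float *h, const float *x, size_t m)
{
    float y = 0.0f;
    for (size_t k = 0; k < m; k++) {
        y += h[k] * x[k];
    }
    return y;
}

/* Normal averaging is the special case h[k] = 1.0f / m for all k.
 * The oversample-and-shift trick is the special case h[k] = 1.0f / (1 << n)
 * with m = 4^n samples, i.e. the same filter with a gain of 2^n. */
```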

So, the question is, why do we need to use the decimation method instead of normal averaging after oversampling to increase the ADC resolution?

It's all the same thing in the end: summing 4^n samples and right-shifting by n is just dividing by 2^n, which is the same as averaging the 4^n samples and then multiplying the average by 2^n to fill the extra bits of the result.

It says above "Right shifting a binary number once is equal to dividing the binary number by a factor of 2", but what if we don't use a binary number? How do we use the decimation method in this case?

Just use ordinary division if the divisor isn't a power of 2.
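As a sketch of what that looks like (again assuming the hypothetical read_adc12() from above), averaging a non-power-of-two number of samples and rescaling with an ordinary multiply and divide:

```c
#include <stdint.h>

extern uint16_t read_adc12(void);  /* hypothetical raw 12-bit read, as above */

/* When the sample count isn't a power of 4, a right shift no longer
 * scales correctly, so multiply and divide instead. Here 10 samples
 * are averaged and the result is placed on a 14-bit scale; note that
 * the resolution argument still calls for 4 samples per extra bit, so
 * the count of 10 is purely illustrative. */
uint16_t read_adc14_div(void)
{
    uint32_t sum = 0;
    for (uint8_t i = 0; i < 10; i++) {
        sum += read_adc12();
    }
    return (uint16_t)((sum * 4u) / 10u);  /* ordinary division, no shift */
}
```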


¹ For example, "white" noise is NOT equivalent to "Gaussian" noise, although many natural noise sources are both Gaussian AND white.