I'm trying to get more than 10 bits of precision from my Arduino ADC, but I can't really figure out the theory behind it. An often-quoted Atmel application note (http://www.atmel.com/Images/doc8003.pdf) says:
It is important to remember that normal averaging does not increase the resolution of
the conversion. Decimation, or Interpolation, is the averaging method, which
combined with oversampling, which increases the resolution
Then what they propose for 'Decimation' is moving the decimal point, which amounts to halving the binary reading for every place you move it; so you might as well divide the base-10 value by 2, 4, 8, and so on. Am I understanding decimation wrong?
I took a look at the note and that is indeed a weird claim (or a confusing way of saying what they actually mean).
Perhaps what they actually mean is that if you want more resolution, you can't divide/shift the result back down to the scale of a single sample, because (in integer arithmetic) that would throw away the very bits you gained.
If your ADC samples are noisy, then of course you can divide to get a less noisy value at the original scale.
The other thing your question brought to mind is that to do oversampling right you need an effective low-pass filter, and a straightforward moving average is not as good a low-pass filter as a properly designed FIR (or IIR) filter; but that point doesn't seem to be supported by the text of the note.