More accuracy from analog multiplication


Let's say I want to detect a voltage signal that's guaranteed to be between 0 V and 1 V using an ADC whose upper range is 5 V. I can hook the signal up to the ADC directly, but then I'll only be using one fifth of the ADC's range. Alternatively, I can amplify the signal by a factor of 5 so that it spans the entire ADC input range, and then divide the result by 5 in software to recover a number between 0 and 1. Would this latter approach offer better accuracy? Intuitively, I would think that extending the signal to match the entire ADC range would offer better accuracy, but I wonder if I'm just fooling myself.
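To put numbers on it (assuming a 10-bit converter):

$$\Delta V_\mathrm{direct} = \frac{5\ \mathrm{V}}{2^{10}} \approx 4.9\ \mathrm{mV}, \qquad \Delta V_{\times 5} = \frac{5\ \mathrm{V}}{5 \cdot 2^{10}} \approx 0.98\ \mathrm{mV}$$

so the amplified path would have roughly five times finer resolution referred to the input, ignoring any error the amplifier itself adds.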

Best Answer

From the additional information in the comments, you are using an ATmega168. This microcontroller has three options for the ADC reference voltage:

  1. \$V_\mathrm{CC}\$ - usually 5 or 3.3 V.
  2. Internal 1.1 V bandgap reference
  3. External reference (< \$V_\mathrm{CC}\$)

The 10-bit ADC divides the chosen reference voltage into \$2^{10} = 1024\$ steps, so one LSB corresponds to \$V_\mathrm{ref}/1024\$. Rather than prescaling your signal to use the ADC's full dynamic range, you can simply select a more appropriate reference.
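Concretely, the step sizes for the three reference options work out to:

$$\mathrm{LSB} = \frac{V_\mathrm{ref}}{2^{10}} \approx \begin{cases} 4.9\ \mathrm{mV}, & V_\mathrm{ref} = 5\ \mathrm{V} \\ 1.07\ \mathrm{mV}, & V_\mathrm{ref} = 1.1\ \mathrm{V} \\ 1.00\ \mathrm{mV}, & V_\mathrm{ref} = 1.024\ \mathrm{V} \end{cases}$$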

In your case, the internal 1.1 V reference gives an LSB of about 1.1 mV and is very easy to use: simply set ADMUX[7:6] = 0b11 and put a decoupling capacitor on the AREF pin. If you want to use an external 1.024 V reference instead, set ADMUX[7:6] = 0b00 for an LSB of exactly 1.0 mV.
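As a minimal setup sketch (assuming avr-gcc/avr-libc, a 16 MHz clock, and a single-ended input on ADC0; the channel and prescaler are illustrative choices, not requirements):

```c
#include <avr/io.h>

/* Select the internal 1.1 V bandgap reference (ADMUX[7:6] = 0b11,
 * i.e. REFS1:REFS0 = 1:1) and single-ended channel ADC0. For an
 * external reference on AREF, clear both REFS bits (0b00) instead. */
static void adc_init(void)
{
    /* Internal 1.1 V reference, result right-adjusted, input ADC0. */
    ADMUX = (1 << REFS1) | (1 << REFS0);

    /* Enable the ADC with a /128 prescaler: 125 kHz at 16 MHz F_CPU,
     * inside the recommended 50-200 kHz range for full resolution.  */
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0);
}

static uint16_t adc_read(void)
{
    ADCSRA |= (1 << ADSC);            /* start a single conversion    */
    while (ADCSRA & (1 << ADSC))      /* ADSC clears itself when done */
        ;
    return ADC;                       /* 10-bit result, 0..1023       */
}
```

A raw count then scales back to millivolts in software as, e.g., `uint16_t mv = (uint32_t)adc_read() * 1100UL / 1024;` with the 1.1 V reference.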

Another advantage of using the bandgap or an external reference is that you are not using the noisy \$V_\mathrm{CC}\$ rail directly as your reference voltage. A dedicated reference IC typically includes a good deal of internal filtering and will give you the lowest noise of the three options.