Electronics – Trade-offs between ADC range and amplifier gain design

adc, amplifier

Edited on 15 Aug

Trade-offs between ADC range and amplifier gain

If I want to measure a very small signal with a 16-bit ADC, say a 10 kHz sine wave with amplitude ranging from 0 to 1 µV, I have to amplify the signal first. The signal should be amplified so that its peak-to-peak value equals the input range of the ADC. It will then be shifted up by half of the ADC range by an ADC driver (with a level shifter).

My question is: what are the trade-offs between amplifier gain selection and ADC range?

Should I use a 2.5 V range ADC (16-bit) with a gain of 1,250,000, so that the peak-to-peak value of the largest sine wave (1 µV amplitude) is amplified to 2.5 V?

Or should I use a 5 V range ADC (16-bit) with a gain of 2,500,000, so that the peak-to-peak value of the largest sine wave (1 µV amplitude) is amplified to 5 V?
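For concreteness, the arithmetic behind the two options can be sketched as below (a minimal Python sketch using only the numbers from the question; nothing here is from a datasheet):

```python
# Numbers from the question: 1 uV amplitude -> 2 uV peak-to-peak, 16-bit ADC.
amplitude_v = 1e-6
vpp = 2 * amplitude_v

for fullscale in (2.5, 5.0):
    gain = fullscale / vpp        # amplifier gain needed to fill the ADC range
    lsb = fullscale / 2**16       # one ADC code width at this full scale
    print(f"FS = {fullscale} V: gain = {gain:,.0f}, "
          f"LSB = {lsb * 1e6:.1f} uV ({lsb / gain * 1e12:.1f} pV at the input)")
```

Note that the input-referred LSB (about 30.5 pV) works out identical in both cases, which is one reason the decision hinges on ADC and amplifier noise rather than on nominal resolution.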

What aspects should be considered in the decision?

How should I choose the ADC's number of bits?

One more question: how should I decide the number of bits to use? The noise is fairly large, but I am going to use signal-processing techniques that can reduce it greatly, which means more bits should still be useful. (Thanks to gbulmer, you made me realize that I had not really thought about this question.)
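One way to frame the bit-count question is to start from the dynamic range needed after processing and solve for N, using the ideal-ADC relation SNR = 6.02·N + 1.76 dB. This is a hedged sketch, not from the original post; the 86 dB figure is an arbitrary example:

```python
import math

def bits_for_dynamic_range(dr_db):
    # Ideal-ADC relation: SNR = 6.02*N + 1.76 dB; solve for N and round up.
    return math.ceil((dr_db - 1.76) / 6.02)

# Example: needing ~86 dB of dynamic range implies at least a 14-bit converter.
print(bits_for_dynamic_range(86))   # -> 14
```

Real converters fall short of this ideal, so in practice you would add margin on top of the computed N.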

Best Answer

You'll be lucky to find a 16-bit ADC where lowering Vref (the input range) gives better performance than sticking with the higher input range and choosing a better input amplifier.

For instance, the AD7687 has a specified signal-to-noise ratio of typically 95.5 dB with a 5 V reference; if the reference is lowered to 2.5 V, the typical SNR drops to 92.5 dB, i.e. it gets 3 dB worse.
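Expressed in effective bits, that 3 dB difference is about half a bit. A quick check using the standard conversion ENOB = (SNR − 1.76)/6.02 (the datasheet SNR figures above are the only inputs):

```python
def enob(snr_db):
    # Standard conversion from SNR (dB) to effective number of bits.
    return (snr_db - 1.76) / 6.02

print(f"Vref = 5 V:   {enob(95.5):.2f} bits")   # ~15.57
print(f"Vref = 2.5 V: {enob(92.5):.2f} bits")   # ~15.07
```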

The AD7685 has a similar story, as does the AD7988-5, et cetera.

My advice is to find the best ADC you can afford and operate it with Vref set to the value that maximizes performance (usually the highest value permissible), then design your front-end amplifier to deliver the best performance (usually by trading off current consumption against noise).

Sampling rate - this affects perceived quantization noise. If you sample at, say, 30 kHz, all of the relevant quantization noise is contained in that bandwidth; if you sample at 100 kHz instead, the same noise power is spread over a wider bandwidth, so you can use process gain to reduce the noise in your digitized 10 kHz signal: average several samples and decimate in software. The process gain is the same as if you converted back to analogue and used filters after the DAC - the faster the sampling rate, the more noise you can remove by filtering.
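The average-and-decimate step can be illustrated with a small simulation (white noise standing in for broadband quantization noise; the 16x oversampling ratio is an arbitrary choice for illustration). Averaging blocks of N samples should cut the noise RMS by roughly the square root of N:

```python
import math
import random
import statistics

random.seed(0)
n, osr = 20000, 16                      # total samples, oversampling ratio

# White noise standing in for broadband quantization noise
noise = [random.gauss(0.0, 1.0) for _ in range(n)]

# Average each block of `osr` samples, keep one value per block (decimation)
decimated = [sum(noise[i:i + osr]) / osr for i in range(0, n, osr)]

rms = lambda xs: math.sqrt(statistics.fmean(x * x for x in xs))
print(f"noise RMS reduced ~{rms(noise) / rms(decimated):.1f}x "
      f"(sqrt({osr}) = {math.sqrt(osr):.0f} expected)")
```

This only removes noise that is uncorrelated between samples; any noise concentrated in the signal band is untouched by averaging.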
