The statement:
- The noise density created by ADC quantization will be \$\frac{2.11\times 10^{-4}\,V}{\sqrt{10{,}000\,Hz}} = 2.11\times 10^{-6}\,\frac{V}{\sqrt{Hz}}\$
is incorrect. The analog bandwidth is going to be no more than half the sampling rate. This calculation is not necessary anyway, since you already have the RMS value for this noise.
What you need to do is compute the corresponding RMS value for the analog noise at the ADC input, which is \$5\times10^{-4}\frac{V}{\sqrt{Hz}}\times\sqrt{5000 Hz} = 3.5\times10^{-2}V\$. It will be less if you can band-limit the input signal to something less than the Nyquist bandwidth.
But this gives you a worst-case scenario. It basically says that you have roughly a 100:1 (40 dB) SNR (relative to a full-scale signal) at the ADC input, which would suggest that anything over about 7 bits will be enough.
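As a quick sanity check, here is a minimal Python sketch of that arithmetic, using only the figures quoted above (the 6.02 dB/bit rule of thumb relates SNR to bits):

```python
import math

# Figures quoted above: analog noise density at the ADC input, 10 kHz sample
# rate (so a 5 kHz Nyquist bandwidth), and a 3 V full-scale input range.
noise_density = 5e-4                 # V/sqrt(Hz)
nyquist_bw = 10_000 / 2              # Hz
full_scale = 3.0                     # V

noise_rms = noise_density * math.sqrt(nyquist_bw)   # ~3.5e-2 V
snr_db = 20 * math.log10(full_scale / noise_rms)    # ~38.6 dB, roughly 40 dB
bits_needed = (snr_db - 1.76) / 6.02                # ~6.1, so ~7 bits suffice

print(f"noise RMS = {noise_rms * 1e3:.1f} mV")
print(f"SNR = {snr_db:.1f} dB, bits needed ~ {bits_needed:.1f}")
```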
To address the broader issues you raise: The real question is what probability distribution each source of noise introduces into the stream of samples. The quantization noise is uniformly distributed, and has a peak-to-peak amplitude that's exactly equal to the step size of the ADC: 3V/4096 = 0.732 mV.
In comparison, the AWGN over a 5000 Hz bandwidth has an RMS value of 35 mV, which means that the peak-to-peak value is going to be less than 140 mV 95% of the time and less than about 210 mV 99.7% of the time. In other words, your digital sample words will have a distribution of ±70 mV/0.732 mV = ±95 counts around the correct value, 95% of the time.
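A small simulation makes that spread concrete (assumed: the 12-bit/3 V ADC above, an arbitrary mid-scale DC input, and 35 mV RMS Gaussian noise):

```python
import numpy as np

full_scale, n_bits = 3.0, 12
lsb = full_scale / 2**n_bits                  # ~0.732 mV per step
signal = 1.5                                  # V, mid-scale DC input (assumed)
noise_rms = 35e-3                             # V, the AWGN figure from above

samples = signal + np.random.normal(0.0, noise_rms, 100_000)
codes = np.round(samples / lsb)

low, high = np.percentile(codes, [2.5, 97.5]) - signal / lsb
print(f"95% of codes fall within {low:+.0f}..{high:+.0f} counts of nominal")
```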
EDIT:
- The measurement precision will correspond to \$3\,V/0.05\,mV \approx 2^{16}\$, which has 16-bit resolution.
Be careful — you're comparing a peak-to-peak signal value to an RMS noise value. Your actual peak-to-peak noise value is going to be about 4× the RMS value (95% of the time), so you're really getting about 14 bits of SNR.
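The same comparison in code, using the question's numbers (treating the 4× factor as the 95% peak-to-peak figure above):

```python
import math

span = 3.0                        # V, peak-to-peak signal (full scale)
noise_rms = 0.05e-3               # V, the RMS noise from the question
noise_pp = 4 * noise_rms          # peak-to-peak, 95% of the time

naive_bits = math.log2(span / noise_rms)   # ~15.9, the "16-bit" figure
real_bits = math.log2(span / noise_pp)     # ~13.9, i.e. about 14 bits
print(f"{naive_bits:.1f} bits naive vs {real_bits:.1f} bits peak-to-peak")
```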
- Now let us come back to the real case. When a 12-bit ADC is to be employed, could the 12-bit resolution simply be treated as quantization noise? If this is the case, a 12-bit ADC can also lead to a 16-bit resolution result.
The 12-bit resolution is quantization noise. And yes, its effects are reduced by subsequent narrow-bandwidth filtering.
- What bothers me is "Can I really get a more precise result than the ADC resolution WITHOUT oversampling?"
Yes. Narrow-bandwidth filtering is a kind of long-term averaging, and the wide-bandwidth sampling is oversampled with respect to the filter output. Since the signal contains a significant amount of noise prior to quantization, this noise serves to "dither" (randomize) the signal, which, when combined with narrowband filtering in the digital domain, effectively "hides" the effects of quantization.
It might be a little more obvious if you think about it in terms of a DC signal and a 0.01-Hz lowpass (averaging) filter in the digital domain. The mean output of the filter will be the signal value plus the mean value of the noise. Since the latter is zero, the result will be the signal value. The quantization noise is "swamped out" by the analog noise. In the general case, this applies to any narrowband filter, not just a low-pass filter.
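Here is a minimal sketch of that DC-plus-averaging thought experiment, assuming a 12-bit ADC over 0-3 V and a few LSB of Gaussian noise ahead of the quantizer (both values are illustrative):

```python
import numpy as np

full_scale, n_bits = 3.0, 12
lsb = full_scale / 2**n_bits          # ~0.732 mV
signal = 1.23456                      # V, a DC value that falls between steps
noise_rms = 5 * lsb                   # analog noise present before quantization

samples = signal + np.random.normal(0.0, noise_rms, 1_000_000)
codes = np.round(samples / lsb)       # the 12-bit quantizer

# Plain long-term averaging stands in for the narrowband (0.01 Hz) filter:
estimate = codes.mean() * lsb
print(f"error = {(estimate - signal) * 1e6:.1f} uV (1 LSB = {lsb * 1e6:.0f} uV)")
```

The recovered error is on the order of microvolts, far below the 732 µV step size, which is the point of the argument above.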
A simple example would be a regular voltage comparator, which will output 0 if below 0.5 V and 1 if equal or above 0.5 V. You can view it as a 1-bit ADC working in a range of 0-1 V.
Now consider a perfect input signal of exactly 0.5 V. By our definition it will give 1 at the output. Now we introduce a low-amplitude, zero-mean white noise added to the signal. In this case the output will "jump" between 1 and 0 with 50% probability at each point in time. If we sample this output over time and then compute the average, we will eventually get a value of 0.5, which is beyond the resolution we had with a single sample.
...And dithering is a method of introducing some artificial noise if the natural one is insufficient.
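A quick simulation of the comparator example (the uniform dither amplitude is an assumption for illustration; it makes the average output land exactly on the input voltage):

```python
import numpy as np

# Zero-mean uniform dither spanning 1 V, added ahead of the comparator.
dither = np.random.uniform(-0.5, 0.5, 1_000_000)

for vin in (0.5, 0.62):                           # DC inputs within 0-1 V
    bits = (vin + dither >= 0.5).astype(float)    # the 1-bit comparator
    print(f"vin = {vin:.2f} V -> average output = {bits.mean():.3f}")
```

The averages come out near 0.500 and 0.620, values a single 0-or-1 sample could never represent.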
Sort of ... if you look further down the page in the linked article, you'll find a good explanation of the gain and offset errors, particularly Fig. 5. If you only have gain errors, sometimes the digital range is suppressed, and in some cases the analog input range is suppressed. The former case is explained by your formulae; the latter is not. You need to account for gain differences.
That would be one way. However, if it's the analog range that is suppressed AND you have sufficient noise in the sampled signal to hide your computational noise, you could conceivably post-multiply to get your full 16-bit range (span) back. Because of the noise present, you won't have a full-resolution ADC (ENOB: Effective Number of Bits). If you don't have enough noise, you'll notice this fractional multiplication. You don't mention your application, but in images this wouldn't be acceptable.
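A hedged sketch of that post-multiplication idea, with made-up numbers (`measured_span` stands in for whatever calibration says the suppressed analog range actually covers):

```python
import numpy as np

codes = np.array([100, 20000, 45000])   # raw 16-bit samples, range suppressed
measured_span = 50000                   # hypothetical: highest code the analog
                                        # range actually reaches, per calibration
gain = 65535 / measured_span            # a fractional multiplier, ~1.31

corrected = np.round(codes * gain).astype(int)
# With enough noise in the signal, the fractional steps this introduces are
# hidden; without it, missing/duplicated codes become visible (lower ENOB).
print(corrected)
```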
It just means that the INL is low; it doesn't speak to having to truncate the length, because that is limited by other factors like DNL. What it does mean is that the architecture (circuit technique) has promise for further extension to 17 bits.
Other factors do come into play in your decision. Monotonicity is one. A non-monotonic ADC will have high INL and NOT be correctable.
The article is good, but some of what it says applies only to certain ADC architectures. For example, the statement "a low INL means a low DNL" (to paraphrase the very first sentence of the INL section) is not necessarily true in all cases.