Optimal tradeoff between ADC bit depth and sampling rate


I have a 24-bit ADC sampling at 31250 samples per second. I am collecting these samples on a wireless device and sending them in real time to a PC for recording (and later analysis). My band of interest is 0-1000 Hz, and the ADC produces noise that is more or less Gaussian, with a level of around 19 LSB RMS.

My radio link has limited throughput, so I cannot send every sample from the ADC to the PC (that would require 750 kbps); instead I must send a subset of the data (less than about 190 kbps).
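For reference, at 24 bits per sample and ignoring link overhead, the numbers are

$$ 31250 \textrm{ samples/s} \times 24 \textrm{ bits} = 750 \textrm{ kbps}, \qquad {190 \textrm{ kbps} \over 24 \textrm{ bits}} \approx 7900 \textrm{ samples/s} $$

so at the full 24-bit depth I can afford roughly one sample in four.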

The goal is to get the lowest possible noise level on a spectrum plot (e.g. using Welch's method) for a given duration of data (e.g. 10 minutes).

With that goal in mind, is it better to average more 24-bit samples together and send the full bit content (lower sampling rate, more bits per sample), or is it better to average fewer 24-bit samples and discard the noisy low-order bits (higher sampling rate, fewer bits per sample)?

If I can discard noisy bits, how many must I keep for adequate oversampling on the PC side?

Is there an alternative processing method that would provide better results than simple averaging? It seems that Delta-Sigma techniques can only be applied while the analog signal is being digitized, not after the fact (and I cannot change my hardware).

UPDATE:

I have been doing a lot of reading on the subject since I asked this question, and discovered that I had many misconceptions.

The central point that I did not understand is that having a higher sampling rate does not allow one to reduce the white noise in the band of interest. I was confusing the idea of reducing quantization noise by oversampling and decimation with the idea of reducing noise in the power spectrum by averaging over a longer time period.

On the surface, they both seem to indicate that having more data allows one to remove more noise. But in the case of Welch's Method, this is only true if the time window under analysis is lengthened. It doesn't help to increase the sampling rate (and in fact increasing the sampling rate creates a heavier processing burden).
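Concretely, for roughly independent segments the variance of the Welch estimate at each frequency goes as

$$ \operatorname{var}\{\hat S(f)\} \approx {S^2(f) \over K}, \qquad K \approx {T_{\textrm{record}} \over T_{\textrm{segment}}} $$

and the number of averaged segments $K$ depends on the record length and segment duration, not on the sampling rate.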

On the flip side, the technique of oversampling and decimation cannot reduce thermal noise (or any other analog noise). It simply reduces in-band quantization noise by spreading the fixed quantization noise power over a wider frequency range. Once the in-band quantization noise has been reduced below the analog noise level, further oversampling and decimation becomes markedly less useful.
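The standard figure for this is that, with flat (unshaped) quantization noise, the in-band quantization noise power drops with the oversampling ratio as

$$ \Delta\textrm{SNR} = 10 \log_{10}\!\left({f_s \over 2B}\right) \textrm{ dB} $$

i.e. about 3 dB, or half a bit of resolution, per doubling of the sampling rate.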

There is an all-digital technique to reduce quantization noise in the band of interest called Noise Shaping. It is an integral part of the operation of a Delta-Sigma converter, but it can be applied independently, such as when reducing bit depth. As with oversampling and decimation, it can do nothing about thermal noise, as that is already part of the measured signal. Since quantization noise is not a limiting factor for me (I control the on-air bit depth), it is of limited usefulness here.
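For completeness, here is a minimal sketch of what noise shaping looks like purely in software: a first-order error-feedback requantizer (the function name and the use of numpy are my own; this is illustrative, not something I actually need given the above):

```python
import numpy as np

def requantize_noise_shaped(x, drop_bits):
    """Drop `drop_bits` low-order bits from integer samples using
    first-order error feedback.  Each sample's rounding error is added
    to the next sample before rounding, which pushes the requantization
    noise toward high frequencies and away from a low-frequency band.
    Returns the coarse values (in units of 2**drop_bits original LSBs)."""
    step = 1 << drop_bits
    y = np.empty(len(x), dtype=np.int64)
    err = 0
    for i, s in enumerate(x):
        v = int(s) + err                 # add the fed-back error
        q = (v + step // 2) // step      # round to the coarse grid
        y[i] = q
        err = v - q * step               # error carried to the next sample
    return y
```

Plain truncation would instead leave the requantization noise flat across the whole band.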

With this new knowledge, it seems to me that I should reduce the sampling rate as far as possible without distorting or otherwise degrading the signal's passband (i.e. staying safely above the Nyquist rate for the 0-1000 Hz band). Keeping the sampling rate to this minimum saves power by not using the radio's maximum throughput, and as an added bonus it reduces storage requirements and processing complexity.
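With my numbers, for example, decimating by 8 gives

$$ {31250 \over 8} = 3906.25 \textrm{ samples/s} > 2 \times 1000 \textrm{ Hz}, \qquad 3906.25 \times 24 \textrm{ bits} \approx 94 \textrm{ kbps} $$

which comfortably respects both the Nyquist criterion for the 0-1000 Hz band and the 190 kbps link budget.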

Best Answer

Averaging sets of 24-bit samples is essentially applying a filter with a rectangular impulse response, which leads to a frequency response of a sinc function. The peaks in the tails of the sinc function will alias some of the noise down into your band of interest.
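To put numbers on that, here is a quick sketch (assuming numpy/scipy are available; the rates are the ones from the question) that evaluates the magnitude response of the length-8 boxcar at frequencies that fold into the 0-1000 Hz band after decimating by 8:

```python
import numpy as np
from scipy.signal import freqz

fs = 31250          # original sample rate, Hz
N = 8               # samples averaged per output sample

# Length-8 moving average: rectangular impulse response, sinc-shaped
# magnitude response with nulls at multiples of fs/N.
b = np.ones(N) / N
w, h = freqz(b, worN=4096, fs=fs)
mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))

# After decimating by N, energy near multiples of the new sample rate
# (fs/N = 3906.25 Hz) folds back toward DC.  The sinc sidelobes sit in
# exactly those regions, so some wideband noise aliases into 0-1000 Hz.
for f0 in (fs / N, 2 * fs / N):
    idx = np.argmin(np.abs(w - (f0 - 500)))   # this point folds to 500 Hz
    print(f"{w[idx]:8.1f} Hz: {mag_db[idx]:6.1f} dB")
```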

Nevertheless, simple averaging could work well. For example, averaging groups of eight samples at the transmitter reduces the Gaussian noise to

$$ {19 \textrm{ LSB rms} \over \sqrt{8}} = 6.7 \textrm{ LSB rms} $$

Since the resulting noise is still well above one LSB, truncating the average back to the original 24 bits appears okay, and the divide-by-eight also keeps the 27-bit sum from overflowing the 24-bit word. This example uses a power of two for the downsampling factor so that the division for the average is a simple right shift.
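A quick numerical check of that figure (a sketch; the 19 LSB rms is the value from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ADC noise only: ~19 LSB rms Gaussian, as integer samples.
x = np.round(rng.normal(scale=19.0, size=1 << 16)).astype(np.int64)

# Average non-overlapping groups of 8: the 27-bit-wide sum is brought
# back to 24-bit width by a right shift of 3 (the divide-by-eight).
y = x.reshape(-1, 8).sum(axis=1) >> 3

print(np.std(x), np.std(y))   # roughly 19 and 19 / sqrt(8) ~ 6.7 LSB
```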

Downsampling by more than a factor of about eight (with this simple filter) risks getting too close to the Nyquist frequency for the 1-kHz passband.

Averaging fewer samples should alias less noise into the passband, but if you then have to truncate low bits to meet your bandwidth limit, you might end up with an LSB that is greater than the noise floor, which is bad.
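As a point of reference, an ideal rounding quantizer with step $\Delta$ contributes quantization noise of about

$$ {\Delta \over \sqrt{12}} \approx 0.29\,\Delta $$

so once the transmitted LSB is much larger than the remaining analog noise, quantization error dominates and, with too little noise left to dither it, tends to show up as signal-correlated distortion rather than a benign white floor.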

If you have enough processing power at your transmitter, the best way to do this is with a lowpass FIR decimation filter that preserves your band of interest while avoiding the aliasing of noise.
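A sketch of what that could look like (assuming numpy/scipy; the tap count and cutoff are illustrative choices, not tuned values):

```python
import numpy as np
from scipy.signal import firwin, resample_poly

fs_in = 31250   # ADC sample rate, Hz
decim = 8       # output rate 3906.25 samples/s, new Nyquist ~1953 Hz
band = 1000     # band of interest, Hz

# Lowpass FIR that passes 0-1000 Hz and attenuates everything that would
# fold into that band after downsampling (roughly fs_in/decim - band,
# i.e. about 2.9 kHz, and above).
taps = firwin(numtaps=255, cutoff=1200, fs=fs_in)

def decimate_block(x):
    """Filter and downsample one block of raw samples."""
    return resample_poly(x, up=1, down=decim, window=taps)
```

If its default filter is good enough, scipy.signal.decimate(x, 8, ftype='fir') is a one-line alternative.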