Electronics – Confusion about the relation between SNR, voltage divider and amplifier concepts

signal-to-noise

Imagine you have an input signal s(t).
And the signal of course has some noise n(t).
So let's say the SNR value is x.

Now if we attenuate this signal with a voltage divider by a factor of 100, say, will the SNR of the attenuated signal remain the same?

Consider the following two scenarios:

1) I log voltage data of a signal varying between -100 mV and +100 mV.

2) In the second case I log the voltage with the same resolution after amplifying this signal by a factor of 10, which means I log it as -1 V to +1 V.

Neglecting the amplifier's noise contribution, which way is better in practice? Why?

Best Answer

Here is the thing: noise is actually a misnomer; signal distortion is what it should really be called.

Every time you pass a signal through any component, or even a trace of wire, the signal that comes out the other end WILL have some distortion compared to the original signal. Some of that is "filtering" and "reflections"; some of it is cross-coupling from other signals outside the intended signal path (noise).

Voltage Divider

If you can pass your signal with a signal-to-noise ratio of X through an ideal divide-by-10 resistor divider, your signal would still come out with the same signal-to-noise ratio of X.
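That is just the ratio being scale-invariant: an ideal divider scales signal and noise by the same factor. A minimal numeric sketch of that point (not from the original answer; the signal and noise levels are made up):

```python
# Sketch: scaling signal and noise together leaves the SNR unchanged.
# The divide-by-10 ratio matches the answer's example; the waveforms are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 10_000)

signal = np.sin(2 * np.pi * 50 * t)            # 1 V peak test tone
noise = 0.01 * rng.standard_normal(t.size)     # noise already riding on the signal

def snr_db(s, n):
    return 10 * np.log10(np.mean(s**2) / np.mean(n**2))

print(snr_db(signal, noise))                   # SNR going into the divider
print(snr_db(signal / 10, noise / 10))         # ideal /10 divider scales both -> same SNR
```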

Unfortunately, in reality there is no such thing as an ideal anything.

After division the signal will be distorted a little by inductances and capacitances and the physics of the resistor itself. Further, you just built a signal mixer to add in whatever noise is on the ground side. How much distortion is introduced depends on the quality of the circuit and the nature of the signal. The "noise" distortion could be lower but usually it will be higher.

Do It Late

Reducing a signal level is generally something to be avoided. As I already mentioned, you are actually mixing in the ground noise, but you are also producing a signal that is now more SUSCEPTIBLE to the ambient noise in the system. As such, if it is absolutely necessary to reduce a signal to feed into some device, like an ADC, it is prudent to do that reduction as late in the signal processing chain as possible, and physically as close as possible to the ADC.
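A rough way to see the benefit of dividing late: the ambient pickup along the board is whatever it is, so the earlier you attenuate, the larger that pickup is relative to your signal. A sketch with an assumed 1 mV RMS of pickup (the figures are illustrative, not from the answer):

```python
# Sketch of the "do it late" argument: a fixed amount of ambient pickup hurts a
# signal much more once the signal has already been divided down.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 10_000)

signal = 1.0 * np.sin(2 * np.pi * 50 * t)          # 1 V peak signal at the source
pickup = 0.001 * rng.standard_normal(t.size)       # assumed ambient noise coupled along the path

def snr_db(s, n):
    return 10 * np.log10(np.mean(s**2) / np.mean(n**2))

# Divide early: the attenuated signal then travels through the noisy environment.
early = snr_db(signal / 10, pickup)
# Divide late: the full-level signal travels, and the divider right at the ADC
# also divides down the pickup that was collected ahead of it.
late = snr_db(signal / 10, pickup / 10)

print(f"divide early: {early:.1f} dB, divide late: {late:.1f} dB")   # late wins by ~20 dB
```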

Amplification

The same goes for amplification. The incoming signal, along with the noise, will again be distorted on the way through the amplifier. Different frequencies will again be distorted by different amounts. We actually design circuits to take advantage of that and call them filters.
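That frequency-dependent behaviour shows up in even the simplest network. A sketch of a first-order RC low-pass (component values are arbitrary examples, not anything from the answer) and how differently it treats different frequencies:

```python
# Sketch: magnitude response of a first-order RC low-pass, |H(f)| = 1 / sqrt(1 + (f/fc)^2).
import numpy as np

R = 1_000.0                      # ohms (assumed)
C = 100e-9                       # farads (assumed)
fc = 1 / (2 * np.pi * R * C)     # cutoff frequency, ~1.59 kHz for these values

for f in (100, 1_000, 10_000, 100_000):          # Hz
    gain = 1 / np.sqrt(1 + (f / fc) ** 2)
    print(f"{f:>7} Hz: {20 * np.log10(gain):6.1f} dB")
```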

As for feeding into an ADC:

ADCs compare an input signal with a reference signal. Obviously, if the reference is noisy you will get LSB comparison errors. If there is an ambient noise level, then that can couple into and produce a larger "noise component" on a low level reference compared to a larger reference. As such, ADCs work better in general at the larger end of their acceptable signal range.
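This is also what separates the asker's two logging scenarios: with a fixed ADC reference, the same waveform captured at a tenth of the range gives up roughly 20 dB of quantization SNR. A sketch with an assumed ideal 12-bit, ±1 V converter (quantization error only, no other noise sources):

```python
# Sketch: quantizing the same sine with a hypothetical 12-bit, +/-1 V ADC.
# A signal using only 1/10 of the range loses ~20 dB of quantization SNR.
import numpy as np

t = np.linspace(0, 1, 100_000)
full_scale = 1.0                     # volts, assumed ADC reference
lsb = 2 * full_scale / 2**12         # 12-bit step size

def quantization_snr_db(amplitude):
    s = amplitude * np.sin(2 * np.pi * 50 * t)
    q = np.round(s / lsb) * lsb      # ideal mid-tread quantizer
    err = q - s
    return 10 * np.log10(np.mean(s**2) / np.mean(err**2))

print(quantization_snr_db(1.0))      # near full scale: ~74 dB (6.02*12 + 1.76)
print(quantization_snr_db(0.1))      # 100 mV into a +/-1 V ADC: ~54 dB
```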

HOWEVER: That does not necessarily mean amplifying the signal so you can maximize the reference of the ADC is the right thing to do. If that same noise is coupled into the signal you are trying to measure before you amplify it, you are back where you started, but now the signal carries the added distortion of the amplification.
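The arithmetic behind that caveat is the mirror image of the divider case: gain applied after the noise has already coupled in amplifies signal and noise alike. A quick sketch (noise figures are made up, and the amplifier's own distortion is not modelled):

```python
# Sketch: a x10 gain applied downstream of the noise pickup does not improve the SNR.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 10_000)

signal = 0.1 * np.sin(2 * np.pi * 50 * t)       # 100 mV signal at the source
coupled = 0.001 * rng.standard_normal(t.size)   # noise picked up before the amplifier

def snr_db(s, n):
    return 10 * np.log10(np.mean(s**2) / np.mean(n**2))

print(snr_db(signal, coupled))                  # before the x10 amplifier
print(snr_db(10 * signal, 10 * coupled))        # after: same SNR, plus whatever the amp adds
```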

Balance

There is a balance in there somewhere. Ultimately, the best method is to limit the number of times you have to mess with the signal on its way into the ADC and keep the ADC's reference at a level where you can tolerate the reference noise. Less "messing" also limits the effects of component tolerances. And of course, keep the signal and reference as "quiet" as you can. Some circuit tuning is often required to optimize the ADC chain.

Cost

Cost can often also be a limiting factor. More accuracy generally means more cost. Part of the design process also involves deciding how much error you can tolerate and how much extra cost you can afford to get there.