Electronic – arduino – Why a high-impedance source at the ADC input causes errors

adc, arduino, conversion, high-impedance, operational-amplifier

Recently I had a problem with two sensors (an LDR and an LM35): when I tried to read them with the Arduino ADC, both measurements were completely wrong. Searching on the internet, the answers relate the error to the high impedance that a voltage divider presents to the ADC's analog pin. So I used an op-amp buffer to provide a low impedance and the result was great, but the doubt is still in my head.
The Arduino's sample-and-hold has a tiny leakage current (0.1 µA); I'd like to know how a high-impedance source affects the ADC.

Best Answer

A simple model to show how an ADC behaves is this:

*(Schematic: the analog source \$V_{\text{Analog}}\$ with its series impedance \$R_{\text{Analog}}\$, feeding a sampling switch, \$R_{\text{ADC}}\$, \$C_{\text{Hold}}\$ and a leakage-current source; created using CircuitLab)*

There, you have a source (\$V_{\text{Analog}}\$) and its impedance (\$R_{\text{Analog}}\$), as well as an internal model of the ADC: a switch that opens and closes every sampling period, and the ADC's input network, which is a combination of \$R_{\text{ADC}}\$, \$C_{\text{Hold}}\$ and the leakage current.

The ADC also has a successive approximation register (SAR), which does a binary search to map the analog voltage value to a binary number.
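The binary search itself is easy to visualize. Below is a minimal, hypothetical C++ sketch of a 10-bit SAR conversion; the function name, bit width, and structure are illustrative only (this is not the actual silicon behavior or any Arduino API):

```cpp
#include <cstdio>

// Hypothetical 10-bit SAR conversion: compare the held input voltage
// against an internal DAC level, deciding one bit per step (MSB first).
unsigned sarConvert(double vHeld, double vRef, unsigned bits = 10) {
    unsigned code = 0;
    for (int bit = bits - 1; bit >= 0; --bit) {
        code |= (1u << bit);                       // tentatively set this bit
        double vDac = vRef * code / (1u << bits);  // internal DAC output
        if (vHeld < vDac)
            code &= ~(1u << bit);                  // guess too high: clear it
    }
    return code;
}

int main() {
    // 2.5 V on a 5 V reference should map near mid-scale (512 of 1024).
    std::printf("code = %u\n", sarConvert(2.5, 5.0));
}
```

Note that the SAR only ever sees the voltage held on \$C_{\text{Hold}}\$; if the capacitor never reached the true input voltage, the search faithfully converts the wrong value.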

Back to your question. When the switch is closed, the capacitor \$C_{\text{Hold}}\$ starts to charge, and the time that takes depends on the impedances and the capacitance value.

If we neglect the leakage for a second and resort to the well-known charging equation for a capacitor, you get:

$$ V_{C_{\text{HOLD}}}=V_{\text{Analog}}\bigg(1-e^{-\dfrac{t}{\tau}}\bigg)$$

Where \$\tau=(R_{\text{Analog}}+R_{\text{ADC}})C_{\text{HOLD}}\$

The time it takes to fully charge the capacitor is about 5\$\tau\$ (roughly 99.3% charged). With that, the problem arises when the sampling time is not long enough to allow the capacitor to charge fully.
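To see where the 5\$\tau\$ rule of thumb comes from, here is a quick numeric sketch of the charging equation (plain C++, time expressed in units of \$\tau\$):

```cpp
#include <cmath>
#include <cstdio>

// Fraction of the source voltage reached after t seconds of charging,
// from V_C(t)/V_Analog = 1 - exp(-t/tau).
double settledFraction(double t, double tau) {
    return 1.0 - std::exp(-t / tau);
}

int main() {
    const double tau = 1.0;  // work in units of tau
    for (int n = 1; n <= 5; ++n)
        std::printf("%d tau: %.1f%%\n", n, 100.0 * settledFraction(n, tau));
    // Prints: 63.2%, 86.5%, 95.0%, 98.2%, 99.3%
}
```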

For example, say your ADC samples every \$T_s = 1\text{ ms}\$, and consider your source impedance \$R_{\text{Analog}}\$ to be very small compared to \$R_{\text{ADC}}\$. In that case, it makes sense to approximate \$\tau\$ as:

$$ \tau\approx (R_{\text{ADC}})C_{\text{HOLD}} $$

And just to make up a number, say that after plugging in values for \$R_{\text{ADC}}\$ and \$C_{\text{HOLD}}\$, \$\tau\$ turns out to be:

$$ \tau\approx 0.15\text{ms} $$
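Just to make that number concrete, one purely illustrative pairing (not actual Arduino values) that produces it would be:

$$ \tau \approx 15\,\text{k}\Omega \times 10\,\text{nF} = 0.15\text{ ms} $$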

Since the sampling period is greater than 5\$\tau\$ (0.75 ms), there is plenty of time for the capacitor to fully charge, and your ADC should capture the correct value. At the end of the sampling period, the switch opens.

Now, as your source impedance increases, so does \$\tau\$. Say \$R_{\text{Analog}}\$ grows so large that you can no longer neglect it, and \$\tau\$ becomes 1 ms. Under this condition (\$\tau=T_s\$), every time you sample, the capacitor will only have reached approximately 63% of the actual analog source voltage. That leads to an incorrect measurement.
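Plugging both scenarios into the charging equation makes the difference concrete (values reused from the example above):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double Ts = 1e-3;  // 1 ms sampling period from the example

    // Low source impedance: tau = 0.15 ms, so Ts > 5*tau.
    std::printf("tau = 0.15 ms: %.2f%% settled\n",
                100.0 * (1.0 - std::exp(-Ts / 0.15e-3)));  // ~99.87%

    // High source impedance: tau = Ts = 1 ms.
    std::printf("tau = 1.00 ms: %.2f%% settled\n",
                100.0 * (1.0 - std::exp(-Ts / 1e-3)));     // ~63.21%
}
```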


In the end, it's about giving the holding capacitor enough time to charge, and the source impedance, as it increases, prevents you from doing so.
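If buffering with an op-amp is not an option, a commonly suggested software-only mitigation is to read the pin twice and discard the first sample, which gives \$C_{\text{Hold}}\$ extra time to approach the source voltage. A sketch of the idea (the pin choice and delay value are illustrative; the op-amp buffer you already used remains the robust fix):

```cpp
// Arduino sketch: double-read workaround for a moderately
// high-impedance source.
const int sensorPin = A0;  // pin choice is illustrative

void setup() {
    Serial.begin(9600);
}

void loop() {
    analogRead(sensorPin);              // dummy read: starts charging C_Hold
    delayMicroseconds(100);             // optional extra settling time
    int value = analogRead(sensorPin);  // second read is closer to the true value
    Serial.println(value);
    delay(500);
}
```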