Why is a voltage divider used when reading analog sensors?

Tags: adc, voltage divider

So when reading sensors (ping sensors/light/pressure/etc…) I notice they always use a voltage divider with resistors before running the signal into the ADC input of a microcontroller.

Why can't we just run the sensor straight into the ADC? I understand that if the "device/sensor/whatever" went to a really high voltage the MCU wouldn't be able to understand it, but what about smaller ones that don't really go past 5 V?

Also, what's the best way to calculate what resistor value I should use? I guess whatever gets it between 0 and 5 V?

edit: Apologies, I wasn't very clear. I guess I meant both types of sensors, but I was more so thinking of resistive sensors (LDRs, etc…). But I guess it makes sense, because an MCU can't actually measure resistance (which I now feel stupid for not making the connection, lol).

Best Answer

A resistive sensor does not generate a voltage on its own; it produces a signal by modifying an externally applied voltage. Adding a suitable fixed resistor to form a voltage divider converts the sensor's changing resistance into a measurable signal voltage. Without the voltage divider, the only voltages available would be Vcc and ground.

[Schematic: Vcc (5 V) through a fixed resistor to the signal node, then through the resistive sensor to ground; the node between them feeds the ADC. With equal resistances the node sits at 2.5 V.]
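With the sensor as the lower leg of the divider (the arrangement consistent with the behaviour described below), the signal node follows the standard divider relation:

$$ V_{signal} = V_{cc} \cdot \frac{R_{sensor}}{R_{fixed} + R_{sensor}} $$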

Added:

The reason why simply connecting the resistive sensor across Vcc will not work is this: Vcc is regulated not to change, at least not until it hits a current limit. So, ignoring any constant-current situation, the voltage across the sensor will always be the same as Vcc, no matter what the sensor's resistance does. If you feed that to the ADC, you are just measuring Vcc, which tells you nothing about the sensor.

In the schematic above, the signal sits at 2.5 V. As the sensor's resistance increases or decreases, the signal voltage increases or decreases with it, from a maximum possible value of close to Vcc down to a minimum possible value of close to 0 V.
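A quick worked check, with an illustrative 10 kΩ assumed for both the fixed resistor and the sensor's nominal resistance (these values are assumptions, not taken from the schematic):

$$ V_{signal} = 5\,\mathrm{V} \cdot \frac{10\,\mathrm{k\Omega}}{10\,\mathrm{k\Omega} + 10\,\mathrm{k\Omega}} = 2.5\,\mathrm{V} $$

As $R_{sensor} \to \infty$ the signal approaches Vcc, and as $R_{sensor} \to 0$ it approaches 0 V, matching the limits above.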

A case could probably be made for a completely separate, current-limited 5 V supply just for the sensor, which you could then feed into the ADC. That could work in principle. However, most sensors do not tolerate high current. Even if they do, the power dissipation would heat up the sensor and degrade the linearity of its readings as the temperature changes, making accurate readings rather difficult. (It also does not help that sensor resistance usually falls with increasing light/pressure/temperature, leading to even greater current through the sensor.) Hence, some sort of current limiter in series with the sensor is required anyway. The simplest current limiter is a resistor, and thus we end up with a voltage divider.
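To put an illustrative number on the dissipation problem (the 100 Ω sensor value is an assumption, not from the answer): connected straight across a stiff 5 V supply, such a sensor would dissipate

$$ P = \frac{V^2}{R} = \frac{(5\,\mathrm{V})^2}{100\,\Omega} = 0.25\,\mathrm{W} $$

whereas with a 10 kΩ resistor in series the current drops below 0.5 mA and the sensor dissipates only tens of microwatts.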

Remember that you are not measuring resistance or current; you are measuring voltage with the ADC. So the voltage has to be able to change: you can't fix it at Vcc and feed that into the ADC. To vary voltage, by V = IR, you have to vary either current or resistance (or both). The sensor's resistance already varies. Depending on the range and linearity required, one may opt to drive the sensor with a constant current instead of a simple resistor.
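As a firmware-side sketch of the same point, here is a minimal Arduino-style example that inverts the divider relation to recover the sensor's resistance from the measured voltage. The 10-bit ADC, 5 V reference, 10 kΩ fixed resistor, and pin A0 are all illustrative assumptions, not details from the original answer:

```cpp
// Read a resistive sensor through a voltage divider and recover its
// resistance. Assumed wiring: Vcc -- R_FIXED -- A0 -- sensor -- GND.
const float VCC     = 5.0;      // supply / ADC reference, volts (assumed)
const float R_FIXED = 10000.0;  // fixed divider resistor, ohms (assumed)

void setup() {
  Serial.begin(9600);
}

void loop() {
  int counts = analogRead(A0);                // 0..1023 on a 10-bit ADC
  float vSignal = counts * VCC / 1023.0;      // counts -> volts

  // Invert Vs = Vcc * Rs / (Rf + Rs)  =>  Rs = Rf * Vs / (Vcc - Vs).
  if (vSignal < VCC) {                        // avoid division by zero
    float rSensor = R_FIXED * vSignal / (VCC - vSignal);
    Serial.println(rSensor);                  // sensor resistance, ohms
  }
  delay(500);
}
```

This also suggests an answer to the resistor-sizing question above: a common rule of thumb is to pick the fixed resistor close to the sensor's mid-range resistance, which centres the signal around Vcc/2 (as in the schematic) and maximises the divider's sensitivity there.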