How to minimize the variance in voltage drop across transistors – or eliminate transistors from this design

arduino, impedance, resistance, resistors, transistors

I have a fairly simple circuit that I'm using with an arduino to measure the impedance of a very low-resistance circuit. The user plugs in a device at A and B, and the arduino can compute the resistance across the user's device by measuring the voltage at A. No resistance on the user device means I read 0V; infinite resistance means I see about 1 volt. The most common user device will register about 1 Ohm. (For the sake of anyone familiar with arduino, I use the internal voltage reference so that 1V reads near the max for analogRead.)

+5V
 |
 z
 z 220 Ohm Resistor (.1% tolerance)
 z
 |
 A----------------------------------------\
 |                                        |
 z                                        |
 z 47 Ohm Resistor (.1% tolerance)        |
 z                                        |
 |                                        |
 |                                        |
GND---------------------------------------B
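
In code, the per-channel conversion looks roughly like this (a sketch only, assuming a 10-bit analogRead against the ~1.1V internal reference of an ATmega328-class board; the deviceResistance helper, pin A0, and the constants are just illustrative):

    const float VSUPPLY = 5.0;   // divider supply voltage
    const float VREF    = 1.1;   // nominal internal ADC reference
    const float RTOP    = 220.0; // .1% resistor from +5V to node A
    const float RBOT    = 47.0;  // .1% resistor from node A to GND

    void setup() {
      analogReference(INTERNAL); // ~1.1V full scale, so ~1V reads near 1023
      Serial.begin(9600);
    }

    // Recover the user device's resistance from the raw ADC reading at A.
    float deviceResistance(int raw) {
      float vA = raw * VREF / 1023.0;          // voltage at node A
      if (vA <= 0.0) return 0.0;               // dead short across A-B
      float rPar = vA * RTOP / (VSUPPLY - vA); // RBOT in parallel with device
      if (rPar >= RBOT) return INFINITY;       // open circuit: nothing plugged in
      return rPar * RBOT / (RBOT - rPar);      // un-parallel to isolate the device
    }

    void loop() {
      Serial.println(deviceResistance(analogRead(A0)), 3);
    }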

The trouble is that the circuit has to measure multiple user devices at the same time. I can't just replicate this chunk of circuit 3 times because changes on any one of the user devices will affect the values read on the other arduino input pins.

My solution to this was to put a transistor between 5V and the first resistor and compensate for the voltage drop across the transistor by choosing a smaller resistor. This makes the circuit look like this:

+5V
 |    (Worst ASCII art transistor ever)
 \
  \|   1K Ohm
   |-----NNN---- (To Arduino Pin to select this channel.)
  /|
 /
 v
 |
 |
 z
 z 150 Ohm Resistor (.1% tolerance)
 z
 |
 A----------------------------------------\
 |                                        |
 z                                        |
 z 47 Ohm Resistor (.1% tolerance)        |
 z                                        |
 |                                        |
 |                                        |
GND---------------------------------------B
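
Each channel gets switched from the arduino roughly like this (a sketch under the assumption that the transistor is a PNP high-side switch that conducts when its base is pulled low through the 1K resistor; the pin assignments are arbitrary):

    const uint8_t SEL[3]   = {2, 3, 4};     // base-drive pins, one per channel
    const uint8_t SENSE[3] = {A0, A1, A2};  // divider taps (node A of each copy)

    void setupChannels() {
      for (uint8_t i = 0; i < 3; i++) {
        pinMode(SEL[i], OUTPUT);
        digitalWrite(SEL[i], HIGH); // HIGH = PNP off = channel powered down
      }
    }

    int readChannel(uint8_t i) {
      digitalWrite(SEL[i], LOW);   // pull the base low: channel on
      delayMicroseconds(50);       // settling time (a guess)
      int raw = analogRead(SENSE[i]);
      digitalWrite(SEL[i], HIGH);  // channel off again
      return raw;
    }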

This circuit gets replicated once for each user input. This works, but my input values are unpredictable because the voltage drops across the transistors aren't consistent. (A measurement of 27 on one channel might correspond to a 2 Ohm user device, while that same device on a different channel might read 47. I could tolerate 10% jitter; anything greater than 60% renders the project unusable.)

I don't think I can get transistors anywhere near .1% tolerance without paying a fortune (assuming I can get them at all).

Because the math is being done on the arduino, I can't easily take 3 measurements (using the transistor-free design) and solve the simultaneous system of equations fast enough. (I'm looking at 500 to 1000 measurements of all three channels every second.)

Is there any way to salvage the project?

Best Answer

No, I don't think you are looking at this correctly at all. If your 5V is stable then there is nothing wrong with your method using fixed resistors and no transistors. The problem is that each ADC input on your MCU has a different error voltage. Check out the potential DC offset error in the data sheet. Here is a typical example from Maxim covering offset error in ADCs and DACs:

[Figure: Maxim diagram illustrating ADC/DAC offset error]

Around 0V, the error could be +/-50mV. In a perfect system, a 1 ohm resistor fed from 5V via a 150 ohm resistor produces a voltage of about 33mV. You got a digital value of 27, and if your ADC is 10-bit with a 5V full scale then 27 represents 132mV (27/1024 × 5V) - how do you rationalize that? Forgive me if it isn't a 10-bit ADC, but if it were 12 bits (again at 5V full scale) then 27 would represent 32.9mV (possibly coincidentally, of course), while 47 would represent 57.4mV! Assuming they are all 1 ohm resistors and accurately matched (or you used the same resistor on a different channel), you have to conclude that you are witnessing ADC offset error in action.
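
One way to test (and partially correct) this is a per-channel zero calibration: with a known short across A and B, record each channel's raw reading and subtract it from later measurements. A rough sketch of the idea (my addition, pin names illustrative; it assumes the offset stays roughly constant near 0V):

    int zeroOffset[3] = {0, 0, 0};

    // Run once with a known short (0 ohm) plugged into every channel.
    void calibrateZero() {
      for (uint8_t i = 0; i < 3; i++) {
        long sum = 0;
        for (uint8_t n = 0; n < 16; n++) sum += analogRead(A0 + i);
        zeroOffset[i] = sum / 16;  // average 16 reads to tame noise
      }
    }

    // Offset-corrected reading for channel i.
    int readCorrected(uint8_t i) {
      int raw = analogRead(A0 + i) - zeroOffset[i];
      return raw < 0 ? 0 : raw;
    }

If the corrected channels then agree on the same device, offset error was the culprit; if they still diverge, look elsewhere (reference tolerance, transistor drops).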