Electronics – Two-point calibration for an analog signal chain

adc

I am new to this forum and have agreed to the rules, but I have a question that I did not find answered the way I need, despite the many hours I spent searching for similar ones here.
In spite of that, I must confess I have learned a lot during that search, because I am a digital guy and analog electronics is something I would love to learn.
My question is about two-point calibration of an analog signal chain (sensor -> op-amp -> ADC) using software methods.
I've read in some other questions and answers on this forum (ADC/DAC Calibration? and ADC calibration in ATTiny88) that you should never use the bottom 50 mV or top 50 mV of the ADC input range for calibration.
The second link in particular is related to my questions.

1 - Question: How do you account for offset error if you do not consider the bottom 50 mV of full scale?

2 - Question: Suppose you need a resolution that calls for an LSB of 31.2 µV (16-bit ADC, Vref = 2.048 V). How do you deal with that if you need to keep working with 16 bits and Vref = 2.048 V?
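(That LSB figure just comes from LSB = Vref / 2^16 = 2.048 V / 65536 ≈ 31.25 µV per count, assuming the converter spans the full code range.)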

Thank you in advance for any help.

Best Answer

Think about what a two-point calibration means: you measure some parameter and read what ADC value you get, then do the same thing at another parameter value and draw a straight line between the two points (which you then extend to 0 and full scale in a fit of optimism).

This implies an expectation of linearity, which is to say it implies an expectation that the line really is straight all the way.

Now, real electronics near the rails is often not all that linear: things start to saturate, gains fall, feedback starts to misbehave, maybe there is a small offset, and maybe even your sensor becomes slightly non-linear. If you take your cal points where all of that is going on, then your entire calibration is off, even over the 90% or so of the range where those effects are minimal. If instead you cal at, say, 10% and 90% (or whatever makes sense), you can still read (possibly somewhat inaccurately) over the full range, but you are less likely to have an error over most of it.
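A minimal sketch of that idea in C (the names and the plain float arithmetic are my own illustrative choices, not any particular vendor's API):

    /* One calibration point: a reference value measured with a trusted
       instrument, and the raw ADC code read at the same moment. */
    struct cal_point {
        float reference;   /* e.g. temperature in Celsius */
        int   adc_code;    /* raw ADC reading */
    };

    /* Slope and offset of the straight line through the two cal points. */
    struct two_point_cal {
        float slope;    /* reference units per ADC step */
        float offset;   /* extrapolated reference value at ADC code 0 */
    };

    /* Build the calibration from two points taken well away from the rails
       (roughly 10% and 90% of full scale). */
    struct two_point_cal make_cal(struct cal_point lo, struct cal_point hi)
    {
        struct two_point_cal c;
        c.slope  = (hi.reference - lo.reference) / (float)(hi.adc_code - lo.adc_code);
        c.offset = lo.reference - c.slope * (float)lo.adc_code;
        return c;
    }

    /* Convert any raw ADC code to a calibrated reading. */
    float apply_cal(struct two_point_cal c, int adc_code)
    {
        return c.offset + c.slope * (float)adc_code;
    }

Note that the offset term here is what would absorb a constant offset error: it falls out of the two measurements themselves, without your having to probe the bottom 50 mV to find it.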

Concrete example:

Let's say there is a temperature sensor that drives a 10-bit ADC. We measure a temperature T1 = 24 C with a calibrated reference instrument and get an ADC reading V1 = 150, then measure T2 = 100 C and get a value of, say, 827 from the ADC; this constitutes the input to our two-point cal. A 10-bit ADC has a single-ended range of 0 to 1023, so 150 is just a bit bigger than 10% of the range, which is a reasonable cal point, and 827 is just a bit smaller than 90% of full scale, so it is also a reasonable cal point.

Now, 24 C = 150 and 100 C = 827, so we can trivially calculate the slope: (100 C - 24 C) / (827 - 150) = 0.1123 Celsius per ADC step.

Calculate the temperature that should give 0 on the ADC, which I make to be about 7 Celsius, and the full-scale value (ADC code 1023), which I make to be about 122 degrees.
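If it helps to see the arithmetic spelled out, here is a self-contained sketch of the same numbers (the variable names are mine, and the float math is purely illustrative):

    #include <stdio.h>

    int main(void)
    {
        /* Cal points from the example above. */
        float t1 = 24.0f;  int v1 = 150;    /* 24 C read as ADC code 150 */
        float t2 = 100.0f; int v2 = 827;    /* 100 C read as ADC code 827 */

        float slope  = (t2 - t1) / (float)(v2 - v1);  /* ~0.1123 C per ADC step */
        float t_zero = t1 - slope * (float)v1;        /* ~7 C at ADC code 0 */
        float t_full = t_zero + slope * 1023.0f;      /* ~122 C at ADC code 1023 */

        printf("slope          = %.4f C/step\n", slope);
        printf("T at code 0    = %.1f C\n", t_zero);
        printf("T at code 1023 = %.1f C\n", t_full);
        return 0;
    }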

Now, those bits at the ends of the scale assume things are linear, but they probably really are not (which is why we try to keep the cal points away from the total extremes), so take them with caution.

Also notice that our ADC values are quantised, and so are our cal points; the line is really an area touching the corners of the quantisation step at each cal point (a good reason to keep the cal points far apart).
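To put a rough number on that, here is a back-of-the-envelope sketch (my own assumption: each cal point is only known to within about half an ADC step, and the reference instrument itself is exact):

    #include <stdio.h>

    int main(void)
    {
        float t1 = 24.0f, t2 = 100.0f;    /* reference temperatures from the example */
        float v1 = 150.0f, v2 = 827.0f;   /* nominal ADC codes at the cal points */

        /* Each reading could sit anywhere within +/-0.5 step of its code,
           so the separation between the cal points is uncertain by about one step. */
        float nominal = (t2 - t1) / (v2 - v1);
        float steep   = (t2 - t1) / ((v2 - v1) - 1.0f);  /* points squeezed together */
        float shallow = (t2 - t1) / ((v2 - v1) + 1.0f);  /* points stretched apart */

        printf("slope %.5f C/step, bounded by %.5f and %.5f\n",
               nominal, shallow, steep);
        return 0;
    }

The spread here is small because the cal points are 677 steps apart; take them only 50 steps apart and the same one-step uncertainty becomes roughly a 2% slope error.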