You said you only care about 1% accuracy, which takes less than 7 bits of the full range. You can therefore use the 1.000-2.024 V signal directly. Even if you have a 10 bit A/D with a 0-3.3 V full range, you still get about 320 counts over that span, which is more than 3 times your requirement. There is no need to shift or scale anything.
If you use a divider to create Vref+ instead of using the 3.3 V supply internally, then you get even more resolution. If you can bring it down to 2.1 V, for example, to leave a little margin, then you get about 500 counts over your range. That's lots more resolution than accuracy unless you use a separate precision reference. Consider that a divider made from 1% resistors will by itself cause significantly more error than the quantization of a 10 bit A/D using that reference. To get 1% accuracy, a fixed external reference is probably the simplest way. A 2.048 V reference is almost perfect here.
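A quick back-of-the-envelope check of those counts (a sketch using only the numbers already given above):

```c
#include <stdio.h>

/* Resolution over the 1.000-2.024 V span for a 10 bit A/D, for the two
   reference choices discussed above. */
int main(void)
{
    const double span  = 2.024 - 1.000;  /* signal span of interest, V  */
    const double codes = 1024.0;         /* 2^10 codes for a 10 bit A/D */

    printf("Vref = 3.3 V: %.0f counts\n", codes * span / 3.3); /* ~318 */
    printf("Vref = 2.1 V: %.0f counts\n", codes * span / 2.1); /* ~499 */
    return 0;
}
```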
Some PICs do have an optional Vref- input, but tying it to anything other than ground is going to decrease accuracy. Basically you'd be trading off accuracy to get more resolution, which makes no sense when you already have lots of resolution and accuracy is on the edge.
Your desire to get the raw A/D counts to represent some arbitrary "round" value is silly. Don't burden your measurement system with having to meet this arbitrary spec. Do the best job of taking the measurement, then the rest is simple conversion in firmware. You have a digital processor that can easily apply a scale and offset instantaneously in human time. The conversion to decimal will probably take more cycles, although that will be instantaneous in human time too.
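As a minimal sketch of that firmware conversion (the 10 bit reading, the 2.048 V reference, and the millivolt output format are my assumptions for illustration, not part of the original design):

```c
#include <stdint.h>

/* Convert a raw 10-bit ADC reading to millivolts, assuming a 2.048 V
   reference.  With 2048 mV across 1024 codes, one count is exactly
   2 mV, so the scale is a single multiply. */
static uint16_t adc_counts_to_mv(uint16_t raw)
{
    return (uint16_t)(raw * 2u);    /* 2048 mV / 1024 counts = 2 mV/count */
}

/* If the quantity of interest is referred to the bottom of the
   1.000-2.024 V window, subtract the offset afterward.  This helper is
   hypothetical, purely for illustration. */
static uint16_t adc_counts_to_window_mv(uint16_t raw)
{
    uint16_t mv = adc_counts_to_mv(raw);
    return (mv > 1000u) ? (uint16_t)(mv - 1000u) : 0u;
}
```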
Basically, think about what you really want to get out, prioritize your requirements accordingly, and don't specify implementation details (like what one A/D count should represent). Your top priority should be accuracy, given your specs, since everything else pretty much falls out with a 10 bit A/D.
Interesting.
I don't think I've ever seen this anomaly before.
It's often convenient to think of a SAR ADC as if it samples the input analog voltage at some instant in time.
In practice, there is a narrow window of time where changes in the input analog voltage -- or noise on the analog voltage reference, or noise on the GND or other power pins of the ADC -- can affect the output digital value.
If the input voltage is slowly rising during that window, then the less-significant bits of the SAR output will be all-ones.
If the input voltage is slowly falling during that window, then the less-significant bits of the SAR output will be all-zeros.
A very narrow noise pulse at the "wrong" time during conversion can have a similar effect.
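The mechanism is easy to see in an idealized simulation: once the upper bits have been decided against an input that has since risen, every remaining lower-bit trial passes, so the tail of the result fills with ones (and symmetrically with zeros for a falling input). A minimal sketch, assuming a SAR with no sample-and-hold that re-reads the input at each bit decision:

```c
#include <stdio.h>
#include <stdint.h>

#define NBITS 10

/* Idealized SAR conversion with no sample-and-hold: the comparator
   re-reads the (drifting) input at every bit decision.  vin0 is the
   input in LSBs at the first decision; drift is LSBs of change per
   bit trial. */
static uint16_t sar_convert(double vin0, double drift)
{
    uint16_t result = 0;
    for (int bit = NBITS - 1; bit >= 0; bit--) {
        double   vin   = vin0 + drift * (double)(NBITS - 1 - bit);
        uint16_t trial = result | (uint16_t)(1u << bit);
        if (vin >= (double)trial)     /* comparator decision */
            result = trial;
    }
    return result;
}

/* Print the result as NBITS binary digits plus decimal. */
static void show(const char *label, uint16_t v)
{
    printf("%-8s ", label);
    for (int b = NBITS - 1; b >= 0; b--)
        putchar(((v >> b) & 1u) ? '1' : '0');
    printf(" (%u)\n", v);
}

int main(void)
{
    show("static:",  sar_convert(515.7,  0.0));  /* clean code, 1000000011 */
    show("rising:",  sar_convert(515.7, +2.0));  /* low bits fill with 1s  */
    show("falling:", sar_convert(515.7, -2.0));  /* low bits fill with 0s  */
    return 0;
}
```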
Right now my best guess is that you're using some sort of analog switches or op amps that don't work quite as well (higher resistance or something) near the high and low power rails as they do near mid-scale, and that this somehow lets in one of the above kinds of noise, causing the less-significant bits to come out all-ones or all-zeros.
I've seen some sigma-delta ADCs and sigma-delta DACs that have good resolution at mid-scale but worse resolution near the rails -- but that effect looks different from what you show.
The "plot of the difference between one sample and the next sample over the entire full scale range" is fascinating.
If I were you, I would make a similar plot that, instead of making the X value the difference between one sample and the next, makes the X value the least-significant 6 bits of the raw ADC output sample.
That would quickly show if the "stuck" values are mostly lots of 1s in the least-significant bits (maybe input is slowly rising?) or lots of 0s in the least-significant bits (maybe input is slowly falling?).
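A minimal sketch of that tally in C (the samples array and its length are placeholders for however the captured data is actually stored):

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Histogram the least-significant 6 bits of each raw ADC sample.  If
   the "stuck" readings are the SAR low bits saturating, the 0x00 and
   0x3F bins will tower over everything else. */
static void low6_histogram(const uint16_t *samples, size_t n)
{
    unsigned bins[64] = {0};

    for (size_t i = 0; i < n; i++)
        bins[samples[i] & 0x3Fu]++;

    for (unsigned b = 0; b < 64; b++)
        if (bins[b] != 0)
            printf("low6 = 0x%02X: %u hits\n", b, bins[b]);
}
```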
I am sampling "pulsed" DC voltages. That means that for each measurement I put a voltage on the DAC, let it settle for at least 100 times its settling time, then tell the ADC to convert -- and when conversion is finished, I put the DAC back to 0 V.
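In outline, the sequence looks something like this (dac_write(), delay_us(), and adc_read_blocking() are hypothetical driver names standing in for the real ones, and DAC_SETTLE_US for the datasheet settling time):

```c
#include <stdint.h>

/* Hypothetical driver hooks -- placeholders for the real ones. */
extern void     dac_write(uint16_t code);
extern void     delay_us(uint32_t us);
extern uint16_t adc_read_blocking(void);

#define DAC_SETTLE_US 10u               /* assumed DAC settling time, us */

/* One pulsed-DC measurement as described above. */
uint16_t measure_pulsed(uint16_t dac_code)
{
    dac_write(dac_code);                /* put the test voltage on the DAC */
    delay_us(100u * DAC_SETTLE_US);     /* wait >= 100x the settling time  */
    uint16_t raw = adc_read_blocking(); /* convert, wait for completion    */
    dac_write(0u);                      /* put the DAC back to 0 V         */
    return raw;
}
```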
My understanding is that when ADC manufacturers say "no missing codes", the test they use involves several capacitors adding up to a huge capacitance directly connected to the ADC input, and some system driving a large resistor connected to that capacitance that very slowly charges or discharges the capacitor -- slowly enough that the ADC is expected to see exactly "the same" voltage (within 1/2 LSB) for several conversion cycles before it sees "the next" voltage (incremented by 1 going up, decremented by 1 going down).
If I were you, I would see if such a "continuous slope" test gives the same weird "stuck code" symptoms as the "pulsed test".
Perhaps that would give more clues as to exactly what component(s) are causing this problem.
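If you capture the samples from such a slow-ramp run, a check along these lines would flag both symptoms at once (a sketch assuming a 10 bit converter; the 4x threshold for calling a code "stuck" is an arbitrary illustrative choice):

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define NCODES 1024u    /* 10-bit converter assumed */

/* Scan samples captured during a slow ramp.  Codes that never appear
   are missing; codes that appear far more often than the ramp rate
   predicts are candidates for "stuck" codes. */
static void ramp_code_report(const uint16_t *samples, size_t n)
{
    static unsigned hits[NCODES];
    double expected = (double)n / (double)NCODES; /* nominal hits per code */

    memset(hits, 0, sizeof hits);
    for (size_t i = 0; i < n; i++)
        hits[samples[i] % NCODES]++;

    for (unsigned c = 0; c < NCODES; c++) {
        if (hits[c] == 0)
            printf("missing code: %u\n", c);
        else if ((double)hits[c] > 4.0 * expected)
            printf("stuck code:   %u (%u hits)\n", c, hits[c]);
    }
}
```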
Please tell us if you ever figure out what caused these symptoms.
Best Answer
Range is VREF in single-ended mode and 2 * VREF in differential mode. If you use differential mode as pseudo-differential (one input at a fixed voltage), you lose one bit.
From https://community.st.com/s/question/0D53W00000OU6Xb/stm32l476rg-differential-adc-functioning-as-singledended
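For reference, configuring a differential channel with the STM32 Cube HAL looks roughly like this (names recalled from the STM32L4 HAL and worth verifying against your device's headers; hadc1 and the channel/sampling-time choices are assumptions):

```c
#include "stm32l4xx_hal.h"

extern ADC_HandleTypeDef hadc1;   /* assumed to be initialized elsewhere */

/* Sketch: select ADC channel 1 as a differential input.  In this mode
   the paired negative input is claimed by the channel, and the result
   spans 2 * VREF as the quoted answer describes. */
void adc_config_differential(void)
{
    ADC_ChannelConfTypeDef sConfig = {0};

    sConfig.Channel      = ADC_CHANNEL_1;
    sConfig.Rank         = ADC_REGULAR_RANK_1;
    sConfig.SamplingTime = ADC_SAMPLETIME_47CYCLES_5;
    sConfig.SingleDiff   = ADC_DIFFERENTIAL_ENDED;
    sConfig.OffsetNumber = ADC_OFFSET_NONE;
    sConfig.Offset       = 0;
    HAL_ADC_ConfigChannel(&hadc1, &sConfig);

    /* The L4 HAL calibrates single-ended and differential separately. */
    HAL_ADCEx_Calibration_Start(&hadc1, ADC_DIFFERENTIAL_ENDED);
}
```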