Interesting.
I don't think I've ever seen this anomaly before.
It's often convenient to think of a SAR ADC as if it samples the input analog voltage at some instant in time.
In practice, there is a narrow window of time where changes in the input analog voltage --
or noise on the analog voltage reference, or noise on the GND or other power pins of the ADC --
can affect the output digital value.
If the input voltage is slowly rising during that window, the less-significant bits of the SAR output tend to come out all-ones.
If the input voltage is slowly falling during that window, they tend to come out all-zeros.
A very narrow noise pulse at the "wrong" time during conversion can have a similar effect.
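To make that mechanism concrete, here is a minimal Python sketch of an idealized 12-bit SAR conversion whose input drifts during the bit trials. The numbers (5 V reference, 2 mV of drift per bit trial) are made up purely for illustration:

```python
# Minimal sketch of an idealized 12-bit SAR conversion whose input
# drifts during the bit trials. All numbers are made up for illustration.

def sar_convert(v_in_at, bits=12, v_ref=5.0):
    """v_in_at(trial) returns the input voltage seen during bit trial 'trial'."""
    code = 0
    for trial in range(bits):
        bit = 1 << (bits - 1 - trial)             # MSB decided first
        trial_code = code | bit
        v_dac = v_ref * trial_code / (1 << bits)  # internal DAC level under test
        if v_in_at(trial) >= v_dac:               # comparator decision
            code = trial_code                     # keep the bit
    return code

# Once the early (MSB) decisions are locked in, a rising input sits above
# every later DAC level, so the remaining LSBs come out 1; a falling input
# sits below them, so they come out 0.
rising = sar_convert(lambda t: 2.500 + 0.002 * t)
falling = sar_convert(lambda t: 2.500 - 0.002 * t)
print(f"rising input:  {rising:012b}")   # low bits tend toward all-ones
print(f"falling input: {falling:012b}")  # low bits tend toward all-zeros
```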
Right now my best guess is that you're using some sort of analog switches or op amps that don't work quite as well near the high and low power rails (higher on-resistance, for example) as they do near mid-scale. That could let in one of the above kinds of noise, which would make the less-significant bits come out all-ones or all-zeros.
I've seen some sigma-delta ADCs and sigma-delta DACs that have good resolution at mid-scale but worse resolution near the rails, though that effect looks different from what you show.
The "plot of the difference between one sample and the next sample over the entire full scale range" is fascinating.
If I were you, I would make a similar plot that, instead of making the X value the difference between one sample and the next, makes the X value the least-significant 6 bits of the raw ADC output sample.
That would quickly show whether the "stuck" values are mostly lots of 1s in the least-significant bits (maybe the input is slowly rising?) or lots of 0s in the least-significant bits (maybe the input is slowly falling?). Something like the sketch below would do it.
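Here is one quick way to make that plot, as a rough Python sketch; the `samples` list is placeholder data standing in for your captured raw ADC codes:

```python
from collections import Counter

# Replace this placeholder list with your actual captured raw ADC codes.
samples = [0x3FF, 0x400, 0x43F, 0x47F, 0x480, 0x4BF]

# Text histogram of the least-significant 6 bits of each sample.
low6 = Counter(code & 0x3F for code in samples)
for value in range(64):
    print(f"{value:06b}  {'#' * low6.get(value, 0)}")
```

If the stuck codes are the kind described above, the bars at 0b111111 and/or 0b000000 should stand out immediately.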
I am sampling "pulsed" DC voltages. That means that for each
measurement I put a voltage on the DAC, let it settle for at least 100
times it's settle time, then tell the ADC to convert - and when
conversion is finished, I put the DAC back to 0 V.
My understanding is that when ADC manufacturers say "no missing codes",
the test they use involves several capacitors adding up to a huge capacitance connected directly to the ADC input,
and some system driving a large resistor into that capacitance to charge or discharge it very slowly --
slowly enough that the ADC is expected to see exactly "the same" voltage (within 1/2 LSB) for several conversion cycles before it sees "the next" voltage (one code up going up, one code down going down).
If I were you, I would see whether such a "continuous slope" test gives the same weird "stuck code" symptoms as the "pulsed" test.
Perhaps that would give more clues as to exactly which component(s) are causing this problem. A quick check of the captured ramp, along the lines of the sketch below, would flag both missing and stuck codes.
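A rough sketch of what that check might look like, assuming you've recorded the raw codes from a slow full-scale ramp into a list (the factor-of-10 "stuck" threshold is an arbitrary choice for illustration):

```python
# Check a slow-ramp ("continuous slope") capture for missing and stuck
# codes. 'codes' is assumed to be a non-empty list of raw ADC outputs
# recorded while the input ramps slowly over the full scale.

def analyze_ramp(codes, bits=12):
    seen = set(codes)
    missing = [c for c in range(1 << bits) if c not in seen]

    # Call a code "stuck" if it repeats far longer than the ramp rate
    # predicts; the factor of 10 here is an arbitrary threshold.
    expected_run = len(codes) / (1 << bits)
    stuck = []
    run_code, run_len = codes[0], 1
    for c in codes[1:]:
        if c == run_code:
            run_len += 1
        else:
            if run_len > 10 * expected_run:
                stuck.append((run_code, run_len))
            run_code, run_len = c, 1
    if run_len > 10 * expected_run:
        stuck.append((run_code, run_len))
    return missing, stuck

# Usage: missing, stuck = analyze_ramp(captured_codes)
```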
Please tell us if you ever figure out what caused these symptoms.
If the internal ADC of your microcontroller performs the job you need it to, then no, there is no need for an external ADC. But then, you're not who external ADCs are aimed at.
You have covered most of the reasons for an external ADC, but there are a few more, and in my opinion, they are some of the most important reasons:
- You need a different sampling technology - for instance the internal ADC is SAR, but you need to do Delta Sigma.
- The internal ADC, because it shares the same die as the rest of the MCU, will never be 100% free from the MCU's noise, whereas an external one can be made ultra-low-noise.
- Your microcontroller / SoC / FPGA of choice has no ADC. The latter two are the most likely case: most common SoCs and FPGAs don't have any ADC at all. Yes, you can get ones that do, but many don't, so you add an external one.
For the third point, take the Raspberry Pi as an example: it has no ADC at all, so you have to add an external one to do any analog work.
Best Answer
Elementary, Watson. You sort of had the idea with #2, except that you don't want a negative gain but rather a gain between 0 and 1. In other words, you want to attenuate the 0-15 V input signal to match the input range of your A/D.
This is easily accomplished with two resistors in a "resistor divider" configuration. If your A/D has a native range of 0-5 V, then you want to divide the input voltage by 3. This can be accomplished, for example, with 2 kΩ in series followed by 1 kΩ to ground.
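To check the arithmetic, the divider output is Vout = Vin × R2 / (R1 + R2); a quick sketch with those example values:

```python
# Divider check: Vout = Vin * R2 / (R1 + R2), with R1 = 2 kOhm in series
# and R2 = 1 kOhm to ground.
R1, R2 = 2000.0, 1000.0
for v_in in (0.0, 7.5, 15.0):
    v_out = v_in * R2 / (R1 + R2)
    print(f"{v_in:5.1f} V in -> {v_out:5.3f} V out")
# 15 V in -> 5.000 V out, exactly matching a 0-5 V A/D input range.
```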
Anything you do to a signal will always change it slightly. In this case, some of the high frequencies will be lost. However, at impedances of tens of kΩ, this won't be a problem with a sample rate of 1 kHz or less. That implies an upper frequency limit of 500 Hz maximum, rather less in practice. Even hundreds of kΩ used in the resistor divider should be able to pass such low frequencies without losing the part you care about.
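As a rough sanity check on that claim, the divider's Thevenin (source) resistance together with whatever capacitance loads it forms a low-pass filter. Here is a sketch assuming a divider built from hundreds-of-kΩ values and 50 pF of ADC input plus stray capacitance (both assumed numbers):

```python
import math

# Assumed values: a 200k/100k divider (still divide-by-3, 15 V -> 5 V)
# loaded by an assumed 50 pF of ADC input plus stray capacitance.
R1, R2 = 200e3, 100e3
C_load = 50e-12

R_source = (R1 * R2) / (R1 + R2)      # Thevenin resistance of the divider
f_cutoff = 1.0 / (2.0 * math.pi * R_source * C_load)
print(f"source resistance: {R_source/1e3:.1f} kOhm")
print(f"-3 dB cutoff:      {f_cutoff/1e3:.1f} kHz")
# ~66.7 kOhm and ~48 kHz: about two orders of magnitude above the 500 Hz
# of interest, so even a high-impedance divider passes the signal fine.
```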