There could be a couple of potential causes for this, depending on the magnitude of the DAC output and how the signal is driven to the ADS1299. The DAC may not have the required drive capability, so added impedance in the signal path may cause this problem. You may need to buffer the DAC's output.
I have posted your question to TI's E2E forum at the link below. My hope is that other community members will see it and that you can provide more information about your design.
http://e2e.ti.com/support/data_converters/precision_data_converters/f/73/t/328451.aspx
Best Regards,
Ryan
Interesting.
I don't think I've ever seen this anomaly before.
It's often convenient to think of a SAR ADC as if it samples the input analog voltage at some instant in time.
In practice, there is a narrow window of time where changes in the input analog voltage --
or noise on the analog voltage reference, or noise on the GND or other power pins of the ADC --
can affect the output digital value.
If the input voltage is slowly rising during that window, then the less-significant bits of the SAR output will be all-ones.
If the input voltage is slowly falling during that window, then the less-significant bits of the SAR output will be all-zeros.
A very narrow noise pulse at the "wrong" time during conversion can have a similar effect.
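To make that mechanism concrete, here is a toy Python model of an ideal successive-approximation conversion with no sample-and-hold, so the input is free to drift during the bit trials. The 12-bit width, starting code, and drift rate are arbitrary choices for illustration, not anything taken from the original question; drift on a held input stands in here for noise on the reference or power pins during conversion.

```python
def sar_convert(sample, n_bits=12):
    """Ideal SAR conversion, MSB first, one bit trial per step.

    'sample(t)' returns the input (in LSB units) as seen at bit trial t.
    A real converter holds the input, but reference/ground noise during
    the conversion window has an effect similar to this drift.
    """
    code = 0
    for trial_num, bit in enumerate(range(n_bits - 1, -1, -1)):
        trial = code | (1 << bit)        # tentatively set the bit under test
        if sample(trial_num) >= trial:   # comparator decision
            code = trial                 # keep the bit
    return code

rising = lambda t: 2000 + 2 * t    # drifts up 2 LSB per bit trial
falling = lambda t: 2000 - 2 * t   # drifts down 2 LSB per bit trial

print(format(sar_convert(rising), "012b"))   # 011111011111: low bits all-ones
print(format(sar_convert(falling), "012b"))  # 011111000000: low bits all-zeros
```

Once the high bits have been decided, an input that keeps moving ends up above (or below) everything the remaining bits can represent, so those bits all come out 1 (or 0).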
Right now my best guess is that you're using some sort of analog switches or op amps that don't work quite as well (higher resistance or something) near the high and low power rails as they do near mid-scale.
That could somehow let in one of the above kinds of noise, which would cause the less-significant bits to come out all-ones or all-zeros.
I've seen some sigma-delta ADCs and sigma-delta DACs that have good resolution at mid-scale but worse resolution near the rails -- but that effect looks different from what you show.
The "plot of the difference between one sample and the next sample over the entire full scale range" is fascinating.
If I were you, I would make a similar plot that, instead of making the X value the difference between one sample and the next, makes the X value the least-significant 6 bits of the raw ADC output sample.
That would quickly show whether the "stuck" values are mostly all-1s in the least-significant bits (maybe the input is slowly rising?) or all-0s in the least-significant bits (maybe the input is slowly falling?).
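A quick way to get the same information without a plot is to histogram those low 6 bits. A minimal sketch, assuming the raw codes are available as integers; the capture file name is a placeholder:

```python
import numpy as np

# Placeholder capture file; 'samples' just needs to be raw integer ADC codes.
samples = np.loadtxt("adc_samples.txt", dtype=np.int64)

low6 = samples & 0x3F                     # keep only the least-significant 6 bits
counts = np.bincount(low6, minlength=64)  # occurrences of each of the 64 values

for value, count in enumerate(counts):
    print(f"{value:06b}: {count}")
# A pile-up at 111111 hints at a rising input during the conversion window;
# a pile-up at 000000 hints at a falling input.
```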
I am sampling "pulsed" DC voltages. That means that for each measurement I put a voltage on the DAC, let it settle for at least 100 times its settling time, then tell the ADC to convert -- and when the conversion is finished, I put the DAC back to 0 V.
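In code terms, each measurement looks roughly like the sketch below. The dac_set/adc_convert hooks are hypothetical stand-ins for whatever driver calls the actual hardware uses, not a real API:

```python
import time

def measure_pulsed(dac_set, adc_convert, code, dac_settling_s):
    """One 'pulsed' DC measurement, as described above.

    dac_set/adc_convert are hypothetical driver hooks for the real hardware.
    """
    dac_set(code)                        # put the test voltage on the DAC
    time.sleep(100 * dac_settling_s)     # wait >= 100x the DAC settling time
    result = adc_convert()               # trigger a conversion and read it back
    dac_set(0)                           # return the DAC output to 0 V
    return result
```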
My understanding is that when ADC manufacturers say "no missing codes",
the test they use involves several capacitors adding up to a huge capacitance connected directly to the ADC input,
and some system driving a large resistor connected to that capacitance that very slowly charges or discharges the capacitor --
slowly enough that the ADC is expected to see exactly "the same" voltage (within 1/2 LSB) for several conversion cycles before it sees "the next" voltage (incremented by 1 LSB going up, decremented by 1 LSB going down).
If I were you, I would see if such a "continuous slope" test gives the same weird "stuck code" symptoms as the "pulsed test".
Perhaps that would give more clues as to exactly which component(s) are causing this problem.
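If you capture a long record of raw codes while the input ramps that slowly, checking it for stuck or missing codes is straightforward. A sketch, where the file name and bit width are assumptions to adjust for your setup:

```python
import numpy as np

# Hypothetical capture of raw (non-negative) codes taken during a slow ramp.
ramp = np.loadtxt("ramp_capture.txt", dtype=np.int64)
n_bits = 16                                        # adjust to your converter

counts = np.bincount(ramp, minlength=1 << n_bits)  # how often each code occurs
lo, hi = int(ramp.min()), int(ramp.max())
missing = (np.flatnonzero(counts[lo:hi + 1] == 0) + lo).tolist()

print(f"codes exercised: {lo}..{hi}")
print(f"missing codes: {missing if missing else 'none'}")
# With a slow enough ramp, every code in [lo, hi] should occur several times;
# zero-count codes (or hugely over-represented ones) mark the trouble spots.
```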
Please tell us if you ever figure out what caused these symptoms.
If you have a multi-input ADC and can have it select among inputs that are amplified by different amounts, that's generally the cleanest approach. In some cases, one might also adjust gain by scaling down the reference voltage to an ADC.
The effectiveness of those approaches, versus simply scaling up the values read from the ADC, will vary significantly with the ADC design. Some kinds of ADC have a noise floor that is independent of the strength of the incoming signal, but other kinds, especially delta-sigma converters, have a noise floor that varies with signal amplitude.

An ideal 16-bit converter would have an SNR of about 96dB on a full-strength signal, but that would drop to 48dB on a -48dB signal. A cheaper 16-bit converter designed for audio, by contrast, might have only a 60dB SNR on a full-strength signal but still manage a 36dB SNR on a -48dB signal (reducing the signal by 48dB would only reduce the SNR by 24dB). On such a converter, feeding in a signal at -12dB and multiplying the readings by four would yield results not as good as feeding in a clean signal 12dB higher, but the SNR might degrade by much less than 12dB.
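To put numbers on that, here is a toy model of the two noise behaviours described above. The 0.5 "tracking" factor is an assumption chosen only to reproduce the 60dB/36dB example, not a property of any particular converter:

```python
def snr_fixed_floor(full_scale_snr_db, signal_db):
    # Noise floor independent of signal: SNR falls dB-for-dB with level.
    return full_scale_snr_db + signal_db

def snr_tracking_floor(full_scale_snr_db, signal_db, tracking=0.5):
    # Noise partially follows the signal: each dB of attenuation costs
    # only (1 - tracking) dB of SNR.
    return full_scale_snr_db + (1 - tracking) * signal_db

print(snr_fixed_floor(96, -48))      # 48.0 dB: the ideal 16-bit case
print(snr_tracking_floor(60, -48))   # 36.0 dB: the audio-converter case
```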
The cleanest way to scale signals is to use analog scaling before the ADC. Scaling digitally won't be as good, but the amount of degradation will depend upon the kind of converter used, and may or may not be objectionable.