dsPIC33 ADC: Why is the minimum TAD interval such an oddly specific number (117.6 ns) for 12-bit conversion?

adc, device-characteristics, pic

I'm writing the firmware for a data-acquisition board using the dsPIC33FJ64GP804 MCU, and I noticed something strange while reading the electrical characteristics for 12-bit A/D conversion:

[Datasheet excerpt: 12-bit A/D conversion specifications]

The ADC clock period (emphasis mine) is listed as 117.6 ns, which is an oddly specific number, especially considering there's no direct hardware obstacle to running the ADC much faster, e.g. with TAD = TCY, which could be as low as 25 ns at the highest officially allowed clock speed (40 MIPS). So the limit doesn't come from there.
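To make the arithmetic concrete, here is a minimal sketch of the divider math, assuming FCY = 40 MHz (TCY = 25 ns) and the dsPIC33F relation TAD = TCY × (ADCS + 1) from the family reference manual; the values are illustrative only:

```c
/* Sketch: find the smallest ADCS divider that satisfies the 12-bit TAD
 * minimum. Assumes FCY = 40 MHz (TCY = 25 ns) and TAD = TCY * (ADCS + 1). */
#include <stdio.h>

int main(void)
{
    const double tcy_ns  = 25.0;   /* instruction cycle at 40 MIPS       */
    const double tad_min = 117.6;  /* datasheet minimum TAD, 12-bit mode */

    for (unsigned adcs = 0; adcs < 256; adcs++) {
        double tad_ns = tcy_ns * (adcs + 1);
        if (tad_ns >= tad_min) {
            printf("ADCS = %u gives TAD = %.1f ns\n", adcs, tad_ns);
            break;
        }
    }
    return 0;
}
```

At 40 MIPS this lands on ADCS = 4, i.e. TAD = 125 ns, well above the divider's 25 ns floor, so the limit is clearly a spec choice rather than a clocking constraint.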

The characteristics for 10-bit conversion seem more like they've been derived from actual characterization testing:

[Datasheet excerpt: 10-bit A/D conversion specifications]

So where does this weird value come from? Something related to the settling time of the sample-and-hold capacitor (especially considering TSAMP = 3 TAD for 12-bit and TSAMP = 2 TAD for 10-bit)?
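For reference, both timing parameters are firmware-configurable. A minimal setup sketch, assuming the XC16 toolchain; the ADCS and SAMC values are illustrative, not recommendations:

```c
/* Illustrative dsPIC33F ADC timing setup. ADCS = 4 gives TAD = 125 ns at
 * 40 MIPS, meeting the 117.6 ns minimum; SAMC = 3 gives TSAMP = 3 TAD. */
#include <xc.h>

void adc_timing_init(void)
{
    AD1CON1bits.AD12B = 1;  /* 12-bit, 1-sample/hold mode                 */
    AD1CON3bits.ADRC  = 0;  /* derive TAD from the system clock           */
    AD1CON3bits.ADCS  = 4;  /* TAD = TCY * (ADCS + 1) = 125 ns at 40 MIPS */
    AD1CON3bits.SAMC  = 3;  /* auto-sample time = 3 TAD                   */
}
```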

Edit

To clarify, I understand TAD = 25ns would be asking for trouble. My main questions are:

  1. Why is TAD different between the 10-bit and the 12-bit cases at all?
  2. Where does the 12-bit number come from? Could IC characterization (which I guess involves statistical methods and uncertainty) really produce a number that precise?

Best Answer

There are some clues in the datasheet.

For instance:

The AD12B bit (AD1CON1<10>) allows each of the ADC modules to be configured by the user as either a 10-bit, 4-sample/hold ADC (default configuration) or a 12-bit, 1-sample/hold ADC.

Then this:

• In the 12-bit configuration, conversion speeds of up to 500 ksps are supported

• There is only one sample/hold amplifier in the 12-bit configuration, so simultaneous sampling of multiple channels is not supported

I agree the unusually precise value is odd, but converting it to a sample rate yields about 607 ksamples/s, a bit above the maximum stated rate (14 ADC clock periods are required for a 12-bit conversion, so 1/(14 × 117.6 ns) ≈ 607 kHz).
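The arithmetic is easy to check, and it is suggestive: adding the 3 TAD sample time mentioned in the question to the 14 TAD conversion gives 17 TAD per complete sample, which lands almost exactly on the stated 500 ksps. That back-calculation is my speculation, not anything the datasheet states:

```c
/* Sanity check on the throughput arithmetic. The 14 TAD conversion time is
 * from the datasheet; adding the 3 TAD sample time (TSAMP in 12-bit mode)
 * is an assumption about how the 500 ksps figure might have been derived. */
#include <stdio.h>

int main(void)
{
    const double tad = 117.6e-9;

    printf("conversion only (14 TAD) : %.1f ksps\n", 1.0 / (14 * tad) / 1e3); /* ~607.4 */
    printf("sample + convert (17 TAD): %.1f ksps\n", 1.0 / (17 * tad) / 1e3); /* ~500.2 */
    return 0;
}
```

If that reading is right, 117.6 ns may simply be 1/(500 ksps × 17) worked backwards from the headline rate, which would explain the odd precision.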

In the ADC reference manual there is a schematic of the effective ADC input in the two modes:

[Reference manual figure: dsPIC33F ADC effective analog input model in 10-bit and 12-bit modes]

Note the difference in input capacitance: this is necessary because the sample capacitor has to hold its charge for a longer time for the conversion to be accurate, and it therefore also needs a longer time to actually charge up during the sample period.

Looking at the values, it appears the effective capacitance in 12-bit mode is formed from all four sample-and-hold capacitors (4 × 4.4 pF = 17.6 pF, which rounds to the 18 pF shown), which fits the statement that there is only one sample-and-hold in 12-bit mode. This is probably achieved by switches that isolate the other channels' sample-and-hold amplifiers from their capacitors and connect those capacitors in parallel to form a single effective device.

Hence a longer ADC period (longer charge time and a longer hold time).
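To put rough numbers on that: charging an RC to within 1/2 LSB of an N-bit result takes about (N + 1)·ln 2 time constants, i.e. t = R·C·ln(2^(N+1)). The capacitances below are from the reference manual figure; the total resistance (source plus internal interconnect and switch resistance) is an assumed placeholder, not a datasheet value:

```c
/* Rough settling-time sketch: time for an RC to charge to within 1/2 LSB
 * of full scale is t = R * C * ln(2^(N+1)) = R * C * (N+1) * ln(2). */
#include <math.h>
#include <stdio.h>

static double settle_ns(double r_ohm, double c_farad, int bits)
{
    return r_ohm * c_farad * (bits + 1) * log(2.0) * 1e9;
}

int main(void)
{
    const double r = 500.0; /* assumed total R (source + switches), placeholder */

    printf("10-bit, 4.4 pF: %.1f ns\n", settle_ns(r, 4.4e-12, 10)); /* ~16.8 */
    printf("12-bit, 18 pF : %.1f ns\n", settle_ns(r, 18e-12, 12));  /* ~81.1 */
    return 0;
}
```

Even with a guessed resistance, the 12-bit case needs roughly five times the charge time of the 10-bit case, consistent with both the larger capacitor and the extra sample TAD.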

The value in the datasheet may be from calculations or experimentation (I do not know which).