I came across the CS5343 ADC the other day and was impressed with its specs (relative to its cost) but then became confused when I dug deeper and read the datasheet.
It's nominally a 24-bit, 96 kHz ADC, but the datasheet lists a dynamic range of only 98 dB and a THD+N of −92 dB. The 92 dB figure works out to an effective number of bits (ENOB) of about 15 using the usual arithmetic; even the dynamic-range figure only corresponds to about 16 bits' worth.
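For reference, the "normal arithmetic" I mean is the standard ENOB relation, ENOB = (SINAD − 1.76)/6.02; a quick sketch (the 1.76 dB offset and 6.02 dB/bit slope come from the ideal quantisation-noise model):

```python
def enob(sinad_db: float) -> float:
    """Effective number of bits from a SINAD/THD+N figure in dB,
    via the ideal-quantiser relation SINAD = 6.02*N + 1.76 dB."""
    return (sinad_db - 1.76) / 6.02

print(enob(92.0))  # THD+N figure -> ~15 bits
print(enob(98.0))  # dynamic-range figure -> ~16 bits
```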
So I guess the first part of my question is: why the heck would anyone choose to market this product as a 24-bit ADC?
I then read a bit more about how delta-sigma ADCs (and noise shaping) work. I partially understand them now, but I clearly have serious gaps on the practical details, including decimation. For example, the CS5343 takes a master clock of up to around 40 MHz when digitising at around 100 kHz; specifically, a 36.864 MHz clock gives a 96 kHz sample rate via a 384× divider.
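To spell out the clocking arithmetic from the datasheet numbers (MCLK = 384 × Fs):

```python
mclk = 36_864_000   # Hz, master clock from the datasheet
fs = mclk / 384     # 384x MCLK/Fs divider
print(fs)           # 96000.0 -> the 96 kHz sample rate
```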
Now, I'm stuck. My feeble understanding of the 1-bit datastream that emerges from a delta-sigma modulator implies to me that *if you want to achieve 24-bit resolution, you need to count the number of "1s" that occur in each set of 2^24 samples*. That would mean a multiplier of 2^24 (i.e. more than 16 million) between the output data rate and the master clock rate, not 384. (It also implies a master clock rate of around 1.6 THz!)
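A literal rendering of that counting model (which, to be clear, is not how real converters work) shows where the terahertz figure comes from:

```python
n_bits = 24
fs = 96_000                        # Hz, desired output sample rate
naive_mclk = (2 ** n_bits) * fs    # one full 2^24 count window per sample
print(naive_mclk / 1e12)           # ~1.61 -> about 1.6 THz
```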
So here's the second part of my question: given that this 2^N multiplier (where N is the number of ADC bits) is clearly not present in real-world ADCs, can anyone point out the breakage in my italicised text above, or a link which explains it?
Best Answer
Often marketing calls for "24-bit audio", but the customer doesn't know what that means, so the vendor cuts corners: the interface carries 24-bit words even though the converter's noise floor only supports roughly 16 effective bits.
That's a multi-bit converter. The delta-sigma modulator doesn't actually produce a 1-bit output; it uses a quantizer (and matching feedback DAC) with a small number of bits, combined with oversampling and noise shaping to push quantisation noise out of the audio band and improve SNR.
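The "breakage" in the counting model is that noise shaping buys far more resolution per doubling of the oversampling ratio than plain averaging does. A sketch of the standard textbook peak-SNR formula for an ideal order-L modulator with a B-bit quantizer makes the point; note the specific order, quantizer width, and OSR below are illustrative assumptions, not figures from the CS5343 datasheet, and real parts fall short of the ideal numbers:

```python
import math

def ds_snr_db(order: int, bits: int, osr: int) -> float:
    """Ideal peak SNR (dB) of an order-L delta-sigma modulator with a
    B-bit quantizer at oversampling ratio OSR (textbook formula)."""
    L, B = order, bits
    return (6.02 * B + 1.76
            - 10 * math.log10(math.pi ** (2 * L) / (2 * L + 1))
            + (2 * L + 1) * 10 * math.log10(osr))

# A 1st-order, 1-bit modulator at OSR = 64 manages only ~57 dB...
print(ds_snr_db(order=1, bits=1, osr=64))
# ...but a hypothetical 2nd-order, 4-bit modulator at the same OSR
# already exceeds 100 dB, which is why the MCLK/Fs ratio can be a
# few hundred rather than 2^24.
print(ds_snr_db(order=2, bits=4, osr=64))
```

Each doubling of OSR gains about 6L + 3 dB for an order-L modulator, versus only 3 dB for simple averaging, so modest oversampling plus a few quantizer bits is enough to reach the ~98 dB the datasheet claims.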