I have recently heard of the concept of a one-bit ADC, and have seen it implemented in the context of a sort of digital-to-analog converter (oddly enough), and I'm wondering, what is the point? Why not simply use a higher-resolution ADC, if higher resolution is desired?
Electronics – What is a one-bit ADC good for?
adc, dac
Related Solutions
The ADS1256 from TI has eight single-ended 24-bit channels with a high-impedance input buffer and PGA. The OpenEXG project has PIC code to interface it (they use the two-channel version, the ADS1255, but the interface should be the same).
If you want differential inputs, then there is the ADS1298, with 8 channels, PGAs and A/Ds, an internal reference, plus ECG/EEG circuitry which you can ignore. I am not sure you can find any example code for this one, though.
If you are looking for resolution, then a precise, low-noise reference is a must.
Dithering is one way, as in "rawb"'s answer. In audio, the generally accepted standard for plain dithering was a triangular PDF (TPDF) dither with a peak-to-peak amplitude of 1 LSB, added to the high-resolution (e.g. analog) signal before quantisation (e.g. the ADC). The same applied not just to ADCs but to any other truncation process, such as going from studio equipment down to 16 bits for CD mastering.
This triangular PDF signal was easily generated as the sum of two uniform PDF dither signals, each 0.5 LSB pk-pk in amplitude, from independent (or at least uncorrelated) random or pseudorandom generators.
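To make that recipe concrete, here is a minimal Python/NumPy sketch (assuming an idealised quantiser rather than a real ADC front end; the tone level and record length are arbitrary choices for the demo). Two independent uniform dithers of 0.5 LSB pk-pk are summed into a TPDF dither of 1 LSB pk-pk and added before quantisation:

    import numpy as np

    rng = np.random.default_rng(0)
    lsb = 1.0                      # quantiser step size
    n = 1 << 16

    def quantize(x):
        # Idealised mid-tread quantiser: round to the nearest LSB
        return np.round(x / lsb) * lsb

    # Low-level test tone, only 0.25 LSB in amplitude (64 cycles over the record)
    t = np.arange(n)
    tone = 0.25 * lsb * np.sin(2 * np.pi * 64 * t / n)

    # TPDF dither: sum of two independent uniform dithers, each 0.5 LSB pk-pk,
    # giving a triangular PDF of 1 LSB pk-pk overall (as described above)
    dither = rng.uniform(-0.25, 0.25, n) * lsb + rng.uniform(-0.25, 0.25, n) * lsb

    plain = quantize(tone)              # undithered: everything rounds to zero
    dithered = quantize(tone + dither)  # dithered: the tone survives, buried in noise

    # Compare the tone bin in the two spectra
    win = np.hanning(n)
    for name, y in (("plain", plain), ("dithered", dithered)):
        mag = np.abs(np.fft.rfft(y * win))
        print(name, "tone-bin magnitude:", mag[64])

Without dither the 0.25 LSB tone quantises to all zeros and simply vanishes; with TPDF dither the spectrum should show the tone sitting above a roughly flat noise floor, which is exactly the sub-LSB detectability described above.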
A lot of work was done on this in the 1980s, among others by Decca in London, who built their own studio equipment; they showed that with TPDF dither, signals (pure tones) could be detected about 20 dB below the (broadband) noise floor, with no observable harmonic distortion (i.e. nothing distinguishable from noise).
Another way is applicable if the bandwidth of interest is less than the Nyquist bandwidth, as is usually the case in oversampling converters.
Then you can improve massively on the plain dithered results. This approach, noise shaping, generally involves embedding the dithered quantiser in a closed loop with a filter in the feedback path. With a simple filter you can get one extra bit of resolution for each halving of the signal bandwidth relative to the sample rate, as Jon Watte says in a comment, but with a third-order filter you can do considerably better than this.
Consider that a 256x oversampling converter ought to give 8 bits of additional resolution by that rule (256 = 2^8, i.e. eight halvings of bandwidth); however, 1-bit converters operating this way routinely give 16 to 20 bit resolution.
You end up with very low noise in the bandwidth of interest (thanks to high loop gain at those frequencies), and very high noise out of band, which is easy to filter out in a later stage (e.g. in a decimation filter). The exact result depends on the loop gain as a function of frequency.
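As a rough illustration of the closed-loop idea (a first-order sketch in Python/NumPy, not any particular commercial design; the oversampling ratio, tone frequency and block-averaging decimator are arbitrary choices for the demo):

    import numpy as np

    def first_order_noise_shaper(x):
        # 1-bit quantiser embedded in a feedback loop with an integrator:
        # the quantisation error is high-pass shaped, away from low frequencies
        y = np.empty_like(x)
        integrator = 0.0
        fed_back = 0.0
        for i, sample in enumerate(x):
            integrator += sample - fed_back
            y[i] = 1.0 if integrator >= 0.0 else -1.0
            fed_back = y[i]
        return y

    osr = 256                   # oversampling ratio (arbitrary for the demo)
    n = 1 << 16
    t = np.arange(n)
    x = 0.5 * np.sin(2 * np.pi * 37 * t / n)   # slow tone, well inside +/-1

    bits = first_order_noise_shaper(x)

    # Crude decimation: average each block of 256 one-bit samples.
    # A real converter would use a proper decimation filter instead.
    dec_out = bits.reshape(-1, osr).mean(axis=1)
    dec_in = x.reshape(-1, osr).mean(axis=1)
    print("RMS in-band error:", np.sqrt(np.mean((dec_out - dec_in) ** 2)))

The 1-bit stream itself is just a dense square-ish waveform, but after decimation it tracks the input to far better than 1-bit accuracy, because most of the quantisation noise has been pushed up towards Nyquist where the decimation filter removes it.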
Third and higher order filters make it increasingly difficult to stabilise the loop, especially if it starts generating incorrect results during overload (clipping or overflow) conditions. If you're careless or unlucky you can get rail-to-rail noise...
There are lots of papers from circa 1990 onwards about noise-shaping converters by Bob Adams of dbx, Malcolm Hawksford of Essex University and many others, in the JAES (Journal of the Audio Engineering Society) and elsewhere.
Interesting historical note: when CD was first being standardised, the Philips 14-bit CD proposal went head to head with Sony's 16-bit LP-sized disk. They compromised on the slightly larger CD we still have today, with 16 bits and, allegedly at Morita-san's insistence, enough recording time for Beethoven's Ninth Symphony.
Which left Philips with a pile of very nice but now useless 14-bit DACs...
So Philips' first CD players drove these DACs at 4x the sampling rate, with a simple noise-shaping filter (it may have been 2nd order, but probably first order), and achieved performance closer to 16 bits than contemporary 16-bit DACs could. For 1983, ... Genius.
Best Answer
To give a basic example of how a 1-bit ADC can be used to obtain useful information from a waveform, take a look at this circuit. It uses a triangle wave to turn the information into a pulse-width modulated output. This is a similar but simplified version of how other 1-bit ADC techniques work: a (usually fed-back) reference signal is used to compare the input against.
Circuit
Simulation
Magnified Timescale View:
We can see from the top input waveform that the triangle wave is compared against the input at different points throughout its period. As long as the triangle wave is of a considerably higher frequency than the input (the higher the frequency, the more accurate the result), the comparator output spends a proportion of its time high or low that tracks the voltage level of the input waveform.
To see how we can reproduce the original waveform from the PWM data, the comparator output is fed into a low-pass filter, and out pops the sine wave again.
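Since the schematic and scope shots don't survive in text form here, here is a rough Python/NumPy sketch of the same idea (the frequencies and filter cutoff are arbitrary choices, not values taken from the circuit above): compare a sine against a much faster triangle wave to get the 1-bit PWM stream, then low-pass filter it to recover the sine.

    import numpy as np

    fs = 1_000_000                 # simulation rate, Hz (arbitrary)
    f_sig = 1_000                  # input sine, Hz
    f_tri = 50_000                 # triangle reference, much faster than the input
    t = np.arange(0, 0.01, 1 / fs)

    sine = np.sin(2 * np.pi * f_sig * t)
    # Triangle wave swinging -1..+1 at f_tri
    tri = 2.0 * np.abs(2.0 * ((f_tri * t) % 1.0) - 1.0) - 1.0

    # The comparator ("1-bit ADC"): output duty cycle follows the input level
    pwm = np.where(sine > tri, 1.0, -1.0)

    # Simple first-order low-pass (discrete RC, ~5 kHz cutoff) to recover the sine
    fc = 5_000
    alpha = 1.0 - np.exp(-2.0 * np.pi * fc / fs)
    recovered = np.empty_like(pwm)
    acc = 0.0
    for i, v in enumerate(pwm):
        acc += alpha * (v - acc)
        recovered[i] = acc

    # The filtered PWM follows the original sine, apart from carrier ripple and lag
    print("peak |recovered - sine|:", np.max(np.abs(recovered - sine)))

A single RC pole leaves some visible 50 kHz ripple on the recovered sine; a steeper filter or a faster triangle wave cleans it up further, which matches the point above that a higher reference frequency gives a more accurate result.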
For further reading:
Delta-Sigma Converters
Successive Approximation ADC
Single Bit ADCs
Ramp Compare ADC (Counter ADC)