Highest ADC input signal frequency for ATmega8

adc, atmega, frequency, sampling

I am trying to make a simple project which involves acquiring some data from the ADC of an ATmega8 chip and then sending that data to the UART. But I cannot figure out the maximum input signal frequency for reliable signal acquisition. I am using the 10-bit ADC but don't really mind 8-bit precision. I should also factor in the time required to transfer my data over the serial port. The datasheet says

By default, the successive approximation circuitry requires an input
clock frequency between 50kHz and 200kHz to get maximum resolution. If
a lower resolution than 10 bits is needed, the input clock frequency
to the ADC can be higher than 200kHz to get a higher sample rate.

Does that mean the sampling is happening at 200kHz? This is all very confusing to me.

Best Answer

What is the ADC Clock?

The section that you are seeing describes the clock fed to the ADC module, which is not the same thing as the sampling rate. That clock drives the successive-approximation logic inside the ADC, and since each conversion takes several clock cycles, the ADC clock has to run quite a bit faster than the rate at which you actually take samples.
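
To make that concrete, here is a minimal avr-gcc sketch of how the ADC clock is typically derived on the ATmega8. The 8 MHz system clock is an assumption on my part; with a /64 prescaler it puts the ADC clock at 125 kHz, inside the 50 to 200 kHz window the datasheet asks for:

```c
#include <avr/io.h>
#include <stdint.h>

/* Assumes an 8 MHz system clock -- adjust the prescaler bits for your
 * actual F_CPU. 8 MHz / 64 = 125 kHz, inside the 50-200 kHz window. */
static void adc_init(void)
{
    ADMUX  = (1 << REFS0);                  /* AVcc reference, channel ADC0 */
    ADCSRA = (1 << ADEN)                    /* enable the ADC module        */
           | (1 << ADPS2) | (1 << ADPS1);   /* prescaler = 64               */
}

static uint16_t adc_read(void)
{
    ADCSRA |= (1 << ADSC);                  /* start a single conversion    */
    while (ADCSRA & (1 << ADSC))            /* ADSC clears itself when done */
        ;
    return ADC;                             /* 10-bit result (ADCL + ADCH)  */
}
```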

How does the Max clock relate to the max sampling frequency?

What the datasheet is saying is that in order to get 10-bit resolution your clock cannot be any faster than 200 kHz. When your clock is at that speed, you will be able to sample your signal at about 15,000 samples per second.

If you don't need all 10 bits of resolution then you can feed the ADC a faster clock and get a faster sampling rate, but the datasheet is not clear about how fast you can go and still get 8-bit resolution.
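
If you do drop to 8 bits, the usual approach is to set ADLAR so the result is left-adjusted and then read only ADCH. The /16 prescaler below (500 kHz from an assumed 8 MHz clock) is my own guess at a workable 8-bit speed, not a datasheet figure, so verify it against your own signal:

```c
#include <avr/io.h>
#include <stdint.h>

/* 8-bit sampling sketch: ADLAR left-adjusts the 10-bit result so the top
 * 8 bits land in ADCH. The /16 prescaler (500 kHz at an assumed 8 MHz)
 * is a guess -- the datasheet gives no hard limit for 8-bit operation. */
static void adc_init_8bit(void)
{
    ADMUX  = (1 << REFS0) | (1 << ADLAR);   /* AVcc ref, left-adjusted      */
    ADCSRA = (1 << ADEN) | (1 << ADPS2);    /* enable, prescaler = 16       */
}

static uint8_t adc_read_8bit(void)
{
    ADCSRA |= (1 << ADSC);                  /* start conversion             */
    while (ADCSRA & (1 << ADSC))
        ;
    return ADCH;                            /* top 8 of the 10 bits         */
}
```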

The clock-to-sample-rate ratio is in fact fixed: a normal conversion takes 13 ADC clock cycles (the very first conversion after enabling the ADC takes 25), which is where 200 kHz / 15 kSPS ≈ 13.3 comes from. At the minimum 50 kHz clock that works out to roughly 3.85 kSPS.

Why a minimum clock to get a 10 bit sample?

The ADC module does a sample-and-hold in which the input voltage is held on a capacitor. If you slow the clock down too much, that voltage starts to bleed off the capacitor before the conversion completes, and the droop means the low-order bits of the 10-bit result can no longer be trusted.

So what does this all mean?

According to the Nyquist-Shannon sampling theorem, your sampling frequency needs to be at least twice the maximum frequency in your signal. You can learn more about why by looking at this question: Puzzled by Nyquist frequency

So in order to get 10 bits of resolution, the highest frequency in your signal can be at most 7.5 kHz. If you need to sample a faster signal you can, but the datasheet does not say how high you can push the clock or how much resolution it costs you.
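
Since the question also asks about the serial port: the UART is often the real bottleneck, not the ADC. With 8N1 framing each byte occupies 10 bit-times on the wire, so at an assumed 38400 baud you can move about 3840 bytes per second, which is only about 1920 two-byte 10-bit samples per second, far below 15 kSPS. Here is a sketch of the whole acquire-and-send loop, with the 8 MHz clock and 38400 baud both being assumptions:

```c
#include <avr/io.h>
#include <stdint.h>

#define BAUD_DIVISOR 12  /* 8 MHz / (16 * 38400) - 1; both figures assumed */

static void adc_init(void)
{
    ADMUX  = (1 << REFS0);                  /* AVcc reference, channel ADC0 */
    ADCSRA = (1 << ADEN) | (1 << ADPS2) | (1 << ADPS1); /* clock = F_CPU/64 */
}

static uint16_t adc_read(void)
{
    ADCSRA |= (1 << ADSC);                  /* start a single conversion    */
    while (ADCSRA & (1 << ADSC))
        ;
    return ADC;
}

static void uart_init(void)
{
    UBRRH = BAUD_DIVISOR >> 8;
    UBRRL = BAUD_DIVISOR & 0xFF;
    UCSRB = (1 << TXEN);                    /* transmit only                */
    UCSRC = (1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0); /* 8N1; URSEL selects
                                               UCSRC on the ATmega8         */
}

static void uart_send(uint8_t b)
{
    while (!(UCSRA & (1 << UDRE)))          /* wait for an empty TX buffer  */
        ;
    UDR = b;
}

int main(void)
{
    adc_init();
    uart_init();
    for (;;) {
        uint16_t s = adc_read();            /* the UART wait throttles the loop */
        uart_send(s >> 8);                  /* high byte, then low byte     */
        uart_send(s & 0xFF);
    }
}
```

Raising the baud rate, or sending only the top 8 bits as a single byte, raises that ceiling, but the point stands: budget the serial link with the same care as the ADC clock.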