This ADC code works, but I don’t understand why

adc · embedded · microcontroller · samd21

I have the following code for reading a battery voltage on the ADC of a microcontroller (an Atmel SAM D21, to be precise). The reference voltage is 3.3V and the ADC is sampling at 12-bit resolution:

/**
 * Union for Readings
 *
 */
typedef union u_reading {
    int16_t i;
    uint8_t c[2];
} reading;

/**
 * Read the main battery voltage.
 *
 */
static void read_battery_level()
{
    // Switch on the Control Pin
    gpio_set_pin_level(ADC_CONTROL, 1);
    
    // Battery Voltage
    float batt_voltage = 0.0f;
    
    // Array of Samples
    reading batt_readings[BATTERY_READINGS_COUNT];
    int x;
    
    // Loop through and Collect the Readings
    for (x = 0; x < BATTERY_READINGS_COUNT; x++)
    {
        // Read the ADC Channel
        adc_sync_read_channel(&ADC_BATTERY, 0, batt_readings[x].c, 2);
        delay_us(20);
    }
    
    // Counter for the Sum
    uint32_t sum = 0;
    
    // Loop through and Average the Readings
    for (x = 0; x < BATTERY_READINGS_COUNT; x++)
    {
        // Add the Sum
        sum += batt_readings[x].i;
    }
    
    // Calculate the Mean Reading
    batt_voltage = (sum / (float)BATTERY_READINGS_COUNT) * 0.8;
    
    // Set the Battery Level
    battery_level.i = (uint16_t)batt_voltage;
    
    // Switch off the Control Pin
    gpio_set_pin_level(ADC_CONTROL, 0);
}

The code works and gives me a very accurate reading for battery voltage – I've tried it with a pretty accurate power source and multiple voltages and the reading is good every time. When I switch the reference voltage to 5V, it's no longer accurate, unless I remove the * 0.8 multiplier.

I'm still wrapping my head around how ADCs work, and I was wondering if someone could explain what's going on here.

Why does a multiplier of 0.8 work with a 3.3V reference and a multiplier of 1 work with 5V?

Best Answer

Generally the output of an ADC is \$\frac{v_{in}}{V_{ref}}N\$, where \$N\$ is the full-scale count of the ADC. So for a 12-bit ADC, \$N = 4096\$. If your reference voltage is \$3.3\mathrm{V}\$, then each ADC count represents a voltage step of \$\frac{3.3\mathrm{V}}{4096} \simeq 806\mu\mathrm{V}\$, which is close to the 0.8 multiplier you're using to get millivolts.

It's best to make this calculation explicit in your code. Modern C compilers will let you express it as a series of #defines (or possibly const float expressions) and fold it to the actual value at compile time; modern C++ compilers will let you do the same thing with constexpr float expressions, with better type checking than C #defines.

Something like the following would do, and would eliminate the magic number 0.8 from your code:

#define ADC_REF 3.3     // volts
#define ADC_COUNT 4096
#define ADC_LSB_MV (1000.0 * ADC_REF / ADC_COUNT)

Giving it a 5V reference voltage violates the absolute maximum ratings. The data sheet asks you to limit the reference to VDD - 0.6V. I'm not sure why the -0.6V part, but chips generally have input protection diodes to the highest supply rail (\$\mathrm{V_{DDANA}}\$ in this case), so the chip is probably (unhappily) yanking the +5V reference down to around 4V, which would give you a multiplier of about 1 -- and doing all sorts of strange and possibly bad things to the chip.

For that matter, a 3.3V reference also violates the absolute maximum ratings, though not as badly. Table 37-24 in the datasheet lists the maximum reference voltage as \$\mathrm{V_{DDANA} - 0.6V}\$. So, properly, if you're using a 3.3V analog supply, you should use a reference no higher than 2.7V (2.5V would be convenient, because precision 2.5V references are readily available).