On many voltage reference ICs (for example, the MAX610x family) there seem to be various reference voltages available (1.25 V, 1.8 V, 2.5 V, 3.3 V, etc.).
What strikes me as odd are the 2.048 V and 4.096 V references. Why do we use references at those voltages instead of simply 2 V and 4 V, which would surely be easier to work with mathematically?
When quantising voltages (i.e. passing them through an ADC), you usually convert the voltage to an integer representation on a binary (power-of-two) scale.
This means the codes fall into the pattern of binary numbers, e.g. an 8-bit DAC has 256 individual levels. Using a reference whose full scale is a power-of-two number of millivolts means the digital codes themselves carry directly meaningful values.
For example, if you have an 11-bit DAC with a 2.048 V reference, then the digital code is exactly the output in millivolts: each step is 2048 mV / 2^11 = 1 mV.
Edit: As pointed out by Andrew Morton, 11 bits provide 2048 codes, whereas representing every millivolt from 0 to 2048 mV inclusive would require 2049 levels, so covering the full span exactly would need an extra bit. However, if you round consistently it is still possible to round every value down and cover 0–2047 mV, or round every value up and cover 1–2048 mV. If you try to squeeze the 2049 levels into 2048 codes, you lose the nice property of the code directly matching the number of millivolts.
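To make the arithmetic concrete, here is a minimal sketch (assuming the common convention V = Vref × code / 2^bits, with hypothetical function names) showing why a 2.048 V reference gives a 1 mV step on an 11-bit converter, while a plain 2.0 V reference does not:

```python
def code_to_voltage(code: int, vref: float = 2.048, bits: int = 11) -> float:
    """Convert a raw DAC/ADC code to volts, assuming V = vref * code / 2**bits."""
    return vref * code / (2 ** bits)

# With a 2.048 V reference, one LSB is exactly 1 mV...
print(round(code_to_voltage(1) * 1000, 6))     # 1.0 mV per step
# ...so the code IS the output in millivolts:
print(round(code_to_voltage(1234) * 1000, 6))  # 1234.0 mV
# The top code is 2047, i.e. 2047 mV -- 2048 mV itself is never reached:
print(round(code_to_voltage(2047) * 1000, 6))  # 2047.0 mV

# With a "round" 2.0 V reference the step is an awkward 2000/2048 mV:
print(round(code_to_voltage(1, vref=2.0) * 1000, 6))  # ~0.976562 mV per step
```

The same reasoning scales to the other binary-friendly references: 4.096 V gives a 1 mV LSB on a 12-bit converter, and 2.048 V gives a 0.5 mV LSB on 12 bits.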