You said you only care about 1% accuracy, which is less than 7 bits of the full range. You can therefore use the 1.000-2.024 V signal directly. Even if you have a 10 bit A/D with a 0-3.3 V full range, you still get about 320 counts, which is more than 3 times your requirement. There is no need to shift or scale anything.
If you use a divider to create Vref+ instead of using the 3.3 V supply internally, then you get even more resolution. If you can bring it down to 2.1 V, for example, to leave a little margin, then you get 500 counts over your range. That's a lot more resolution than accuracy unless you use a separate precision reference. Consider that a divider made from 1% resistors will cause significantly more error than a 10 bit A/D using that reference. To get 1% accuracy, a fixed external reference is probably the simplest way. A 2.048 V reference is almost perfect here.
Some PICs do have an optional Vref- input, but tying it to anything other than ground is going to decrease accuracy. Basically you'd be trading off accuracy to get more resolution, which makes no sense when you already have plenty of resolution and accuracy is on the edge.
Your desire to get the raw A/D counts to represent some arbitrary "round" value is silly. Don't burden your measurement system with having to meet this arbitrary spec. Do the best job of taking the measurement, then the rest is simple conversion in firmware. You have a digital processor that can easily apply a scale and offset instantaneously in human time. The conversion to decimal will probably take more cycles, although that will be instantaneous in human time too.
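As a sketch of how cheap that firmware scale-and-offset step is (the 2.048 V reference matches the suggestion above; the offset is a placeholder, not a value for any particular sensor):

```c
#include <stdint.h>

/* Convert a raw 10 bit A/D reading to millivolts with one integer
   multiply, add, and shift.  With a 2.048 V reference, one count is
   2048/1024 = 2 mV, so the scale here is exact; the offset is shown
   only to illustrate the general y = m*x + b form. */
#define SCALE_NUM  2048   /* mV at full scale */
#define OFFSET_MV  0      /* example offset; adjust for your sensor */

static int32_t adc_to_mv(uint16_t raw)
{
    return ((int32_t)raw * SCALE_NUM >> 10) + OFFSET_MV;
}
```

For example, a mid-scale reading of 512 counts comes out as 1024 mV. That is a handful of instruction cycles on even a small PIC, which is what "instantaneous in human time" means in practice.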
Basically, think about what you really want to get out, prioritize your requirements accordingly, and don't specify implementation details (like what one A/D count should represent). Your top priority should be accuracy, given your specs, since everything else pretty much falls out with a 10 bit A/D.
If the internal ADC of your microcontroller performs the job you need it to, then no, there is no need for an external ADC. But then, that's not the case external ADCs are aimed at.
You have covered most of the reasons for an external ADC, but there are a few more, and in my opinion, they are some of the most important reasons:
- You need a different sampling technology - for instance the internal ADC is SAR, but you need to do Delta Sigma.
- The internal ADC, because it is internal and shares the same die as the main MCU, will never be 100% free from the noise of the rest of the chip, whereas an external one can be made ultra low-noise.
- Your microcontroller / SoC / FPGA of choice has no ADC. The latter two are the most likely case: most common SoCs and FPGAs don't have any ADC at all. Yes, you can get ones that do, but many don't, so you add an external one.
For point 3, take the Raspberry Pi as an example. It has no ADC at all; you have to add an external one to do any analog work.
Think about what a two point calibration means: you measure some parameter and note what ADC value you get, then do the same thing at another parameter value and draw a straight line between the two points (which you then extend to 0 and full scale in a fit of optimism).
This implies an expectation of linearity, which is to say it implies an expectation that the line really is straight all the way.
Now, real electronics near the rails is often not in fact all that linear: things start to saturate, gains fall, feedback starts to misbehave, maybe there is a small offset, maybe even your sensor becomes slightly non-linear. If you take your cal points where all that stuff is going on, then your entire calibration is off, even over the 90% or so of the range where those effects are minimal. If instead you cal at, say, 10% and 90% (or whatever makes sense), you can still read (possibly somewhat inaccurately) over the full range, but are less likely to have an error over most of it.
Concrete example:
Let's say there is a temperature sensor that drives a 10 bit ADC. We measure a temperature T1 = 24 C with a calibrated reference instrument and get an ADC reading V1 = 150, then measure temperature T2 = 100 C and get a value of, say, 827 from the ADC; this constitutes the input to our two point cal. A 10 bit ADC has a single-ended range of 0-1023, so 150 is just a bit bigger than 10% of the range, which is a reasonable cal point, and 827 is just a bit smaller than 90% of full scale, so also a reasonable cal point.
Now, 24 C = 150 and 100 C = 827, so we can trivially calculate the slope: (100 C - 24 C)/(827 - 150) = 0.1123 Celsius per ADC step.
Then calculate the temperature that should give 0 on the ADC, which I make to be about 7 Celsius, and the full-scale value, which I make to be about 122 degrees.
Now, those bits at the ends of the scale assume things are linear, but they probably really are not (which is why we try to keep the cal points away from the total extremes), so take them with caution.
Also notice that our ADC values are quantised, so our cal points are too; the line is really an area touching the corners of the quantisation step at each cal point (a good reason to keep the cal points far apart).