EDID is how the monitor advertises, among other things, the timings it would like; it isn't used to transmit any video information.
I guess it's been so long that no one remembers, but VGA monitors back in the day had adjustments for the image location on the CRT. You could move it left or right, up or down, or scale it horizontally or vertically. More advanced monitors had additional adjustments. These allowed you to compensate for whatever timing was in use, whatever local magnetic field might be distorting the picture, and so on.
Of course, what these adjustments are doing behind the scenes is adjusting the timing parameters. They were necessary because the VGA signal doesn't explicitly say how much blanking time there is. As you've noticed, there are some general conventions, and the monitor can (through EDID) advertise support for particular timings, but there's no requirement that those timings are what will be sent to the monitor.
What you can do, and what LCD monitors that still have VGA inputs do when you press the "auto adjust" button, is guess at the blanking time simply by looking at the R, G, and B signals. These should be black (0 V) during blanking, and they probably aren't black otherwise. Of course, if someone happens to be looking at an all-black screen, this approach won't work.
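As a rough illustration of that guess, here's a minimal sketch assuming you already have digitized voltage samples of one line period; the function name and the 0.02 V threshold are made up for illustration:

```python
import numpy as np

def estimate_blanking(r, g, b, black_threshold=0.02):
    """Mark samples where all three channels sit near 0 V (i.e. look blanked)."""
    return (np.abs(r) < black_threshold) & \
           (np.abs(g) < black_threshold) & \
           (np.abs(b) < black_threshold)

# Toy scanline: 640 samples of mid-grey "picture", then 160 samples of blanking.
active = np.full(640, 0.35)                    # ~0.35 V, mid-grey
line = np.concatenate([active, np.zeros(160)])

mask = estimate_blanking(line, line, line)
print(mask.sum(), "of", mask.size, "samples look like blanking")  # 160 of 800
```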
Similarly, I don't think there's anything in the VGA signal that will tell you the resolution. You can guess at it by looking for the edges between pixels and timing them, and again this is what LCD monitors do. But remember, VGA is an analog signal, designed to be displayed on an analog device. It has no concept of "pixels".
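And a similarly hedged sketch of the edge-timing idea: find the transitions in one sampled line and take the smallest spacing between them as a guess at the pixel period (real monitors are considerably more robust about this):

```python
import numpy as np

def guess_pixel_period(samples, edge_threshold=0.05):
    """Guess the pixel period, in samples, from the transitions in one scanline."""
    edges = np.flatnonzero(np.abs(np.diff(samples)) > edge_threshold)
    if len(edges) < 2:
        return None                # a flat line gives us nothing to time
    return np.diff(edges).min()    # smallest edge spacing ~ one pixel period

# Toy line: alternating black/white pixels, 4 samples per pixel.
line = np.repeat(np.tile([0.0, 0.7], 100), 4)
print(guess_pixel_period(line))    # 4
```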
I've recently looked at the AD5696 (quad 16-bit, 0 V to 2.5 V output DAC), and the data sheet tells you this:
- INL and DNL add up to ±4 LSB (±0.15 mV error at 2.5 V full scale)
- Zero-code offset is ±1.5 mV (at 2.5 V FSR)
- Gain error is ±0.1% of FSR (±1.25 mV at the 1.25 V mid-scale point)
Total "mid-scale" error when extended to 10 V full scale (a ×4 gain, so each 2.5 V-referred error is multiplied by 4) is 0.6 mV + 6 mV + 5 mV = 11.6 mV, but this assumes a perfect amplifier following the DAC, with perfect resistors. 0.1% gain-setting resistors could add another 0.2% of gain error, but Maxim produce accurate, temperature-stable potential dividers with 0.025% ratiometric matching, so I'd consider those.
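To make the arithmetic explicit, here is the same worst-case budget in a few lines of Python (the only assumption beyond the figures above is the ×4 gain from 2.5 V to 10 V full scale):

```python
FSR_DAC = 2.5              # DAC full scale, volts
GAIN = 10.0 / FSR_DAC      # x4 output stage to reach 10 V full scale

lsb = FSR_DAC / 2**16          # one 16-bit LSB ~ 38 uV
inl_dnl = 4 * lsb * GAIN       # +/-4 LSB linearity -> ~0.6 mV at the output
offset = 1.5e-3 * GAIN         # +/-1.5 mV zero offset -> 6 mV
gain_err = 0.001 * (10.0 / 2)  # 0.1% of FSR, taken at the 5 V mid-scale -> 5 mV

total = inl_dnl + offset + gain_err
print(f"worst-case mid-scale error: {total * 1e3:.1f} mV")  # 11.6 mV
```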
Also, as has been said in the comments, the voltage reference is paramount. You can get a voltage reference with an initial accuracy of 0.02%, but of course this adds its own error. Can you live with this unadjusted error?
Temperature and long-term drift account for significant errors. If the DAC is subjected to a temperature change of several degrees, you have to look at how many ppm/°C the gain might shift. The device above is ±1 ppm/°C, so it's pretty good, BUT you must still budget for the error.
Ditto for the voltage reference. I am considering using the LTC6655: it has an initial accuracy of 0.025% (which I will adjust out) and a temperature stability of 2 ppm/°C (max).
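A quick sketch of the drift budget, assuming a 10 V full scale and a temperature excursion you'd have to pick for your own environment:

```python
FS = 10.0        # full-scale output, volts
delta_T = 10.0   # assumed temperature excursion in degC -- pick your own

dac_tc = 1e-6    # DAC gain drift, +/-1 ppm/degC (AD5696 figure above)
ref_tc = 2e-6    # reference drift, +/-2 ppm/degC (LTC6655 max)

drift = (dac_tc + ref_tc) * delta_T * FS
print(f"worst-case gain drift over {delta_T:.0f} degC: {drift * 1e3:.2f} mV")  # 0.30 mV
```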
One final note: if using a single-supply DAC, check what the zero-code error is. This tells you how close to 0 V the DAC output will actually get; you might find that the bottom 5 mV of the range (or the top 5 mV) is a dead-band and unusable.
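And to see what such a dead-band costs after the ×4 gain stage (the 5 mV figure is the placeholder from above, not a datasheet number):

```python
zero_code_error = 5e-3   # volts at the DAC output -- placeholder, read your datasheet
gain = 4.0               # the 2.5 V -> 10 V output stage from above

print(f"unusable span at the output: {zero_code_error * gain * 1e3:.0f} mV")  # 20 mV
```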
Best Answer
Just have a look at the Basys2 manual (p. 8, Fig. 13) to see how it is done there (the Basys2 board is a starter kit for the Spartan-3):
For simple applications with only 3 bits each for R and G and 2 bits for B, a simple resistor network seems to be good enough.
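For a feel of where such resistor values come from, here's a small sketch (illustrative numbers, not necessarily the exact values in the Basys2 manual): the binary-weighted resistors from the FPGA pins form a divider with the monitor's 75 Ω termination, sized so that all-bits-high lands at the 0.7 V VGA white level:

```python
VDD = 3.3     # FPGA output-high level, volts (assumed)
RLOAD = 75.0  # VGA termination inside the monitor
VFS = 0.7     # VGA full-scale (white) level, volts

def weighted_resistors(nbits):
    """Binary-weighted resistors so that all bits high give VFS into RLOAD."""
    # All resistors in parallel form a divider with RLOAD; solve for that
    # parallel value, then unfold it into the 1:2:4... weighted set.
    rp = RLOAD * (VDD / VFS - 1)           # ~279 ohms for these numbers
    r_msb = rp * (2 - 2 ** (1 - nbits))    # geometric sum 1 + 1/2 + ... of weights
    return [round(r_msb * 2 ** i) for i in range(nbits)]  # MSB first, ohms

print(weighted_resistors(3))  # R and G channels, e.g. [488, 975, 1950]
print(weighted_resistors(2))  # B channel, e.g. [418, 836]
```

A real board also has to use standard E-series values and live with the FPGA pin's output impedance, so the manual's values won't match these exactly.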
BTW: I don't understand your concern about "requir[ing] a lot of digital pins" when your FPGA probably has more than a hundred digital I/Os.