In a "purely digital" link where you set an output to "high" and an input the other end of a line is read as "high" then the probability error is purely to do with the SNR of the line. What is the probability that a HIGH can be interpreted as a LOW? By introducing a higher level protocol with error detection and correction you effectively negate most of the SNR errors and the question is now "What is the probability that the protocol cannot correct corrupted bits?"
So yes, the CODEC (or protocol) can be used (and is used) to negate the effects of SNR-induced signal corruption.
As for the second part...
If you assume 1 bit of information is transmitted per quantization level, and 1 bit is received per quantization level, then yes, increasing the number of quantization levels will increase the number of bits sent at any one time. However, the noise on the transmission medium will then have a greater effect on the now smaller quantization steps, so although you reduce the quantization noise, you increase the errors caused by line noise.
However, if you don't assume 1 bit per quantization level, but instead have multiple quantization levels per bit, then you can increase the number of quantization levels while keeping the overall bitrate the same. You then have more detail about each bit, so you can make a better-informed decision about its value.
For instance, you can think of a simple digital link with 2 states (HIGH and LOW) as a 1-bit quantized system. For simplicity we'll call it 1V for HIGH and 0V for low.
Now, you could then have it that anything received >= 0.5V is a HIGH and anything < 0.5V is a LOW. That's 1 bit quantization. 0.5V would be HIGH, but 0.499999999999V would be LOW. That's an infinitesimally small margin for noise.
However, increasing the receiving quantization to 2 bits, say, would give you more detail. It would give you 4 voltage levels to consider - 0V, 0.33V, 0.66V and 1V.
You could now say that anything > 0.66V is a HIGH, and anything less than 0.33V is a LOW. You have now introduced a "noise margin": anything that falls between those values is discarded as noise. The bitrate remains the same, but the susceptibility to line noise has fallen.
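As a sketch of that receive-side decision (the 0.33 V / 0.66 V thresholds are just the figures from the example above):

```c
/* Slice the sampled line voltage into three zones: HIGH, LOW, or
 * "inside the noise margin", which the receiver simply rejects. */
typedef enum { BIT_LOW, BIT_HIGH, BIT_INDETERMINATE } bit_decision;

static bit_decision slice_with_margin(double volts)
{
    if (volts > 0.66)
        return BIT_HIGH;           /* comfortably above the margin */
    if (volts < 0.33)
        return BIT_LOW;            /* comfortably below the margin */
    return BIT_INDETERMINATE;      /* inside the margin: discard as noise */
}
```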
Then of course you can add a "Schmitt trigger" (or its software equivalent), whereby you toggle the value only on a transition: when the input rises above 0.66V you read the value as HIGH, and keep it as HIGH. Only when it then drops below 0.33V do you switch it to LOW.
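A software equivalent of that hysteresis might look like this (again reusing the 0.33 V / 0.66 V thresholds from the example):

```c
#include <stdbool.h>

/* The output only flips when the input crosses the *far* threshold,
 * so noise riding around a single level cannot make the value chatter. */
static bool schmitt(double volts, bool current_state)
{
    if (!current_state && volts > 0.66)
        return true;          /* rose past the upper threshold: latch HIGH */
    if (current_state && volts < 0.33)
        return false;         /* fell past the lower threshold: latch LOW  */
    return current_state;     /* otherwise keep the previous value */
}
```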
For systems where you have discrete voltage levels, you could sample them at a higher resolution, so that the line-induced noise occupies only the least significant bits of the sampled value. Discarding those noisy bits, down to the resolution of the sent data, then reduces the noise in the system. Taking multiple samples and averaging them (known as "oversampling"), which in effect cancels out the random noise, reduces the noise as well.
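Both ideas combine naturally, as in this sketch; the 12-bit ADC, the factor-of-16 oversample and the "4 noisy LSBs" figure are assumptions for illustration:

```c
#include <stdint.h>
#include <stddef.h>

#define OVERSAMPLE 16   /* raw samples averaged per decision             */
#define NOISY_LSBS 4    /* low bits assumed to sit below the noise floor */

/* Average OVERSAMPLE raw 12-bit ADC readings so random noise tends to
 * cancel, then drop the noisy LSBs, leaving a value at the resolution
 * the transmitter actually used. */
static uint16_t denoise(const uint16_t raw[OVERSAMPLE])
{
    uint32_t sum = 0;
    for (size_t i = 0; i < OVERSAMPLE; i++)
        sum += raw[i];
    uint16_t averaged = (uint16_t)(sum / OVERSAMPLE);
    return averaged >> NOISY_LSBS;
}
```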
None of those techniques affect the bitrate as such since you're not adding any extra information to the sent values.
From http://www.hdmi.org/learningcenter/faq.aspx#94:
Q. How will HDMI change the way we interface with our entertainment systems?
The most tangible and immediate way that HDMI changes the way
we interface with our components is in the set-up. One cable replaces
up to 11 analog cables, highly simplifying the setting up of a home
theater as well as supporting the aesthetics of new component design
with cable simplification.
Next, when the consumer turns on the HDMI-connected system, the video
is of higher quality since the signal has been neither compressed nor
converted from digital to analog and back.
Lastly, because of the two-way communication capabilities of HDMI,
components that are connected via HDMI constantly talk to each other
in the background, exchanging key profile information so that content
is sent in the best format without the user having to scroll through
set-up menus. The HDMI specification also includes the option for
manufacturers to include CEC functionality (Consumer Electronics
Control), a set of commands that utilizes HDMI's two-way
communication to allow for single remote control of any CEC-enabled
devices connected with HDMI. For example, CEC includes one-touch play,
so that one touch of play on the DVD will trigger the necessary
commands over HDMI for the entire system to power on and
auto-configure itself to respond to the command. CEC has a variety of
common commands as part of its command set, and manufacturers who
implement CEC must do so in a way that ensures that these common
command sets interoperate amongst all devices, regardless of
manufacturer.
It's pretty clear that in the standard, they were trying to make sure that you would ALWAYS have the right cable when trying to assemble consumer-grade AV systems, and one way to do that is to have one cable style. Also, they want to play up the two-way communication features of HDMI.
Of course, that's speculation, but I can honestly assert that I've never cursed at HDMI connections anywhere near as loudly as I've cursed at USB connections.
DDC's history in HDMI goes back through DVI all the way to VGA. It is implemented in such a way that you can simply hook up a standard I²C EEPROM memory chip on the monitor side, and those are almost as cheap as dirt (AT24C01 and compatible).
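That is also why reading the display's EDID from the source side is so straightforward: the DDC EEPROM simply answers at I²C address 0x50. A minimal sketch, assuming a Linux host with the i2c-dev interface and the DDC bus exposed as /dev/i2c-1 (the bus number depends on your hardware):

```c
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    uint8_t edid[128];

    int fd = open("/dev/i2c-1", O_RDWR);   /* assumed bus number */
    if (fd < 0) { perror("open"); return 1; }

    /* The DDC EEPROM (e.g. an AT24C01-class part) answers at 0x50 */
    if (ioctl(fd, I2C_SLAVE, 0x50) < 0) { perror("ioctl"); close(fd); return 1; }

    uint8_t offset = 0;                     /* start of the EDID block */
    if (write(fd, &offset, 1) != 1) { perror("write"); close(fd); return 1; }
    if (read(fd, edid, sizeof edid) != (ssize_t)sizeof edid) {
        perror("read"); close(fd); return 1;
    }

    printf("EDID header: %02x %02x %02x %02x ...\n",
           edid[0], edid[1], edid[2], edid[3]);
    close(fd);
    return 0;
}
```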
Nope. The +5 Volts tell you a different story. What they might do is run a lower clock frequency on the bus. HDMI cables are usually well shielded, too.
It was there in DVI (with which HDMI is compatible), it works, and it is cheap.