Electronic – How is transmission line impedance selected

impedance, impedance-matching

I understand (roughly) why transmission line impedance has to be matched to the source and the load. What I don't understand is how different technologies have chosen to use different impedance. (USB is 90 ohms, Ethernet is 100 ohms, PCIe is 85 ohms, amateur radios and antennas are typically 50 ohms).

Was it related to natural impedances for the source or load? Is there some way to determine the optimal impedance for a whole system if I can control the source, load, and transmission line?

Best Answer

Some impedances are more suited to higher power transmission and some are more suited to producing lower losses: -

[Image: attenuation and power-handling capability plotted against characteristic impedance]

Diagram taken from Techplayon but available from other sources. The one below is taken from Belden's website: -

[Image: Belden chart of coax loss and power handling versus impedance]

So, 50 ohms is a compromise between low loss and decent ability to pass power.
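This compromise can be reproduced numerically. Below is a minimal sketch, assuming an air-spaced coax with fixed outer diameter D, the ideal scaling laws for conductor loss and breakdown-limited peak power, and the textbook air-line formula \$Z_0 = 59.96\ln(D/d)\$; the numbers are illustrative, not from any cable datasheet:

```python
import numpy as np

# Assumptions: air dielectric, fixed outer diameter D, ideal scaling laws.
# Z0 = 59.96 * ln(D/d) ohms for an air-spaced coaxial line.
ratio = np.linspace(1.01, 10, 100_000)      # D/d, outer/inner diameter ratio
z0 = 59.96 * np.log(ratio)                  # characteristic impedance (ohms)
loss = (ratio + 1) / np.log(ratio)          # relative conductor loss
power = np.log(ratio) / ratio**2            # relative breakdown-limited power

z_low_loss = z0[np.argmin(loss)]            # impedance of minimum attenuation
z_max_power = z0[np.argmax(power)]          # impedance of maximum power handling
print(f"lowest loss  : Z0 = {z_low_loss:.1f} ohms")   # about 77 ohms
print(f"highest power: Z0 = {z_max_power:.1f} ohms")  # about 30 ohms
print(f"geometric mean = {np.sqrt(z_low_loss * z_max_power):.1f} ohms")
```

The loss minimum lands near 77 ohms and the power maximum near 30 ohms, and their geometric mean is close to 50 ohms, matching the curves in the charts above.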

USB is 90 ohms, Ethernet is 100 ohms, PCIe is 85 ohms, amateur radios and antennas are typically 50 ohms

USB (for instance) is a differential signalling system, so it tends to have roughly twice the impedance of "standard" coax; interestingly, that puts it close to twinax (dual coax): -

[Image: coax versus twinax impedance comparison]

9207 Belden twinax cable: -

[Image: Belden 9207 twinax cable specifications]
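The "roughly twice" rule of thumb can be sketched with a common empirical approximation for an edge-coupled microstrip pair; the 0.48 and 0.96 coefficients come from a widely used curve fit, so treat the function and its outputs as illustrative rather than exact:

```python
import math

def z_diff(z0_single: float, s_over_h: float) -> float:
    """Differential impedance of an edge-coupled microstrip pair.

    Empirical curve fit: Z_diff ~ 2*Z0*(1 - 0.48*exp(-0.96*s/h)),
    where s is the trace spacing and h is the height above the
    return plane. Coefficients are illustrative, not exact.
    """
    return 2.0 * z0_single * (1.0 - 0.48 * math.exp(-0.96 * s_over_h))

# Widely spaced traces barely couple, so Z_diff approaches 2 * Z0:
print(f"{z_diff(50, 5.0):.1f} ohms")  # about 99.6 ohms
# Tighter coupling pulls the differential impedance below 2 * Z0:
print(f"{z_diff(50, 1.0):.1f} ohms")  # about 81.6 ohms
```

So two 50 ohm single-ended traces, kept well apart, behave as a pair close to 100 ohms differential, which is why differential standards cluster around 85 to 100 ohms.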

Thicker, more robust cables tend to have a bigger core conductor, which increases the capacitance between the inner conductor and the shield/screen and reduces the loop inductance. So a cable with more power-handling capability can generally be said to have more capacitance per metre and less inductance per metre. At RF frequencies the characteristic impedance of a cable is: -

\$Z_0 = \sqrt{\frac{L}{C}}\$

Hence, as L decreases and C increases, \$Z_0\$ gets lower.
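A quick numeric check of that relationship; the per-metre L and C values below are illustrative round numbers, not taken from any datasheet:

```python
import math

def z0(L_per_metre: float, C_per_metre: float) -> float:
    """Lossless characteristic impedance: Z0 = sqrt(L/C)."""
    return math.sqrt(L_per_metre / C_per_metre)

# Illustrative per-metre values (not from a datasheet):
slim = z0(250e-9, 100e-12)    # thin coax: sqrt(2500) = 50 ohms
stout = z0(180e-9, 180e-12)   # fatter core -> more C, less L
print(f"slim:  {slim:.1f} ohms")    # 50.0 ohms
print(f"stout: {stout:.1f} ohms")   # about 31.6 ohms
```

The beefier cable's lower L and higher C push its \$Z_0\$ down toward the high-power end of the charts above.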