How is 200 GBit/s over twin-axial copper cable achieved?

Tags: ethernet, high frequency, transmission line

The 200GBASE-CR4 standard [1] defines Ethernet communication at 200 GBit/s using four lanes and PAM4 modulation over a maximum distance of 3 m.

200 GBit/s over four lanes gives 50 GBit/s per lane, and a single PAM4 symbol encodes two bits; therefore, I expect that a single lane is modulated at approximately 25 GHz.

According to [3], 40GBASE-T is the Ethernet standard with the highest data rate over twisted-pair copper cables. It uses four lanes, so we have 10 GBit/s per lane with PAM16 encoding. Thus there are four bits per symbol, and I expect that a lane is modulated at around 2.5 GHz.
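As a sanity check on both of these back-of-the-envelope estimates, here is a minimal sketch of the per-lane symbol-rate arithmetic. The real standards add FEC and line-coding overhead, so the actual symbol rates are somewhat higher (roughly 26.6 GBd per lane for 200GBASE-CR4 and 3.2 GBd for 40GBASE-T, if I recall correctly):

```python
def lane_symbol_rate(total_gbit_s, lanes, bits_per_symbol):
    """Per-lane symbol rate in Gbaud, ignoring FEC/coding overhead."""
    lane_gbit_s = total_gbit_s / lanes    # bit rate carried by one lane
    return lane_gbit_s / bits_per_symbol  # symbols per second, in Gbaud

# 200GBASE-CR4: four lanes, PAM4 (2 bits per symbol)
print(lane_symbol_rate(200, 4, 2))  # -> 25.0 Gbaud
# 40GBASE-T: four lanes, PAM16 (4 bits per symbol)
print(lane_symbol_rate(40, 4, 4))   # -> 2.5 Gbaud
```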

From my understanding, such high frequencies are very difficult to handle because the wavelength becomes comparable to the dimensions of the cable, and one normally has to use coaxial cables or other waveguides.
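To put a number on that intuition, here is a quick sketch of the wavelength inside a cable dielectric; the relative permittivity of roughly 2.1 (PTFE-like insulation) is an assumed value, not taken from any particular cable:

```python
from math import sqrt

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz, eps_r):
    """Wavelength inside a dielectric with relative permittivity eps_r."""
    return C0 / (freq_hz * sqrt(eps_r))

# Nyquist frequency of a 25 GBd lane, PTFE-like dielectric (eps_r ~ 2.1, assumed)
print(wavelength_m(12.5e9, 2.1) * 100)  # -> ~1.66 cm, comparable to connector dimensions
```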

Particular problems I would expect at these high frequencies are high attenuation because of the skin effect [2] and an impedance mismatch due to thermal and mechanical effects on the cable.
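For the skin effect specifically, the depth to which current penetrates a conductor is δ = sqrt(ρ / (π·f·μ)), so it shrinks with the square root of frequency and the effective resistance rises. A small sketch for copper, using textbook material constants:

```python
from math import pi, sqrt

RHO_CU = 1.68e-8      # resistivity of copper, ohm*m
MU_0 = 4 * pi * 1e-7  # vacuum permeability, H/m (copper is essentially non-magnetic)

def skin_depth_m(freq_hz):
    """Skin depth in copper: delta = sqrt(rho / (pi * f * mu))."""
    return sqrt(RHO_CU / (pi * freq_hz * MU_0))

for f in (2.5e9, 12.5e9, 25e9):
    print(f"{f / 1e9:5.1f} GHz: {skin_depth_m(f) * 1e6:.2f} um")
# -> ~1.30 um at 2.5 GHz, ~0.58 um at 12.5 GHz, ~0.41 um at 25 GHz
```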

Questions:

  1. Is the 200GBASE-CR4 standard, i.e. communication at 200 GBit/s through twin-axial copper cable, achievable?
  2. What effects make it difficult to transmit such high-frequency signals in practice? Are there any books which cover this particular topic?

Best Answer

The 200GBASE-CR4 standard [1] defines Ethernet communication at 200 GBit/s using four lanes and PAM4 modulation over a maximum distance of 3 m. A single PAM4 symbol encodes two bits; therefore, I expect that a single lane is modulated at approx 25 GHz.

The symbol rate is 25 Gbaud. But you will find that the 3-dB bandwidth of the channel needed to achieve that is significantly less than 25 GHz. It's probably between 12 and 19 GHz, but I'm not familiar with this specific standard.
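The gap between the symbol rate and the required bandwidth follows from the Nyquist criterion: an ideal channel needs only half the symbol rate, and practical pulse shaping adds a roll-off factor on top of that. A sketch; the roll-off values below are illustrative, not taken from the standard:

```python
def required_bandwidth_ghz(symbol_rate_gbd, rolloff):
    """Occupied bandwidth of a raised-cosine-shaped signal: (1 + alpha) * Rs / 2."""
    return (1 + rolloff) * symbol_rate_gbd / 2

for alpha in (0.0, 0.25, 0.5):
    print(f"alpha = {alpha}: {required_bandwidth_ghz(25, alpha):.2f} GHz")
# -> 12.50, 15.62, 18.75 GHz, consistent with the 12-19 GHz range above
```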

Is the 200GBASE-CR4 standard, i.e. communication at 200 GBit/s through twin-axial copper cable, achievable?

I don't know what's been commercialized, but you can be sure that no physical medium gets accepted into 802.3 until at least 3 or 4 companies (including both implementers and potential customers) have convinced themselves that the technology is not only achievable, but also will reduce costs relative to previously defined media.

On the other hand, they have been wrong in the past (or at least, they've defined standards that were superseded by even newer technologies before they reached a wide market).

What effects make it difficult to transmit such high-frequency signals in practice? Are there any books which cover this particular topic?

Notice that this medium is limited to 3 m link lengths, whereas 40GBASE-T was defined for up to 30 m, and media intended for actual LAN applications are generally defined for 100 m or more.

Most degradations in transmission lines scale with link length, so reducing the length allows us to achieve a higher bandwidth over a given cable geometry.
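As a concrete illustration, insertion loss in dB grows roughly linearly with length, so a link budget that closes at 3 m fails badly at LAN distances. The per-metre loss figure below is an assumed, order-of-magnitude number, not from any datasheet:

```python
def insertion_loss_db(loss_db_per_m, length_m):
    """Total insertion loss, assuming loss scales linearly with length."""
    return loss_db_per_m * length_m

LOSS_DB_PER_M = 3.0  # assumed dB/m near the Nyquist frequency of a 25 GBd lane
for length_m in (3, 30, 100):
    print(f"{length_m:3d} m: {insertion_loss_db(LOSS_DB_PER_M, length_m):5.1f} dB")
# -> 9 dB at 3 m is recoverable with equalization; 90 dB or 300 dB is not
```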

As mentioned in comments, the Ethernet standards for 100 GBit/s and up also generally require substantial equalization at both the transmitter and the receiver. This is even more true of the short-distance copper media, intended for links within servers or across backplanes, than of the longer-distance fiber media intended for links between servers or even across campuses or cities.
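To give a flavour of what that equalization does: a feed-forward equalizer (FFE) is essentially an FIR filter whose taps are chosen to cancel inter-symbol interference. The toy channel and tap values below are made up for illustration; real transceivers adapt the taps continuously and typically combine an FFE with a decision-feedback equalizer:

```python
import numpy as np

# Toy channel: each transmitted symbol smears 45% of its amplitude into the
# following symbol interval (inter-symbol interference).
channel = np.array([1.0, 0.45])

# 3-tap FFE approximating the inverse of the toy channel, 1 / (1 + 0.45 z^-1).
ffe_taps = np.array([1.0, -0.45, 0.2])

symbols = np.array([1, -1, -1, 1, 1, -1, 1, -1], dtype=float)  # 2-level for clarity

received = np.convolve(symbols, channel)[: len(symbols)]
equalized = np.convolve(received, ffe_taps)[: len(symbols)]

print("received :", np.round(received, 2))   # samples spread between +/-0.55 and +/-1.45
print("equalized:", np.round(equalized, 2))  # samples cluster back near +/-1
```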