How does “bidirectional” transmission on gigabit ethernet work?

cables, ethernet

I was reading about the various twisted-pair protocols, having been distracted by the marvels of Wikipedia when I went to look up how to wire a connector.

And I'm wondering: how can it transmit in both directions at the same time over the same pair of conductors? I assume I read that correctly, since if the two ends took turns it would not be called full duplex.

And why is that better than using separate pairs, one for each direction?

Best Answer

The method is called echo cancellation, and it requires a bit of signal processing. Basically, the idea is that since you know what you're sending out, you can separate the signal you just sent from what is coming in from the far end of the link. The way the circuitry is set up, the transmit and receive signals are superimposed on top of each other, more or less adding together.

Simple example to give you an idea of how this works: if the transmitter sends

+1, +1, -1, +1

and the local receiver gets

+2, 0, -2, +2

then you can work out that the signal from the other end must have been

+1, -1, -1, +1
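
To make that arithmetic concrete, here is a minimal Python sketch of the idealized case (no delay, no attenuation), using the example numbers above; the variable names are purely for illustration:

    # The pair carries the sum of both transmitters, so each end subtracts
    # its own known symbols from what it measures on the wire.
    sent_locally = [+1, +1, -1, +1]   # what our transmitter drove onto the pair
    seen_on_wire = [+2, 0, -2, +2]    # what our receiver actually measured

    far_end = [wire - local for wire, local in zip(seen_on_wire, sent_locally)]
    print(far_end)                    # -> [1, -1, -1, 1]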

That's more or less the gist of how it works, but in practice it's significantly more complicated due to delays and reflections. The technique is called 'echo cancellation' because sending a lone +1 down the line will not result in receiving a lone +1; rather, you will get back several delayed copies at various amplitudes. For example, if you send

+1, 0, 0, 0, 0, 0

you might get back

0, +0.8, 0, +0.2, -0.1, +0.1

due to discontinuities along the line. The received signal then becomes the 'convolution' of the transmitted signal with this pattern. For example, if you send

+1, +1, -1, +1, 0, 0, 0, 0

then you will get something like

0, +0.8, +0.8, -0.6, +0.9, -0.2, +0.4, -0.2, +0.1
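
If you want to check those numbers, the echo really is just a discrete convolution. A quick sketch in Python with NumPy (purely illustrative; the arrays are the example values from above):

    import numpy as np

    echo_response = [0, 0.8, 0, 0.2, -0.1, 0.1]   # measured reply to a lone +1
    transmitted   = [+1, +1, -1, +1]

    # The echo seen at the local receiver is the transmitted symbols
    # convolved with the measured echo response.
    echo = np.convolve(transmitted, echo_response)
    print(np.round(echo, 1))
    # -> [ 0.   0.8  0.8 -0.6  0.9 -0.2  0.4 -0.2  0.1]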

The transceivers send training sequences to figure out what the echo looks like (e.g. send a lone +1 while the other end is sending 0 and measure what you get at the receiver). This information is used to reconstruct what the receiver would expect to see from the transmitted data echoing back. This reconstruction is subtracted from the received data, leaving behind the signal from the other end of the link.
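
Putting the pieces together, here is a hedged end-to-end sketch of the cancellation step (Python/NumPy again, with the same made-up numbers; a real transceiver does this adaptively in hardware, and the far-end signal would itself be distorted by the cable, which is ignored here):

    import numpy as np

    echo_response = np.array([0, 0.8, 0, 0.2, -0.1, 0.1])  # learned from training
    transmitted   = np.array([+1, +1, -1, +1])              # our own outgoing symbols
    far_end_sig   = np.array([+1, -1, -1, +1])              # unknown to us in advance

    # What our receiver actually sees: our own echo plus the far-end signal.
    received = np.convolve(transmitted, echo_response)
    received[:len(far_end_sig)] += far_end_sig

    # Echo cancellation: rebuild the expected echo from our known data and subtract it.
    expected_echo = np.convolve(transmitted, echo_response)
    recovered = received - expected_echo
    print(np.round(recovered[:len(far_end_sig)], 1))        # -> [ 1. -1. -1.  1.]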

This method cannot tolerate as much loss or noise as using a separate signalling pair for each direction; however, it means that you can re-use the old 100 Mbit cabling that you already have routed to every room in your building.

Incidentally, 10 Mbit and 100 Mbit signalling is horribly inefficient: both use a single receive pair and a single transmit pair, even though the cable has four pairs.

When gigabit ethernet was developed, the designers wanted to keep compatibility with 10 and 100 Mbit ethernet as much as possible. Since there was no way they were going to get 10x the bandwidth out of one single pair, the solution was to improve the single-pair bandwidth by 2.5x and then use all four pairs.

They now have 10G ethernet over a slightly improved version of the same cabling (mainly it requires a lot of shielding), but it is currently very uncommon (most 10G ethernet uses completely different cabling that has one pair in each direction running at 10G). I seriously doubt we will see anything faster than 10G ethernet over RJ-45 cabling.
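
As a quick sanity check on that gigabit arithmetic (assuming the nominal 1000BASE-T per-pair rate and ignoring coding overhead; numbers are illustrative):

    # 100BASE-TX uses one pair per direction at 100 Mbit/s.
    fast_ethernet_per_pair = 100                      # Mbit/s

    # 1000BASE-T improves the per-pair rate by 2.5x and runs every pair
    # in both directions at once.
    gigabit_per_pair = 2.5 * fast_ethernet_per_pair   # 250 Mbit/s
    pairs = 4
    print(pairs * gigabit_per_pair)                   # -> 1000.0 Mbit/s each way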