Encoding delay of Ethernet and the relation to cable frequency rating


I asked a question here called "Speed of electricity (signal propagation?) through copper for communications delay". I wanted to know how long it takes a signal to travel down a length of Cat5e cable (my background for this question is telecoms and networking).

The opening paragraph of this Wikipedia page states the following about Cat5 cable:

"The cable standard provides performance of up to 100 MHz"

In my previously referenced question I was pointed to a general rule of thumb for propagation delay of between 4.9 ns/m and 5.3 ns/m. 100 Mbps data transfer over Cat5 cable means 1 bit of data is encoded onto the wire every 1 second / 100,000,000 bits = 0.00000001 s (that's 1 bit every 10 ns).
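To make that arithmetic explicit, here is a quick back-of-the-envelope check in Python, just restating the numbers above and assuming a 100 m run purely for illustration:

    # Quick check of the figures quoted above.
    bit_rate = 100e6              # 100 Mbps
    bit_time_s = 1 / bit_rate     # time each bit occupies on the wire
    print(f"bit time: {bit_time_s * 1e9:.0f} ns")          # -> 10 ns

    # Rule-of-thumb propagation delay per metre from my earlier question,
    # over an assumed 100 m run.
    for delay_ns_per_m in (4.9, 5.3):
        print(f"{delay_ns_per_m} ns/m x 100 m = {delay_ns_per_m * 100:.0f} ns")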

From this I assume that the receiving device will expect to receive and decode bits at a rate of 1 every 10 nanoseconds. If the delay down the copper wire (between 4.9 and 5.3 ns/m) is less than the encoding delay though, surely bits will arrive at the recipient end too quickly, faster than they can be decoded into a digital stream which could be buffered?

Also, to tie this all together, I have assumed that Cat5 is rated for 100 MHz because that means one bit of data is encoded onto the wire in each cycle. Or does this 100 MHz represent something else? Cat6 is used for gigabit transfer rates (as is Cat5e) and has a frequency rating of 250 MHz. Presumably this is just because fancier encoding methods are used to encode more bits into a single symbol on the wire. So, is the 100 MHz figure quoted above from the wiki article the reason there is a one-to-one ratio when encoding data onto the wire, giving the 10 ns encoding duration per bit? Is that correct as well?

Best Answer

If the delay down the copper wire is less than the encoding delay though, surely bits will arrive at the recipient end too quickly, faster than they can be decoded into a digital stream which could be buffered?

I think the key point you're missing is that it's entirely possible for more than one bit to be "in flight" on the wire at any given time.

For example, if the wire is 100 m long, the velocity is 192 × 10⁶ m/s, and the bit rate is 100 Mb/s, then 52 bits of data will actually be "on the wire" at any given time. The receiver, however, will only be aware of the one bit that is actually arriving at the receiver at that instant.
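Here is that calculation spelled out, a quick sketch in Python using the same example figures:

    # Bits "in flight" on the cable, using the example figures above.
    cable_length_m = 100.0
    velocity_m_per_s = 192e6       # 192 x 10^6 m/s, roughly 0.64 c
    bit_rate = 100e6               # 100 Mb/s

    propagation_delay_s = cable_length_m / velocity_m_per_s   # end-to-end delay
    bit_time_s = 1 / bit_rate                                  # 10 ns per bit

    bits_in_flight = propagation_delay_s / bit_time_s
    print(f"propagation delay: {propagation_delay_s * 1e9:.0f} ns")   # ~521 ns
    print(f"bits in flight:    {bits_in_flight:.0f}")                 # ~52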

If the transmitter is sending bits at 100 Mb/s, then the receiver must receive and decode these bits at 100 Mb/s. The length of the wire changes the latency between these two events, but it has nothing to do with the rate at which the receiver must deal with the incoming data.

Usually the receiver doesn't deal with the incoming bits one at a time, doing calculations at 100,000,000 operations per second. Instead, it simply queues the bits up in something like a shift register and then operates on them at a much lower rate, perhaps 12.5 million operations per second, but handling a full byte with each operation (or at an even slower rate, operating on larger data words).
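As a rough illustration of the shift-register idea (a minimal sketch in Python, not how any real PHY or MAC is implemented), the per-bit work is trivial and the per-byte work happens at one eighth of the bit rate:

    # Minimal sketch: bits arrive one at a time, but downstream logic only
    # acts once a whole byte has been shifted in. Purely illustrative.
    def deserialize(bits):
        """Shift incoming bits in MSB-first and yield complete bytes."""
        shift_reg = 0
        count = 0
        for bit in bits:                 # one step per received bit
            shift_reg = (shift_reg << 1) | bit
            count += 1
            if count == 8:               # a full byte has accumulated
                yield shift_reg          # byte-level processing runs 8x slower
                shift_reg = 0
                count = 0

    incoming = [0, 1, 0, 0, 0, 0, 0, 1,   # 0x41 ('A')
                0, 1, 0, 0, 0, 0, 1, 0]   # 0x42 ('B')
    print([hex(b) for b in deserialize(incoming)])   # ['0x41', '0x42']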