Limitation of baud due to propagation of voltage over distance

clock-speed, voltage, wire

I need to make sure I have my facts straight here; the math works but I may be missing something:

In a 100BaseTX network over Cat-5 copper, each party has a circuit (one twisted pair) on which they send and one on which they receive. Let's consider one circuit, and thus the parties can be labelled sender and receiver.

To send data, the sender first turns each 4-bit nibble into a 5-bit word, which ensures that five straight zeroes is never valid and indicates signal loss. That bitstream is then encoded into voltages using a combination of NRZI and MLT-3 encoding schemes, the end result of which is that a "1" is represented by a transition of voltage between three states (call them -1, 0, and 1) in a cyclical fashion; a stream of all 1s would be represented by 0,1,0,-1,0,1,0,-1 etc. This reduces the maximum speed at which the sender must cycle voltages in order to maintain the maximum bitrate in the worst case; the required "fundamental frequency" for the desired 125Mbaud symbol rate is 31.25MHz.
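Here's a little Python sketch (mine, just to illustrate what I mean by the MLT-3 cycling and where the 31.25MHz figure comes from; the only numbers in it are the ones stated above):

    # A quick sketch of the MLT-3 rule described above: a "1" advances the line
    # to the next state in the cycle 0, +1, 0, -1, ...; a "0" leaves it alone.
    def mlt3_encode(bits):
        cycle = [0, 1, 0, -1]             # repeating voltage pattern
        idx = 0
        out = []
        for b in bits:
            if b == 1:
                idx = (idx + 1) % 4       # a one steps to the next state
            out.append(cycle[idx])        # a zero holds the current state
        return out

    print(mlt3_encode([1] * 8))           # [1, 0, -1, 0, 1, 0, -1, 0]

    # Worst case (all ones) takes 4 symbols per full voltage cycle, so the
    # fundamental frequency is a quarter of the symbol rate:
    symbol_rate = 125e6                   # 125Mbaud on the wire
    print(symbol_rate / 4 / 1e6, "MHz")   # 31.25 MHz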

Now, a change in voltage must propagate through the wire; first the recipient will see it, and then the sender themselves will see it on the "undriven" side of the circuit. The sender must see this feedback in order to ensure continuity (doesn't it?). So, the limit to the total circuit length, assuming the ideal that voltage propagates at c, is how far light can travel in one period of that 31.25MHz fundamental, i.e. 32ns. That distance, given a simplistic c = 3×10^8 m/s, is 9.6m ~= 31.5 ft. Since that's total circuit length from sender to receiver and back, the actual total cable span is half that, or 4.8m ~= 15.75ft. Beyond this length of Cat5, it is simply impossible for the sender to toggle the voltage fast enough to maintain the fundamental frequency, so the two parties negotiate a lower frequency, resulting in a lower maximum bitrate over the longer cable.
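For reference, here is that arithmetic as a Python sketch; it just restates the numbers above under my assumption that the feedback has to make the full round trip within one period of the fundamental:

    # Restating the arithmetic above, under my assumption that the feedback
    # must make the full round trip within one period of the fundamental.
    c = 3e8                          # m/s, the simplistic value used above
    f_fundamental = 31.25e6          # Hz
    period = 1 / f_fundamental       # 32 ns
    round_trip = c * period          # 9.6 m of total circuit length
    span = round_trip / 2            # 4.8 m of actual cable, out and back
    print(period * 1e9, "ns", round_trip, "m round trip", span, "m span")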

By the time we get out to 182m, the Cat-5 specification's maximum cable length at which simple resistance of the spec'ed cable will have reduced signal voltage below the threshold of the receiver's distinction between the three states, I calculate that this speed-of-light limitation will also have reduced the maximum sustainable fundamental frequency to approximately 1.65MHz, for a symbol rate of 6.6Mbaud and a true data rate of only 5.28Mb/s.

Compounding this is the fact that propagation of voltage over distance is not at the speed of light; depending on the cable, voltage typically propagates along wire no faster than .9c and as slow as .4c, so we could see data rates as slow as 2.1Mb/s at this extreme (though I'm betting that cable this poor wouldn't meet Cat-5 spec in other significant ways).
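Again as a rough Python sketch of what I mean (the velocity factors are illustrative values I picked, not datasheet numbers, and the "model" data rate is just the linear scaling my own reasoning above implies):

    # Velocity factors here are illustrative values, not from any datasheet.
    # The "model" data rate is just my own linear scaling of the 5.28Mb/s
    # figure above, which assumed propagation at c.
    c = 3e8
    for nvp in (0.9, 0.64, 0.4):
        delay_ns = 100.0 / (nvp * c) * 1e9          # one-way delay over 100m
        model_rate = 5.28 * nvp                     # Mb/s under my model
        print(f"NVP {nvp}: {delay_ns:.0f} ns over 100m, "
              f"model says {model_rate:.2f} Mb/s")
    # NVP 0.4 gives the ~2.1 Mb/s figure mentioned above.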

Is this line of thinking on track? Is there anything I'm missing that changes any of this significantly? What I'm not sure of is whether the sender actually does need to register the voltage differential on the other side of the circuit (the circuit may be differentially balanced, with both sides being "driven" in opposite directions). If they do, the above holds. If not, all my results get doubled. If I have any unk-unks (unknown unknowns) in this, it could be completely off.

Best Answer

To send data, the sender first turns each 4-bit nibble into a 5-bit word, which ensures that five straight zeroes is never valid and indicates signal loss

Not exactly. This encoding does much more than just detect signal loss. It guarantees that the stream always contains plenty of ones (so the receiver can recover its clock and the MLT-3 line signal stays roughly DC-balanced), provides some error detection because anything outside the valid code groups can be rejected, and has other properties that are useful for this type of work.
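For instance, here is a quick Python sketch using the commonly published 4B5B data code groups (check them against the standard before relying on them); it shows two of those properties: every valid group has plenty of ones for clock recovery, and no valid traffic can ever put five zeros in a row on the line:

    # The commonly published 4B5B data code groups (verify against the
    # standard before relying on this table; control symbols are omitted).
    FOUR_B_FIVE_B = [
        "11110", "01001", "10100", "10101",   # data 0-3
        "01010", "01011", "01110", "01111",   # data 4-7
        "10010", "10011", "10110", "10111",   # data 8-11
        "11010", "11011", "11100", "11101",   # data 12-15
    ]

    # Every valid code group has at least two ones, which guarantees
    # transitions for clock recovery.
    assert all(code.count("1") >= 2 for code in FOUR_B_FIVE_B)

    # No pair of valid code groups ever produces more than three zeros in a
    # row, so "00000" on the wire really does indicate a problem.
    longest_zero_run = max(
        len(run)
        for a in FOUR_B_FIVE_B
        for b in FOUR_B_FIVE_B
        for run in (a + b).split("1")
    )
    print("longest zero run across any boundary:", longest_zero_run)   # 3

    # Anything outside the valid code groups can be flagged as an error.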

Now, a change in voltage must propagate through the wire; first the recipient will see it, and then the sender themselves will see it on the "undriven" side of the circuit. The sender must see this feedback in order to ensure continuity (doesn't it?).

No. Ethernet has properly terminated signals (the termination is on the other side of the isolation transformers), and so the signal does not reflect back to the transmitter. In Ethernet there is no concept of continuity, only link. Link is established by a handshake-type protocol between the two ends of the cable. If device A can send data to B, and B can send data to A, then there is a good link between the two devices.

So, the limit to the total circuit length, assuming the ideal that voltage propagates at c, is how far light can travel in one period of that 31.25MHz fundamental, i.e. 32ns. That distance, given a simplistic c = 3×10^8 m/s, is 9.6m ~= 31.5 ft. Since that's total circuit length from sender to receiver and back, the actual total cable span is half that, or 4.8m ~= 15.75ft. Beyond this length of Cat5, it is simply impossible for the sender to toggle the voltage fast enough to maintain the fundamental frequency, so the two parties negotiate a lower frequency, resulting in a lower maximum bitrate over the longer cable.

No. Since there are no reflections, there is no such relationship between bitrate and cable length. To put it differently, a Gigabit Ethernet cable that is 100 meters long can have up to (approximately) 600 bits' worth of data "stored" in the cable.
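To put rough numbers on that (the velocity factor here is an assumed, typical-looking value, not a measurement), the one-way delay of 100 meters of cable times the bit rate lands in the same ballpark as that ~600 figure:

    # Rough bits-in-flight estimate; the velocity factor is an assumed value.
    c = 3e8                             # m/s
    length_m = 100.0
    nvp = 0.64                          # assumed nominal velocity of propagation
    bit_rate = 1e9                      # Gigabit Ethernet, aggregate
    one_way_delay = length_m / (nvp * c)             # ~520 ns
    print(f"{one_way_delay * 1e9:.0f} ns of delay -> "
          f"{bit_rate * one_way_delay:.0f} bits in the cable")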

By the time we get out to 182m, the Cat-5 specification's maximum cable length at which simple resistance of the spec'ed cable will have reduced signal voltage below the threshold of the receiver's distinction between the three states, I calculate that this speed-of-light limitation will also have reduced the maximum sustainable fundamental frequency to approximately 1.65MHz, for a symbol rate of 6.6Mbaud and a true data rate of only 5.28Mb/s.

Ethernet spec allows for a maximum cable length of 100 meters, not 182 meters. And this has nothing to do with the bitrate or voltage thresholds. It has everything to do with collision detection and minimum packet size.
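As a back-of-envelope sketch of that constraint for half-duplex operation (velocity factor assumed, and repeater/PHY delays ignored even though they consume most of the real budget, which is why practical collision domains are much smaller than this raw bound):

    # Cable-only bound from the half-duplex collision-detection rule: the
    # sender must still be transmitting a minimum-size frame when a collision
    # from the far end gets back. Repeater and PHY delays are ignored here,
    # and they consume most of the real budget.
    bit_rate = 100e6                    # 100BaseTX
    min_frame_bits = 64 * 8             # 64-byte minimum frame = 512 bit times
    slot_time = min_frame_bits / bit_rate            # 5.12 us
    c = 3e8
    nvp = 0.64                          # assumed velocity factor
    max_round_trip_m = slot_time * nvp * c           # ~983 m
    print(f"{slot_time * 1e6} us slot time, "
          f"{max_round_trip_m / 2:.0f} m cable-only upper bound")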

I do Ethernet all day long, and we are able to transmit 900 Mbps of real data over a 100-meter-long cable with absolutely no reduction in throughput.

If I have any unk-unks in this, it could be completely off.

Yeah, completely off. Sorry.
