To send data, the sender first turns each 4-bit nibble into a 5-bit word, which ensures that a run of five straight zeroes is never valid and instead indicates signal loss.
Not exactly. This encoding does much more than just detect signal loss. It ensures that roughly the same number of zeros and ones are sent (i.e., the line is DC balanced), provides some error detection, and has other properties that are useful for this kind of signaling.
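To make that concrete, here is a minimal sketch of the 4B/5B mapping (the table is the standard data-symbol table used by 100BASE-TX and FDDI; the function name and structure are just for illustration). Every 5-bit code contains at least two ones, so the wire never goes quiet while data is flowing:

```python
# Sketch of 4B/5B encoding as used by 100BASE-TX / FDDI.
# Every 5-bit code below has at least two 1s, so a long run of
# zeros on the wire can only mean the signal has been lost.

FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data: bytes) -> str:
    """Encode each byte as two 5-bit symbols (high nibble first)."""
    out = []
    for byte in data:
        out.append(FOUR_B_FIVE_B[byte >> 4])
        out.append(FOUR_B_FIVE_B[byte & 0xF])
    return "".join(out)

print(encode_4b5b(b"\x00"))  # -> 1111011110: even all-zero data stays "busy"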
Now, a change in voltage must propagate through the wire; first the recipient will see it, and then the sender itself will see it on the "undriven" side of the circuit. The sender must see this feedback in order to ensure continuity (mustn't it?).
No. Ethernet signals are properly terminated (the termination is on the far side of the isolation transformers), so the signal does not reflect back to the transmitter. In Ethernet there is no concept of continuity, only link. Link is established by a handshake-type protocol between the two ends of the cable: if device A can send data to B, and B can send data to A, then there is a good link between the two devices.
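As a toy illustration of that handshake idea (this is my own conceptual sketch, not the real 10BASE-T/autonegotiation state machine; the class and method names are invented), each end emits link pulses and declares link-up only once it hears the other side:

```python
# Toy model of "link" versus "continuity": each port periodically
# sends link pulses (10BASE-T does this roughly every 16 ms) and
# considers the link up once it has heard pulses from its peer.

class Port:
    def __init__(self, name: str):
        self.name = name
        self.peer = None
        self.heard_pulse = False

    def connect(self, other: "Port"):
        self.peer, other.peer = other, self

    def send_link_pulse(self):
        if self.peer is not None:        # pulse arrives iff the wire works
            self.peer.heard_pulse = True

    @property
    def link_up(self) -> bool:
        return self.heard_pulse

a, b = Port("A"), Port("B")
a.connect(b)
a.send_link_pulse()
b.send_link_pulse()
print(a.link_up and b.link_up)  # True: A hears B and B hears A -> good link
```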
So, the limit to the total circuit length, assuming the ideal that voltage propagates at c, is how far light can travel in one cycle of the 31.25 MHz fundamental frequency, i.e. in 32 ns. That distance, given a simplistic c = 3×10⁸ m/s, is 9.6 m ≈ 31.5 ft. Since that's the total circuit length from sender to receiver and back, the actual cable span is half that, or 4.8 m ≈ 15.75 ft. Beyond this length of Cat5, it is simply impossible for the sender to toggle the voltage fast enough to maintain the fundamental frequency, so the two parties negotiate a lower frequency, resulting in a lower maximum bitrate over the longer cable.
No. Since there are no reflections, there is no relationship between bitrate and cable length. To put it differently, a Gigabit Ethernet cable that is 100 meters long can have up to (approximately) 600 bits worth of data "stored" in the cable at any instant.
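A quick back-of-the-envelope check of that figure (my assumptions: a velocity factor of roughly 0.6 for twisted pair, and treating gigabit as a single 10⁹ b/s stream rather than four 250 Mb/s pairs):

```python
# How many bits are "in flight" on a 100 m gigabit link at once?

C = 3e8                 # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.6   # typical for twisted-pair copper
CABLE_LENGTH = 100.0    # meters
BIT_RATE = 1e9          # Gigabit Ethernet, bits/s

propagation_delay = CABLE_LENGTH / (C * VELOCITY_FACTOR)  # ~556 ns one way
bits_in_flight = propagation_delay * BIT_RATE             # ~556 bits

print(f"one-way delay: {propagation_delay * 1e9:.0f} ns")
print(f"bits in flight: {bits_in_flight:.0f}")
```

That comes out around 550-600 bits, consistent with the claim above.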
By the time we get out to 182 m (which I take to be the Cat-5 specification's maximum cable length, the point at which the simple resistance of the spec'ed cable will have reduced the signal voltage below the receiver's threshold for distinguishing the three states), I calculate that this speed-of-light limitation will also have reduced the maximum sustainable fundamental frequency to approximately 1.65 MHz, for a baud rate of 6.6 Mb/s and a true data rate of only 5.28 Mb/s.
The Ethernet spec allows a maximum cable length of 100 meters, not 182 meters, and this has nothing to do with the bitrate or voltage thresholds. It has everything to do with collision detection and the minimum packet size.
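To see the collision-detection reasoning, here is a rough sketch using half-duplex Fast Ethernet numbers, where the relationship between minimum frame size and network diameter is easiest to see (the 0.6c velocity factor is my assumption, and real budgets must also absorb PHY and repeater delays):

```python
# Rough sketch of the CSMA/CD timing argument for half-duplex
# 100 Mb/s Ethernet. A sender must still be transmitting when a
# collision from the far end of the network gets back to it, so
# the round trip across the network must fit within one minimum
# frame ("slot time").

BIT_RATE = 100e6           # Fast Ethernet, bits/s
MIN_FRAME_BITS = 64 * 8    # 64-byte minimum frame = 512 bits

slot_time = MIN_FRAME_BITS / BIT_RATE        # 5.12 microseconds

C = 3e8
raw_one_way_budget = (slot_time / 2) * (C * 0.6)   # ~460 m of cable

print(f"slot time: {slot_time * 1e6:.2f} us")
print(f"raw one-way distance budget: {raw_one_way_budget:.0f} m")
# Electronics and repeater delays consume much of that budget in
# practice, which is how you end up near the familiar 100 m segments.
```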
I do Ethernet all day long, and we are able to transmit 900 Mbps of real data over a 100-meter cable with absolutely no loss of throughput.
If I have any unk-unks in this, it could be completely off.
Yeah, completely off. Sorry.
This is a half answer (I'm posting it as a community wiki so others can improve it).
Regarding the transmission itself, @Kuba Ober pointed out a good alternative: using light instead of electrical signaling, mostly because of all the interference present in such a system.
If electrical transmission is required, you could use error-correcting algorithms like Hamming or Reed-Solomon.
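As a concrete illustration, here is a minimal Hamming(7,4) sketch: 4 data bits plus 3 parity bits, enough for the receiver to locate and correct any single flipped bit (the function names are mine, just for illustration):

```python
# Hamming(7,4): codeword layout [p1, p2, d1, p3, d2, d3, d4],
# with parity bits at the power-of-two positions 1, 2, and 4.

def hamming74_encode(d: list[int]) -> list[int]:
    """[d1, d2, d3, d4] -> 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_syndrome(c: list[int]) -> int:
    """Return 0 if the codeword is clean, else the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    return s1 * 1 + s2 * 2 + s3 * 4

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                       # flip one bit "in transit"
pos = hamming74_syndrome(word)     # syndrome points at position 5
word[pos - 1] ^= 1                 # flip it back: error corrected
```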
Best Answer
Actually, the answer to your seemingly simple question is more complex than you'd readily believe!
The short answer is that one symbol at a time can be passed through a single signal wire in one cycle. How much data that symbol represents depends on the protocol used.
The long answer is that: