I put 12V DC through one twisted pair of a 50-meter (~160 feet) UTP Cat 5 cable and noticed a voltage drop at the other end of the cable: it only had 5V. If I feed the same 12V through two wires that aren't twisted into a pair, I don't get that voltage drop. What's the phenomenon that causes the voltage drop?
I'm going to assume that you're using a relay similar to this one I found on Digikey.
It lists the control current at 5V as 3mA, which I'll use as a good ballpark number. If we want to know if the relay will turn on, then we want to be sure that the control voltage at the relay will be greater than the minimum control voltage of 3V.
The first check is whether or not we can source the current. In this case, 3mA is well below the typical maximum current source/sink of a microcontroller (20mA), so we're fine. If your relay uses more current than that, you may need to look at using a transistor to drive the relay.
The second check is to see what the voltage at the relay will be after the resistive losses of the wire. If we assume the worst-case resistance of the wire, then that is 0.188 Ω/m, like Ildefonso stated. Note that this is the loop resistance, not the resistance for a single wire.
At 11.5 meters length, you will have a total wire resistance of 2.2Ω (11.5m * 0.188Ω/m). At 3mA current, this creates a voltage drop of 6.5mV (2.2Ω * 3mA). If you drive the relay from a 5V source, the voltage the relay sees is 4.993V (5V - 6.5mV), which shows that at DC these wire lengths are negligible.
You would need to be drawing around 900mA, or using a cable roughly 3.5km long, before the voltage at the relay dropped to its minimum threshold (3V).
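If you want to rerun these numbers for your own cable length and load, the arithmetic is easy to script. Here's a minimal Python sketch using the worst-case figures assumed above (0.188Ω/m loop resistance, 3mA coil current, 5V drive):

```python
# Voltage-drop sanity check for driving a low-current load over CAT5.
LOOP_RESISTANCE_PER_M = 0.188  # ohm/m, worst-case loop (out and back)
length_m = 11.5                # cable length
current_a = 0.003              # relay coil current (3mA)
source_v = 5.0                 # drive voltage
min_v = 3.0                    # minimum control voltage of the relay

r_wire = LOOP_RESISTANCE_PER_M * length_m  # ~2.2 ohm
v_drop = r_wire * current_a                # ~6.5 mV
v_load = source_v - v_drop                 # ~4.993 V

print(f"wire resistance: {r_wire:.2f} ohm")
print(f"voltage drop:    {v_drop * 1000:.1f} mV")
print(f"load sees:       {v_load:.3f} V")

# Cable length at which the load voltage would fall to the 3V minimum:
max_length_m = (source_v - min_v) / (current_a * LOOP_RESISTANCE_PER_M)
print(f"length to hit {min_v}V at 3mA: {max_length_m / 1000:.1f} km")  # ~3.5 km
```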
For powering a sensor, the same calculations hold true. If you know the DC current your sensor requires, you will probably find that CAT5 powers it just fine. If the sensor has an analog output there shouldn't be a big issue; with a digital signal you will probably need to run it slower than if the sensor were on the Arduino board.
To answer your other sub-questions,
- You always lose power in wiring, but at low-current DC it usually isn't much. What you probably mean to ask is whether you will lose too much voltage.
- The plenum/shielding would not impact your ohmic/resistive losses.
- If the wire resistance were too high, you could parallel twice as many wires to cut the resistance in half. The wire in CAT5 is fairly high gauge (thin), so switching to a lower-gauge (thicker) wire would help in a similar way.
- If you couldn't afford to use more wire but wanted to send more power down the line, you could raise the voltage sent over the wire and use a DC-DC switching supply at the far end to step it back down. This is similar to what goes on in Power over Ethernet and in AC power-grid transmission.
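To see why raising the voltage helps, here's a quick sketch with hypothetical numbers (a 2W load fed through 10Ω of loop resistance). It ignores converter efficiency and the effect of the drop on the converter's input voltage, but the I²R scaling is the point:

```python
# For a fixed load power, I = P / V, and the wire loss is I^2 * R,
# so doubling the voltage cuts the wire loss by a factor of four.
P_LOAD = 2.0   # watts needed at the far end (hypothetical)
R_WIRE = 10.0  # ohm, total loop resistance (hypothetical)

for v in (5.0, 12.0, 48.0):  # 48V is the Power over Ethernet ballpark
    i = P_LOAD / v           # current drawn at transmission voltage v
    loss = i**2 * R_WIRE     # power burned in the wire
    print(f"{v:5.1f} V -> {i * 1000:6.1f} mA, wire loss {loss * 1000:8.1f} mW")
```

At 5V the wire burns almost as much power as the load receives; at 48V the loss drops to a few tens of milliwatts.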
> To send data, the sender first turns each 4-bit nibble into a 5-bit word, which ensures that five straight zeroes is never valid and indicates signal loss.
Not exactly. This encoding does much more than just detect signal loss. It keeps the numbers of zeros and ones sent roughly equal (a.k.a. DC balance), provides some error detection, and has other properties useful for this kind of work.
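For the curious, here's a small Python sketch with the standard 4B5B data-symbol table (the one FDDI and 100BASE-TX use). It checks the run-length property: no sequence of valid data symbols contains more than three consecutive zeros, which is exactly why a long run of zeros signals a dead line:

```python
# Standard 4B5B data-symbol table: each 4-bit nibble maps to a 5-bit
# code group chosen for transition density (no long runs of zeros).
CODE_4B5B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode(data: bytes) -> str:
    """Encode bytes as a 4B5B bitstring, high nibble first."""
    return "".join(
        CODE_4B5B[b >> 4] + CODE_4B5B[b & 0xF] for b in data
    )

print(encode(bytes([0x5D])))  # 0x5 -> 01011, 0xD -> 11011

# Each symbol has at most one leading zero and two trailing zeros, so
# check the worst zero run across every possible symbol boundary:
worst = max(
    max(len(run) for run in (a + b).split("1"))
    for a in CODE_4B5B.values()
    for b in CODE_4B5B.values()
)
print("longest zero run across any symbol boundary:", worst)  # -> 3
```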
> Now, a change in voltage must propagate through the wire; first the recipient will see it, and then the sender themselves will see it on the "undriven" side of the circuit. The sender must see this feedback in order to ensure continuity (doesn't it?).
No. Ethernet has properly terminated signals (the termination is on the other side of the isolation transformers), and so the signal does not reflect back to the transmitter. In Ethernet there is no concept of continuity, only link. Link is established by a handshake-type protocol between the two ends of the cable. If device A can send data to B, and B can send data to A, then there is a good link between the two devices.
> So, the limit to the total circuit length, assuming the ideal that voltage propagates at c, is how far light can travel in one period of the 31.25MHz fundamental, i.e. 32ns. That distance, given a simplistic c = 3×10⁸ m/s, is 9.6m ~= 31.5ft. Since that's total circuit length from sender to receiver and back, the actual total cable span is half that, or 4.8m ~= 15.75ft. Beyond this length of Cat5, it is simply impossible for the sender to toggle the voltage fast enough to maintain the fundamental frequency, so the two parties negotiate a lower frequency, resulting in a lower maximum bitrate over the longer cable.
No. Since there are no reflections, there is no relationship between bitrate and cable length. To put it differently, a 100-meter Gigabit Ethernet cable can have up to (approximately) 600 bits' worth of data "stored" in the cable at any instant.
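A rough back-of-the-envelope version of that claim, assuming a 0.6c velocity factor (the exact figure depends on the cable) and treating Gigabit Ethernet as a single 1Gb/s stream for simplicity:

```python
# Bits "in flight" on a long cable: propagation delay times bitrate.
C = 3.0e8              # m/s, speed of light in vacuum
VELOCITY_FACTOR = 0.6  # assumed; typical ballpark for twisted pair
BITRATE = 1.0e9        # b/s, Gigabit Ethernet as one aggregate stream
LENGTH = 100.0         # m, maximum spec'd run

delay_s = LENGTH / (VELOCITY_FACTOR * C)  # one-way propagation delay
bits_in_flight = delay_s * BITRATE
print(f"propagation delay: {delay_s * 1e9:.0f} ns")  # ~556 ns
print(f"bits in the cable: {bits_in_flight:.0f}")    # ~556
```

Gigabit Ethernet actually splits the data across four pairs at 250Mb/s each, but the aggregate in-flight count works out the same.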
> By the time we get out to 182m, the Cat-5 specification's maximum cable length at which simple resistance of the spec'ed cable will have reduced signal voltage below the threshold of the receiver's distinction between the three states, I calculate that this speed-of-light limitation will also have reduced the maximum sustainable fundamental frequency to approximately 1.65MHz, for a baud rate of 6.6Mb/s and a true data rate of only 5.28Mb/s.
The Ethernet spec allows a maximum cable length of 100 meters, not 182 meters. And this has nothing to do with bitrate or voltage thresholds; it has everything to do with collision detection and minimum packet size.
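To make the collision-detection point concrete, here's the classic slot-time calculation for half-duplex 10Mb/s Ethernet (a sketch: the 0.6c velocity factor is an assumption, and the real 802.3 timing budget also charges for repeater and transceiver delays, so the practical limits are much shorter):

```python
# CSMA/CD: a sender must still be transmitting when a collision from
# the far end propagates back, so the minimum frame bounds the
# allowable round-trip time.
BITRATE = 10e6        # b/s, classic 10Mb/s Ethernet
MIN_FRAME_BITS = 512  # 64-byte minimum frame

slot_time_s = MIN_FRAME_BITS / BITRATE
print(f"slot time: {slot_time_s * 1e6:.1f} us")  # 51.2 us

# Ideal upper bound on the round trip at an assumed 0.6c:
C, VELOCITY_FACTOR = 3.0e8, 0.6
max_round_trip_m = slot_time_s * VELOCITY_FACTOR * C
print(f"ideal max round trip: {max_round_trip_m / 1000:.1f} km")  # ~9.2 km
```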
I do Ethernet all day long and we are able to transmit 900 Mbps of real data over a 100 meter long cable with absolutely no issues with reduced throughput.
> If I have any unk-unks in this, it could be completely off.
Yeah, completely off. Sorry.