How do Ethernet cables / USB cables send a binary 0?


I was wondering how your computer sends a binary zero as an electrical signal. Is there a certain delay, or does it do something unique?

Best Answer

The most basic CMOS/TTL logic uses a voltage within a specified min/max range to represent either a "logic low" 0 or a "logic high" 1. This includes the discrete logic gates like the 7400 / 7402 / 7432 that were used in the 1970s (and still show up on solderless breadboards), as well as more modern, higher-integration chips. The exact voltage ranges are listed in the Electrical Characteristics table of the device's datasheet:

- VOH = voltage output high; specified as a minimum limit
- VOL = voltage output low; specified as a maximum limit

The device driving the output is guaranteed to drive a logic low 0 as some voltage between GND and VOLmax, and to drive a logic high 1 as some voltage between VOHmin and VCC (the power supply rail). The gap between VOLmax and VOHmin is a dead band where the output is undefined -- this margin is what gives digital signalling its noise immunity compared to analog signals.

- VIH = voltage input high; specified as a minimum limit
- VIL = voltage input low; specified as a maximum limit

The device receiving the input will interpret a voltage between GND and VILmax as a logic low 0, and a voltage between VIHmin and VCC as a logic high 1. Any input between VILmax and VIHmin is not valid, and any input below GND or above VCC may violate the Absolute Maximum Ratings (i.e. permanently damage or degrade the device).
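To make that concrete, here is a tiny Python sketch of the receiver's decision rule. The 0.8 V / 2.0 V defaults are the classic 5 V TTL input thresholds and are only for illustration; always take the real values from your device's datasheet.

```python
# Illustration only: classify a received voltage using assumed VILmax/VIHmin
# thresholds (the defaults below are the classic 5 V TTL input limits).

def interpret_input(voltage, vil_max=0.8, vih_min=2.0):
    """Return 0, 1, or None for a voltage seen at a logic input."""
    if voltage <= vil_max:
        return 0        # valid logic low
    if voltage >= vih_min:
        return 1        # valid logic high
    return None         # dead band: not a valid logic level

print(interpret_input(0.3))   # -> 0
print(interpret_input(3.1))   # -> 1
print(interpret_input(1.5))   # -> None (undefined region)
```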

For CMOS, the VOH/VOL and VIH/VIL thresholds are usually specified as a percentage of the power supply, such as 30% VCC / 70% VCC. For TTL, the thresholds are absolute voltages, with 2.4 V being the usual VOHmin.
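The practical payoff of the gap between the output limits and the input limits is the noise margin: how much noise a signal can pick up between driver and receiver and still be read correctly. Below is a small sketch; the TTL numbers are the classic 5 V family values, and the CMOS output levels (0.1 VCC / 0.9 VCC) are an assumption that varies by logic family.

```python
# Sketch of how the gap between output and input thresholds becomes noise margin.
# All threshold values here are assumptions for illustration -- check the datasheet.

def noise_margins(vol_max, voh_min, vil_max, vih_min):
    nm_low = vil_max - vol_max    # noise a valid low can tolerate
    nm_high = voh_min - vih_min   # noise a valid high can tolerate
    return nm_low, nm_high

# Classic 5 V TTL thresholds
print(noise_margins(vol_max=0.4, voh_min=2.4, vil_max=0.8, vih_min=2.0))
# -> (0.4, 0.4) volts

# CMOS at VCC = 3.3 V, thresholds as a fraction of the supply
vcc = 3.3
print(noise_margins(vol_max=0.1 * vcc, voh_min=0.9 * vcc,
                    vil_max=0.3 * vcc, vih_min=0.7 * vcc))
# -> (0.66, 0.66) volts
```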

Timing is a separate concern. If the logic just implements some Boolean equation ("glue logic"), the output simply follows the input after a specified propagation delay. If the logic implements a state machine or a CPU, there will be a clock signal that determines the system's timing.
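As a toy illustration of the glue-logic case, the sketch below shows a NAND output that only becomes valid one propagation delay after its inputs change; the 10 ns delay and the event times are made up.

```python
# Toy timing sketch: the output is a Boolean function of the inputs, but it is
# only valid t_pd after the inputs change. The 10 ns figure is an assumption.

T_PD = 10e-9  # assumed propagation delay, 10 ns

def nand(a, b):
    return 0 if (a and b) else 1

# (time, a, b) input events
input_events = [(0e-9, 0, 0), (50e-9, 1, 1), (120e-9, 1, 0)]

for t, a, b in input_events:
    print(f"inputs change at {t * 1e9:5.1f} ns -> "
          f"output = {nand(a, b)} valid at {(t + T_PD) * 1e9:5.1f} ns")
```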


You also mentioned the Ethernet and USB communications protocols; these are a lot more complicated. It's much harder to even frame the question in terms of sending a single binary bit, since a lot more information is required (such as the host IP address, frame number, etc.). These protocols build on the basic idea I described above, but add many more layers that are specific to each standard.
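For a feel of how much extra information travels with each bit, here is a rough sketch of an Ethernet II frame (preamble omitted and several details simplified); the MAC addresses and EtherType below are made up for the example.

```python
# Rough sketch (assumption: Ethernet II framing, simplified) of why a bit never
# travels alone: it is wrapped in addresses, a type field, padding and a CRC.
import struct
import zlib

def ethernet_ii_frame(dst_mac: bytes, src_mac: bytes, ethertype: int,
                      payload: bytes) -> bytes:
    # Pad the payload to the 46-byte minimum, then append the CRC-32 FCS
    # (transmitted least-significant byte first).
    payload = payload.ljust(46, b"\x00")
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = zlib.crc32(header + payload) & 0xFFFFFFFF
    return header + payload + struct.pack("<I", fcs)

frame = ethernet_ii_frame(bytes.fromhex("ffffffffffff"),   # broadcast destination
                          bytes.fromhex("020000000001"),   # made-up source MAC
                          0x0800,                          # IPv4 EtherType
                          b"\x01")                         # one byte of "data"
print(len(frame), "bytes on the wire for a 1-byte payload")  # -> 64 bytes
```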

Ethernet has several layers of communications protocols; the physical-layer signalling differs even between types of Ethernet -- 10 Mbit and 100 Mbit are not just different speeds but different signalling schemes. This is described in IEEE standard 802.3.
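As one concrete example of what "different signalling" means: 10 Mbit Ethernet uses Manchester encoding, where (in the IEEE 802.3 convention) a 0 is sent as a high-to-low transition in the middle of the bit period and a 1 as a low-to-high transition. A minimal sketch:

```python
# Minimal Manchester encoding sketch (IEEE 802.3 convention):
# each bit becomes two half-bit levels with a mid-bit transition,
# 0 = high-to-low, 1 = low-to-high.

def manchester_encode(bits):
    """Return one level per half bit period (1 = high, 0 = low)."""
    halves = []
    for b in bits:
        halves += [1, 0] if b == 0 else [0, 1]
    return halves

print(manchester_encode([0, 1, 1, 0]))
# -> [1, 0, 0, 1, 0, 1, 1, 0]
```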

The USB protocols are described in the USB Standard, as well as on Jan Axelson's USB Complete website.
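Closer to your original question: low- and full-speed USB use NRZI line coding, where a 0 is sent by changing the line state and a 1 by leaving it unchanged, with a 0 "stuffed" after six consecutive 1s so the receiver keeps seeing transitions. A minimal sketch (the starting level is chosen arbitrarily):

```python
# Minimal NRZI-with-bit-stuffing sketch, as used by low/full-speed USB:
# a 0 toggles the line, a 1 leaves it alone, and a stuffed 0 is inserted
# after six consecutive 1s to guarantee a transition.

def usb_nrzi_encode(bits, start_level=1):
    level = start_level
    ones_run = 0
    out = []
    for b in bits:
        if b == 0:
            level ^= 1          # a 0 toggles the line
            ones_run = 0
        else:
            ones_run += 1       # a 1 leaves the line alone
        out.append(level)
        if ones_run == 6:       # bit stuffing after six 1s
            level ^= 1
            out.append(level)
            ones_run = 0
    return out

print(usb_nrzi_encode([0, 1, 1, 1, 1, 1, 1, 1]))
# -> 9 line states for 8 data bits (one stuffed bit inserted)
```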

If you've never read a standards specification before, I'd recommend starting with USB -- its spec is a bit simpler to follow than Ethernet's.