I was wondering how your computer sends a binary zero as an electrical signal. Is there a certain delay, or does it do something unique?
How do Ethernet cables/USB cables send binary 0
Related Solutions
Binary subtraction is usually implemented using binary addition and negation.
It is unusual to do three or more operations simultaneously (other than multiply-add, a.k.a. multiply-accumulate, or MAC).
So \$A-B-C\$ is usually implemented as \$A+(-B)+(-C)\$, where the \$-\$ is unary minus, or negation.
This scales to many operands and many subtrahends.
\$A-B-C-D+E-F-G\$ becomes \$A+(-B)+(-C)+(-D)+E+(-F)+(-G)\$. The addition ('+') operation is commutative, so this can be evaluated in any order (ignoring overflow), and with arbitrarily many operands.
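As a minimal C sketch (the sample values are illustrative), the direct chain of subtractions and a reordered sum of negated terms give the same result:

```c
#include <stdio.h>

/* A minimal sketch of A-B-C-D+E-F-G evaluated as a sum of (possibly
 * negated) terms. The sample values are illustrative. Because the
 * addition is commutative, the terms can be accumulated in any order
 * (ignoring overflow). */
int main(void) {
    int a = 100, b = 10, c = 20, d = 5, e = 30, f = 15, g = 8;

    /* direct left-to-right evaluation */
    int direct = a - b - c - d + e - f - g;

    /* the same terms, negated up front and summed in a shuffled order */
    int terms[] = { -g, -c, a, -f, -b, e, -d };
    int sum = 0;
    for (int i = 0; i < 7; i++)
        sum += terms[i];

    printf("%d %d\n", direct, sum);  /* both print 72 */
    return 0;
}
```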
How might we operate on \$47-15+23-11+16-22+12\$?
One approach might be \$47+23+16+12-(15+11+22)\$, but how much hardware might be needed to implement that?
Clearly a nifty solution is \$47+(12-11)+(23-22)+(16-15)=50\$, but so what? How much hardware or VHDL and programmable logic would it take to recognise that? When we know in advance the sequence of operations, then optimisations may make sense.
Using 1's complement representation, the leftmost, 'top' bit represents the sign, with '1' indicating a negative number. To negate a number, invert every one of its bits. A problem with 1's complement representation is that there are two zeros: a positive (all bits '0') and a negative (all bits '1') representation, which makes some operations more complex.
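As a small illustration, the two zeros can be seen by inverting an 8-bit pattern in C (C's own integers are 2's complement; the code below only inspects raw bit patterns):

```c
#include <stdio.h>
#include <stdint.h>

/* A sketch of 1's complement negation on 8-bit patterns: negation
 * inverts every bit, which yields two representations of zero. */
int main(void) {
    uint8_t pos_zero = 0x00;               /* +0: 0000 0000 */
    uint8_t neg_zero = (uint8_t)~pos_zero; /* -0: 1111 1111 */

    uint8_t five     = 0x05;               /* +5: 0000 0101 */
    uint8_t neg_five = (uint8_t)~five;     /* -5: 1111 1010 in 1's complement */

    printf("+0 = 0x%02X, -0 = 0x%02X\n", pos_zero, neg_zero);
    printf("+5 = 0x%02X, -5 = 0x%02X\n", five, neg_five);
    return 0;
}
```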
Most computing uses 2's complement representation of binary numbers. The top bit still indicates the sign ('0' for positive, and '1' for negative) as in 1's complement. However, the values are represented in a slightly different way.
The process is described in the Wikipedia article on Two's complement.
To subtract a number, convert to its two's complement form then add. This is a bit weird, but isn't too bad to implement.
This approach has several useful properties:
- there is only one representation of zero, and hence
- comparing a number against zero is simpler than 1's complement
- once a number is converted to its complement, the addition is commutative, so it can be done in either order
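A minimal C sketch of this convert-then-add subtraction, assuming 8-bit registers where arithmetic wraps modulo 256:

```c
#include <stdio.h>
#include <stdint.h>

/* A minimal sketch of subtraction by 2's complement addition:
 * a - b == a + (~b + 1) on a fixed-width register (8 bits here,
 * where arithmetic wraps modulo 256). */
int main(void) {
    uint8_t a = 47, b = 15;

    uint8_t neg_b = (uint8_t)(~b + 1);  /* invert all bits, then add 1 */
    uint8_t diff  = (uint8_t)(a + neg_b);

    printf("%u\n", diff);  /* prints 32, the same as 47 - 15 */
    return 0;
}
```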
Edit:
An efficient way to implement subtraction by 2's complement addition is:
- invert each bit of the subtrahend. Cost one NOT gate per bit. The invert operation has quite a low propagation delay, then
- add using an N-bit adder with a '1' carry-in at the bottom bit. The carry-in implements the second 2's complement step of +1.
So the total extra cost of subtraction is N NOT gates, and the lowest bit of the N-bit add is a full adder instead of a half adder (which is the classic way to teach this), hence an extra three gates.
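Here is a gate-level sketch in C of the scheme above, modelling an N-bit ripple-carry adder built from full adders; the width and helper names are illustrative:

```c
#include <stdio.h>
#include <stdint.h>

#define N 8  /* register width, chosen for illustration */

/* An N-bit ripple-carry adder built from full adders. Subtraction
 * feeds ~b in and sets carry-in = 1, which supplies the "+1" step
 * of the 2's complement. */
static uint8_t ripple_add(uint8_t a, uint8_t b, int carry_in) {
    uint8_t sum = 0;
    int carry = carry_in;
    for (int i = 0; i < N; i++) {
        int ai = (a >> i) & 1;
        int bi = (b >> i) & 1;
        int s  = ai ^ bi ^ carry;                          /* full-adder sum bit */
        carry  = (ai & bi) | (ai & carry) | (bi & carry);  /* carry out */
        sum   |= (uint8_t)(s << i);
    }
    return sum;
}

static uint8_t subtract(uint8_t a, uint8_t b) {
    return ripple_add(a, (uint8_t)~b, 1);  /* invert subtrahend + carry-in '1' */
}

int main(void) {
    printf("%u\n", subtract(47, 15));  /* prints 32 */
    return 0;
}
```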
A useful property of restricting operations to two operands at a time is that it works for all cases, is relatively easy to parse, and follows the 'everyday' arithmetic rules of precedence (multiply before add), associativity (left to right) and commutativity (A+B = B+A, but A-B ≠ B-A).
Signed integer divide is almost always done by taking absolute values, dividing, and then correcting the signs of quotient and remainder, or at least was in earlier CPUs. They may have fancier tricks nowadays. But the fact that dividing by a positive number always truncates toward zero, rather than toward minus infinity, suggests that this is how it's done. In addition to checking for divide by zero, though, it's important to test for dividing the maximum negative number by -1, because that would produce one more than the maximum positive number.
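A C sketch of that division scheme, with both guard checks; the helper name and trap behaviour are illustrative (real hardware raises an exception rather than calling abort):

```c
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* A sketch of signed division by the route described above: take
 * absolute values, divide unsigned, then fix the signs of quotient
 * and remainder. The guards cover divide-by-zero and INT_MIN / -1,
 * whose true quotient (one more than INT_MAX) is unrepresentable. */
static int signed_div(int a, int b, int *rem) {
    if (b == 0)
        abort();                        /* divide by zero */
    if (a == INT_MIN && b == -1)
        abort();                        /* quotient would overflow */

    /* absolute values, computed in unsigned arithmetic so that
     * even INT_MIN does not overflow */
    unsigned ua = (a < 0) ? 0u - (unsigned)a : (unsigned)a;
    unsigned ub = (b < 0) ? 0u - (unsigned)b : (unsigned)b;

    unsigned uq = ua / ub;
    unsigned ur = ua % ub;

    /* quotient is negative iff the operand signs differ; the
     * remainder takes the dividend's sign, so the result always
     * truncates toward zero */
    *rem = (a < 0) ? (int)(0u - ur) : (int)ur;
    return ((a < 0) != (b < 0)) ? (int)(0u - uq) : (int)uq;
}

int main(void) {
    int r;
    int q = signed_div(-7, 2, &r);
    printf("q=%d r=%d\n", q, r);  /* prints q=-3 r=-1: truncation toward zero */
    return 0;
}
```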
Signed integer multiplies, however, are never done by taking absolute values, multiplying, and then negating if necessary. The difference between a signed integer and an unsigned integer is simply that the msb has a negative weight if it is signed. An unsigned byte has bit weights of 128, 64, 32, 16, 8, 4, 2, and 1. A signed byte has bit weights of -128, 64, 32, 16, 8, 4, 2, and 1. So it's easy to design hardware that takes that into account, using a subtraction instead of an addition when multiplying by the leftmost bit.
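A small C sketch of that negative-weight view, computing a signed byte's value directly from its bit weights (the function name is illustrative):

```c
#include <stdio.h>
#include <stdint.h>

/* The negative-weight view: a signed byte's value is the usual
 * weighted sum of its bits, except bit 7 carries weight -128
 * instead of +128. */
static int signed_value(uint8_t byte) {
    int weights[8] = { 1, 2, 4, 8, 16, 32, 64, -128 };
    int value = 0;
    for (int i = 0; i < 8; i++)
        if ((byte >> i) & 1)
            value += weights[i];
    return value;
}

int main(void) {
    printf("%d\n", signed_value(0xFF));  /* prints -1   (-128 + 127) */
    printf("%d\n", signed_value(0x80));  /* prints -128 */
    printf("%d\n", signed_value(0x7F));  /* prints 127  */
    return 0;
}
```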
Another way of looking at it is that if a byte has a 1 in the msb, then signed value equals the unsigned value minus 256. This means that if you have an unsigned multiplier, you can do a signed multiply pretty easily. If one number has its sign bit set, you subtract the other number from the high half of the result; if the other number has its sign bit set, you subtract the first number from the high half of the result. And if you don't need the high half at all (if you know the numbers are small enough), then there is no difference between signed and unsigned multiply. (I used to do this a lot when I was programming the 6801 and 6809 decades ago.)
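Here is a C sketch of that high-half correction for an 8×8 → 16-bit multiply (the helper name is an illustration, not any particular CPU's instruction):

```c
#include <stdio.h>
#include <stdint.h>

/* Signed multiply built on an unsigned multiplier. If x's sign bit
 * is set, its signed value is (unsigned x) - 256, so the product
 * gains a -256*y term: subtract y from the high byte. Likewise for
 * y's sign bit. The 256*256 cross term vanishes modulo 2^16. */
static int16_t signed_mul(int8_t x, int8_t y) {
    uint8_t  ux = (uint8_t)x, uy = (uint8_t)y;
    uint16_t product = (uint16_t)ux * uy;   /* unsigned 8x8 multiply */

    uint8_t hi = (uint8_t)(product >> 8);
    if (ux & 0x80) hi -= uy;  /* x was negative: subtract y from high half */
    if (uy & 0x80) hi -= ux;  /* y was negative: subtract x from high half */

    return (int16_t)(((uint16_t)hi << 8) | (product & 0xFF));
}

int main(void) {
    printf("%d\n", signed_mul(-3, 5));     /* prints -15 */
    printf("%d\n", signed_mul(-3, -5));    /* prints 15 */
    printf("%d\n", signed_mul(100, -100)); /* prints -10000 */
    return 0;
}
```

Note that the low byte needs no correction at all, which is why signed and unsigned multiplies are identical when only the low half of the result is kept.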
BTW, standard floating point representations are always sign-magnitude, rather than two's complement, so they do arithmetic more the way humans do.
Best Answer
The most basic CMOS/TTL logic uses a voltage within a specified min/max range to represent either a "logic low" 0 or a "logic high" 1. These include the discrete logic gates like 7400 / 7402 / 7432 that were used in the 1970s (and are still sometimes used on solderless breadboards), as well as more modern higher-integration chips. The exact voltage ranges are listed in the device's datasheet, in the Electrical Characteristics table:
- The device that is driving the output is guaranteed to drive a logic low 0 as some voltage between GND and VOLmax, and to drive a logic high 1 as some voltage between VOHmin and VCC (the power supply rail). The gap between VOLmax and VOHmin is a dead band where the output is undefined -- this is what provides the noise immunity of digital signalling as compared to analog signals.
- The device that receives the input will interpret a voltage between GND and VILmax as meaning a logic low 0, and interpret a voltage between VIHmin and VCC as meaning a logic high 1. Any input between VILmax and VIHmin is not valid. And any input below GND or above VCC may violate the Absolute Maximum Ratings (i.e. permanently damage or degrade the device).

For CMOS, the VOH/VOL and VIH/VIL thresholds are usually a percentage of the power supply, like 30% VCC / 70% VCC. For TTL, the thresholds are absolute, with 2.4V the usual VOHmin voltage.
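As a rough illustration (not taken from any datasheet), here is a C sketch of how a receiver classifies an input voltage, assuming hypothetical 30% VCC / 70% VCC CMOS thresholds:

```c
#include <stdio.h>

/* Hypothetical thresholds for a 5V CMOS part (VILmax = 30% VCC,
 * VIHmin = 70% VCC); real values come from the device's datasheet. */
#define VCC     5.0
#define VIL_MAX (0.30 * VCC)  /* at or below this: logic low  */
#define VIH_MIN (0.70 * VCC)  /* at or above this: logic high */

static const char *interpret(double v_in) {
    if (v_in < 0.0 || v_in > VCC) return "outside absolute maximum ratings";
    if (v_in <= VIL_MAX)          return "logic low (0)";
    if (v_in >= VIH_MIN)          return "logic high (1)";
    return "undefined (dead band)";
}

int main(void) {
    double samples[] = { 0.2, 1.0, 2.5, 4.0, 5.5 };
    for (int i = 0; i < 5; i++)
        printf("%.1f V -> %s\n", samples[i], interpret(samples[i]));
    return 0;
}
```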
Timing is a separate concern. If the logic is just implementing some Boolean equation ("glue logic"), the output signal simply follows the input after some specified propagation delay time. If the logic implements a state machine or a CPU, there will be a clock signal that determines the system's timing.

You also mentioned the Ethernet and USB communications protocols; these are a lot more complicated. It's much harder to even frame the question in terms of sending a single binary bit, since a lot more information is required (such as host IP address, frame number, etc.). These build on the basic idea I described above, but add many more layers that are specific to each standard.
Ethernet has several layers of communications protocols; the datalink layer is different even for different types of Ethernet -- 10Mbit and 100Mbit are not just different speeds but different signalling protocols. This is described in IEEE standard 802.3.

The USB protocols are described in the USB Standard, as well as on Jan Axelson's USB Complete website.
If you've never read a standards specification document before, I'd recommend starting with USB -- it's comparatively a bit simpler than Ethernet.