# Binary Subtraction


I'm a CS student and I just started my course on basic digital electronics. I'm having some trouble figuring out how binary subtraction works when you have more than one subtrahend, and you need to use more than one "borrow-in". For example, what would be the correct way to solve this:

```
  101011
- 001011
- 001111
--------
```

Currently, my solution is: sum both subtrahends, which reduces the problem to the most basic case of one minuend and one subtrahend. However, I was wondering if there are other solutions.
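The approach described above (summing the subtrahends first, then doing a single subtraction) can be sketched in Python, using the 6-bit values from the example:

```python
# Sketch of the "sum the subtrahends first" approach.
# Values are 6 bits wide, as in the worked example above.
minuend = 0b101011        # 43
subtrahend_a = 0b001011   # 11
subtrahend_b = 0b001111   # 15

combined = subtrahend_a + subtrahend_b   # 11 + 15 = 26
result = minuend - combined              # 43 - 26 = 17

print(format(result, "06b"))  # 010001
```

This confirms the questioner's method gives the expected result; the answer below explains how hardware usually avoids multi-operand subtraction altogether.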

Binary subtraction is usually implemented using binary addition and negation.

It is unusual to do three or more operations simultaneously (other than multiply-add, aka multiply-accumulate, or MAC).

So `A-B-C` is usually implemented as `A+(-B)+(-C)`, where the `-` is unary minus, or negation.

This scales to many operands and many subtrahends.

`A-B-C-D+E-F-G` becomes `A+(-B)+(-C)+(-D)+E+(-F)+(-G)`. The addition ('+') operation is commutative, so this can be evaluated in any order (ignoring overflow), and with arbitrarily many operands.
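A minimal sketch of this rewriting, using Python to stand in for the hardware; an 8-bit width and the sample values are assumptions for illustration only:

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1   # keep results to WIDTH bits

def neg(x):
    """Two's complement negation within WIDTH bits: invert and add one."""
    return (~x + 1) & MASK

def add(x, y):
    """N-bit addition, discarding any carry out of the top bit."""
    return (x + y) & MASK

# A - B - C - D + E - F - G rewritten as a chain of additions
A, B, C, D, E, F, G = 100, 20, 5, 3, 40, 7, 9
result = add(add(add(add(add(add(A, neg(B)), neg(C)), neg(D)), E), neg(F)), neg(G))
print(result)  # 96, matching 100 - 20 - 5 - 3 + 40 - 7 - 9
```

Because each `add` is commutative, the negated operands could be summed in any order and the result (modulo 2^WIDTH) would be the same.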

How might we operate on \\$47-15+23-11+16-22+12\\$?
One approach might be \\$47+23+16+12-(15+11+22)\\$, but how much hardware might be needed to implement that?

Clearly a nifty solution is \\$47+(12-11)+(23-22)+(16-15)=50\\$, but so what? How much hardware or VHDL and programmable logic would it take to recognise that? When we know in advance the sequence of operations, then optimisations may make sense.

Using 1's complement representation, the leftmost, 'top' bit represents the sign, with '1' indicating a negative number. To negate a number, invert every bit (each '0' becomes '1' and each '1' becomes '0'). A problem with 1's complement representation is there are two zeros: a positive (all bits '0') and a negative (all bits '1') representation, which makes some operations more complex.
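In 1's complement, negation inverts every bit, which is easy to see in a short Python sketch (a 4-bit width is assumed purely for illustration):

```python
WIDTH = 4
MASK = (1 << WIDTH) - 1   # keep results to WIDTH bits

def ones_complement_neg(x):
    """1's complement negation: invert every bit."""
    return ~x & MASK

print(format(ones_complement_neg(0b0101), "04b"))  # 1010, i.e. -5

# The double-zero problem: both patterns represent zero.
print(format(0b0000, "04b"))                       # 0000, +0
print(format(ones_complement_neg(0b0000), "04b"))  # 1111, -0
```

The two distinct zero patterns are exactly why a hardware comparison against zero needs extra logic in 1's complement.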

Most computing uses 2's complement representation of binary numbers. The top bit still indicates the sign ('0' for positive, and '1' for negative) as in 1's complement. However, the values are represented in a slightly different way.

The process is described in the Wikipedia article on Two's complement.

To subtract a number, convert to its two's complement form then add. This is a bit weird, but isn't too bad to implement.
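As a sketch of "convert to two's complement, then add" (an 8-bit width is assumed for illustration):

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1   # keep results to WIDTH bits

def twos_complement(x):
    """Two's complement: invert every bit, then add one."""
    return (~x + 1) & MASK

def subtract(a, b):
    """a - b implemented as a + (-b); the final carry out is discarded."""
    return (a + twos_complement(b)) & MASK

print(subtract(43, 26))                          # 17
print(format(subtract(0b101011, 0b011010), "08b"))  # 00010001
```

Discarding the carry out of the top bit is what makes the "weird" trick work: arithmetic is modulo 2^WIDTH, so adding the complement is exactly subtraction.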

This approach has several useful properties:

• there is only one representation of zero, and hence
• comparing a number against zero is simpler than 1's complement
• once the number is converted to its complement, the addition is commutative, so it can be done in either order

Edit:
An efficient way to implement subtraction by calculating the 2's complement and adding is:

1. invert each bit of the subtrahend. Cost: one NOT gate per bit. The invert operation has quite a low propagation delay; then
2. add using an N-bit adder with a '1' carry-in at the bottom bit. The carry-in implements the second 2's complement step of +1.

So the total extra cost of subtraction is N NOT gates, and the lowest bit of the N-bit add is a full adder instead of a half adder (which is the classic way to teach this), and hence an extra three gates.
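The invert-and-carry-in trick above can be simulated at the bit level. This sketch models a ripple-carry subtractor in Python, with bit vectors stored least-significant-bit first (the function name and bit ordering are assumptions for illustration):

```python
def ripple_subtract(a_bits, b_bits):
    """Subtract two equal-length bit vectors (LSB first):
    invert each subtrahend bit (the N NOT gates), then
    ripple-add with the carry-in forced to 1 (the '+1' step)."""
    carry = 1                      # carry-in implements the '+1'
    out = []
    for a, b in zip(a_bits, b_bits):
        nb = 1 - b                 # NOT gate on the subtrahend bit
        s = a ^ nb ^ carry         # full-adder sum
        carry = (a & nb) | (a & carry) | (nb & carry)  # full-adder carry
        out.append(s)
    return out                     # final carry out is discarded

# 43 - 26 = 17, as LSB-first bit vectors
a = [1, 1, 0, 1, 0, 1]            # 101011 = 43
b = [0, 1, 0, 1, 1, 0]            # 011010 = 26
print(ripple_subtract(a, b))      # [1, 0, 0, 0, 1, 0] = 010001 = 17
```

Note the bottom stage is a full adder (it has a carry-in of 1), which is exactly the extra cost described above compared with an adder whose bottom stage is a half adder.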

A useful property of restricting operations to two operands at a time is that it works for all cases, and it is relatively easy to parse and to apply the 'everyday' arithmetic rules of precedence (multiply before add), associativity (left to right) and commutativity (A+B = B+A, but A-B ≠ B-A).