Electronics – How does a processor negate an integer?

alu, digital-logic

I am aware of three ways to negate an integer using 2's complement representation.

  1. The standard "invert, then add 1" which is taught in most textbooks.

  2. Scan from the least significant bit, copying bits as you go. When you reach the first "1", copy it, then flip the remaining bits.

  3. Subtract the value from \$2^n\$, where \$n\$ is the number of bits.

By my thinking, the first technique requires two passes through the bits, although the inversion can be done in parallel. However, the silicon to do both steps may already be present (an invert instruction and an increment instruction), so this way may require the least amount of additional silicon.
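The "invert, then add 1" technique can be sketched in a few lines; this is a software model of the bit manipulation, not the silicon itself (the function name and the 8-bit default width are my choices for illustration):

```python
def negate_invert_add(x, n=8):
    """Two's-complement negation: invert all n bits, then add 1, modulo 2^n."""
    mask = (1 << n) - 1
    return ((x ^ mask) + 1) & mask   # XOR with all-ones inverts the bits

# Example: negate_invert_add(5) gives 251, the unsigned encoding of -5 in 8 bits.
```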

The second way needs only one pass through the bits.
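A sketch of the single-pass scan, again as a software model (the helper name is mine): bits are copied up to and including the first 1 seen from the LSB, and every bit above that is flipped.

```python
def negate_scan(x, n=8):
    """One pass from the LSB: copy bits through the first 1, flip the rest."""
    out = 0
    seen_one = False
    for i in range(n):
        bit = (x >> i) & 1
        if seen_one:
            bit ^= 1          # past the first 1: flip
        elif bit:
            seen_one = True   # first 1: copy it unchanged
        out |= bit << i
    return out
```

For 5 (0b00000101) the scan copies the low 1, then flips everything above it, yielding 0b11111011 = 251, the same result as invert-and-add.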

The third way may work if there is already a subtraction unit. However, I think that most processors these days subtract by adding the negative, so this may be impractical to implement.
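The third technique is a one-liner in a software model (function name is mine); the mask implements the wraparound at \$2^n\$:

```python
def negate_subtract(x, n=8):
    """Negate by computing 2^n - x, reduced modulo 2^n."""
    return ((1 << n) - x) & ((1 << n) - 1)
```

All three techniques agree for every n-bit input, which is easy to check exhaustively for small n.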

Which of these techniques is actually used by a microprocessor to negate a value?

Best Answer

Option 1 is how I've seen it done and how I've designed it for integer negation.

The "add 1" takes one cycle and the bit inversion is absorbed in the same cycle.

For subtraction, the same adder-plus-inverter path is reused: the "add 1" is applied through the carry-in of the LSB, so it costs no extra cycles.
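That trick can be modelled directly: a − b is computed as a + (~b) + 1, where the +1 enters through the adder's carry-in rather than as a separate increment (the function name and 8-bit width are illustrative):

```python
def subtract_via_adder(a, b, n=8):
    """a - b computed as a + ~b + carry_in, all in one adder pass (mod 2^n)."""
    mask = (1 << n) - 1
    carry_in = 1                      # the "add 1" of two's-complement negation
    return (a + (b ^ mask) + carry_in) & mask

# Example: subtract_via_adder(7, 5) == 2
```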

[Figure: adder/subtractor circuit in which the operand inverters and the LSB carry-in implement two's-complement negation]

From https://cs.wellesley.edu/~cs240/f16/assignments/circuits/circuits.html