Why does Intel's Haswell chip allow floating point multiplication to be twice as fast as addition?

Tags: alu, computer-architecture, cpu, floating-point, intel

I was reading this very interesting question on Stack Overflow:

Is integer multiplication really done at the same speed as addition on a modern CPU?

One of the comments said:

"It's worth nothing that on Haswell, the FP multiply throughput is
double that of FP add. That's because both ports 0 and 1 can be used
for multiply, but only port 1 can be used for addition. That said, you
can cheat with fused-multiply adds since both ports can do them."

Why would they allow twice as many simultaneous multiplications as additions?
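For context, the comment's throughput claim can be sanity-checked from user code. Below is a minimal microbenchmark sketch, assuming a Haswell-class CPU and a compiler that neither auto-vectorizes nor reassociates the floating point loops (e.g. "gcc -O2 -fno-tree-vectorize", without "-ffast-math"); the loop bodies, accumulator count, and iteration count are illustrative choices, not from the original posts. Ten independent accumulators are enough to cover the multiply unit's 5-cycle latency across two ports, so both loops are throughput-bound, and the multiply loop should finish in roughly half the time of the add loop.

    #include <stdio.h>
    #include <time.h>

    #define ITERS 100000000L

    /* Ten independent dependency chains: enough to hide Haswell's
     * 5-cycle FP multiply latency across two ports, so the loops are
     * limited by port throughput, not by the latency of one chain. */
    static double run_adds(void)
    {
        double a0 = 0, a1 = 0, a2 = 0, a3 = 0, a4 = 0,
               a5 = 0, a6 = 0, a7 = 0, a8 = 0, a9 = 0;
        for (long i = 0; i < ITERS; i++) {
            a0 += 1e-9; a1 += 1e-9; a2 += 1e-9; a3 += 1e-9; a4 += 1e-9;
            a5 += 1e-9; a6 += 1e-9; a7 += 1e-9; a8 += 1e-9; a9 += 1e-9;
        }
        return a0 + a1 + a2 + a3 + a4 + a5 + a6 + a7 + a8 + a9;
    }

    static double run_muls(void)
    {
        double a0 = 1, a1 = 1, a2 = 1, a3 = 1, a4 = 1,
               a5 = 1, a6 = 1, a7 = 1, a8 = 1, a9 = 1;
        const double k = 1.0000000001;  /* close to 1: no overflow */
        for (long i = 0; i < ITERS; i++) {
            a0 *= k; a1 *= k; a2 *= k; a3 *= k; a4 *= k;
            a5 *= k; a6 *= k; a7 *= k; a8 *= k; a9 *= k;
        }
        return a0 + a1 + a2 + a3 + a4 + a5 + a6 + a7 + a8 + a9;
    }

    int main(void)
    {
        clock_t t0 = clock();
        double s = run_adds();
        clock_t t1 = clock();
        double p = run_muls();
        clock_t t2 = clock();

        /* Print the sums so the compiler cannot discard the loops. */
        printf("adds: %.2fs   muls: %.2fs   (sinks: %g %g)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, s, p);
        return 0;
    }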

Best Answer

This possibly answers the title of the question, if not the body:

Floating point addition requires aligning the two mantissas before adding them (depending on the difference between the two exponents), which can require a large, variable shift ahead of the adder. The result of the mantissa addition may then need to be renormalized, potentially requiring another large, variable shift to properly format the floating point result. These two mantissa barrel shifters can therefore require more gate delays, greater wire delays, or extra cycles, exceeding the delay of a well-compacted carry-save-adder-tree multiplier front end.
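To make those two shift stages concrete, here is a minimal software sketch of the same algorithm. This is not Intel's hardware, just an illustration: it handles only positive, normal doubles and truncates instead of rounding, so the last bit may differ from the hardware result. The alignment shift before the integer add and the normalization step after it are the two barrel-shifter stages described above.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Add two positive, normal IEEE-754 doubles by hand.  A sketch only:
     * no signs, zeros, subnormals, infinities, or round-to-nearest. */
    static double fp_add_positive(double a, double b)
    {
        uint64_t ua, ub;
        memcpy(&ua, &a, sizeof ua);
        memcpy(&ub, &b, sizeof ub);

        /* Unpack exponents and mantissas, restoring the implicit leading 1. */
        int64_t  ea = (int64_t)(ua >> 52) & 0x7FF;
        int64_t  eb = (int64_t)(ub >> 52) & 0x7FF;
        uint64_t ma = (ua & 0xFFFFFFFFFFFFFULL) | (1ULL << 52);
        uint64_t mb = (ub & 0xFFFFFFFFFFFFFULL) | (1ULL << 52);

        if (ea < eb) {  /* make (ea, ma) the larger-exponent operand */
            int64_t  te = ea; ea = eb; eb = te;
            uint64_t tm = ma; ma = mb; mb = tm;
        }

        /* Stage 1: alignment shift.  Scale the smaller mantissa down so
         * both numbers are in the same units (a variable-distance shift). */
        int64_t d = ea - eb;
        mb = (d < 64) ? (mb >> d) : 0;

        /* Stage 2: the plain integer add of the raw mantissa bits. */
        uint64_t m = ma + mb;

        /* Stage 3: normalization.  Put the result back into 1.xxx form;
         * for positive inputs this is at most a one-bit right shift. */
        int64_t e = ea;
        if (m >> 53) { m >>= 1; e += 1; }  /* the sum carried out one bit */

        uint64_t ur = ((uint64_t)e << 52) | (m & 0xFFFFFFFFFFFFFULL);
        double r;
        memcpy(&r, &ur, sizeof r);
        return r;
    }

    int main(void)
    {
        printf("sketch: %.17g\nhw:     %.17g\n",
               fp_add_positive(0.002, 2000.0), 0.002 + 2000.0);
        return 0;
    }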

Added for the OP: Note that adding the lengths of 2 millimeters and 2 kilometers does not give 4 of either unit. That's because one measurement must first be converted to the other's scale or unit of representation before the addition, and that conversion is essentially a multiplication by some power of 10. The same thing usually needs to happen during floating point addition, because floating point numbers are a form of variably scaled integers (i.e. each number carries a unit or scale factor, its exponent). So you may need to scale one of the numbers by a power of 2 before adding raw mantissa bits, in order to have both represent the same units or scale. This scaling is essentially a simple form of multiplication by a power of 2. Thus, floating point addition requires multiplication (which, being by a power of 2, can be done with a variable bit shift or barrel shifter; this can require relatively long wires in relation to the transistor sizes, which can be relatively slow in deep sub-micron lithography circuits). If the two numbers mostly cancel (because one is nearly the negative of the other), then the result of the addition may need to be rescaled as well to suitably format the result. So addition can be slow because the nature of the number format (IEEE floating point) requires two multiplication-like steps (a pre-shift and a post-shift) surrounding the binary addition of a raw, fixed (finite) number of mantissa bits representing equivalent units or scale.
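That post-add rescaling is easy to observe from software. A small sketch (standard C, nothing Haswell-specific): when two nearly equal numbers are subtracted, the raw mantissa difference has many leading zeros, and the hardware must left-shift the result by a data-dependent distance to renormalize it. frexp() exposes the resulting drop in exponent.

    /* build: cc demo.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double a = 1.0;
        double b = 1.0 - 0x1p-40;  /* differs from a only 40 bits down the mantissa */
        int ea, eb, ed;

        frexp(a, &ea);
        frexp(b, &eb);
        frexp(a - b, &ed);  /* the subtraction mostly cancels */

        /* Prints exp(a)=1 exp(b)=0 exp(a-b)=-39: the result had to be
         * renormalized by a shift of roughly 40 bit positions. */
        printf("exp(a)=%d exp(b)=%d exp(a-b)=%d\n", ea, eb, ed);
        return 0;
    }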

Added #2: Also, many benchmarks weight FMACs (fused multiply-accumulates) more heavily than bare adds. In a fused MAC, the alignment (shift) of the addend can often be done mostly in parallel with the multiply, and the mantissa add can often be included in the CSA tree before the final carry propagation.
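The "cheat" from the quoted comment rides on this unit. Here is a minimal sketch using the standard C fma() function, which computes a*b + c with a single rounding: the difference from a separately rounded multiply-then-add shows up whenever the exact product carries bits below double precision. (On Haswell, FMA issues on both ports 0 and 1, so an add expressed as fma(x, 1.0, y) gets the multiplier's two-per-cycle throughput, at the cost of the FMA's longer latency.)

    /* build: cc demo.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double a = 1.0 + 0x1p-30;

        /* Two rounded steps: the square rounds away its low bits first... */
        double two_step = a * a - 1.0;

        /* ...while fma() keeps the exact product internally, preserving
         * the 2^-60 term that the two-step version loses. */
        double fused = fma(a, a, -1.0);

        printf("two-step: %.17g\nfused:    %.17g\n", two_step, fused);
        return 0;
    }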