Math and Floating-Point – Importance of Negative Zero

floating point, math

I'm confused about why we care about different representations for positive and negative zero.

I vaguely recall reading claims that having a negative zero representation is extremely important in programming that involves complex numbers. I've never had the opportunity to write code involving complex numbers, so I'm a little baffled about why this would be the case.

Wikipedia's article on the concept isn't especially helpful; as far as I can tell, it only makes vague claims about signed zero making certain mathematical operations simpler in floating point. This answer lists a couple of functions that behave differently, and perhaps something could be inferred from the examples if you're familiar with how they might be used. (Although the particular example of the complex square roots looks flat-out wrong, since the two numbers are mathematically equivalent, unless I have a misunderstanding.) But I have been unable to find a clear statement of the kind of trouble you would get into if it weren't there. The more mathematical resources I've found state that there is no distinction between the two from a mathematical perspective, and the Wikipedia article seems to suggest that signed zero is rarely seen outside of computing, apart from describing limits.
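For what it's worth, here is the kind of behavioural difference I think is being described, written up as a quick C check of my own (this assumes IEEE 754 semantics, and I picked atan2 and division more or less arbitrarily, so I'm not sure how representative they are):

#include <math.h>
#include <stdio.h>

int main(void) {
    double pz = 0.0, nz = -0.0;

    printf("%d\n", pz == nz);                  /* 1: the two zeros compare equal */
    printf("%g %g\n", atan2(pz, -1.0),
                      atan2(nz, -1.0));        /* 3.14159 and -3.14159 */
    printf("%g %g\n", 1.0 / pz, 1.0 / nz);     /* inf and -inf */
    return 0;
}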

So why is a negative zero valuable in computing? I'm sure I'm just missing something.

Best Answer

You need to keep in mind that in FPU arithmetic, 0 doesn't necessarily mean exactly zero; it can also stand for a value too small to be represented in the given data type, e.g.

a = -1 / 1.0e50

a is too small to be represented by a float (32 bit), whose smallest positive value is roughly 1.4e-45, so it is "rounded" to -0.
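
If it helps, here is a minimal C sketch of that underflow (assuming IEEE 754 single precision, whose smallest positive subnormal is about 1.4e-45):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* -1e-50 is fine as a double, but it is below the smallest value a
       32-bit float can hold, so the conversion underflows to -0.0f. */
    float a = (float)(-1.0 / 1.0e50);

    printf("a = %g, signbit(a) = %d\n", a, signbit(a) != 0);
    /* prints: a = -0, signbit(a) = 1 (the sign survived the underflow) */
    return 0;
}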

Now, let's say our computation continues:

b = 1 / a

Because a is -0 rather than a tiny negative number, the division results in -infinity, which is quite far from the correct answer of -1.0e50.

Now let's compute b as if there were no -0, so that a had been rounded to +0 instead:

b = 1 / +0
b = +infinity

The result is wrong again because of rounding, but now it is "more wrong": not only is the magnitude off, but, more importantly, the sign is wrong (the computation gives +infinity, while the correct result is -1.0e50).
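
Both branches can be put side by side in a short C sketch (again assuming IEEE 754 defaults, where dividing a finite number by zero yields an infinity instead of trapping):

#include <stdio.h>

int main(void) {
    float a = (float)(-1.0 / 1.0e50);      /* underflows to -0.0f, keeping the sign */

    printf("with -0: %g\n", 1.0f / a);     /* -inf: magnitude is lost, sign is right */
    printf("with +0: %g\n", 1.0f / 0.0f);  /* +inf: what we would get if the sign of
                                              the underflowed zero were thrown away */
    return 0;
}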

You could still say that it doesn't really matter, since both results are wrong. The important thing is that there are a lot of numerical applications where the most important part of the result is its sign. For example, when deciding whether to turn left or right at a crossroads using some machine learning algorithm, you can interpret a positive value as "turn left" and a negative value as "turn right"; the actual magnitude of the value is just a confidence coefficient.
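
As a toy sketch of that kind of sign-driven decision (the steering rule here is hypothetical, and signbit is used so the direction can be read off even from a signed zero or an infinity):

#include <math.h>
#include <stdio.h>

/* Hypothetical decision rule from the paragraph above: a positive score
   means turn left, a negative score means turn right; the magnitude is
   only a confidence. signbit() also sees the sign of -0 and -infinity. */
static const char *turn(float score) {
    return signbit(score) ? "turn right" : "turn left";
}

int main(void) {
    float a = (float)(-1.0 / 1.0e50);   /* underflows to -0.0f */

    printf("%s\n", turn(1.0f / a));     /* turn right: -infinity kept the correct sign */
    printf("%s\n", turn(1.0f / 0.0f));  /* turn left: with +0 we would steer the wrong way */
    return 0;
}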
