What causes floating point rounding errors

floating point, numeric precision

I am aware that floating point arithmetic has precision problems. I usually overcome them by switching to a fixed decimal representation of the number, or simply by neglecting the error.

However, I do not know what causes this inaccuracy. Why do floating point numbers have so many rounding issues?

Best Answer

This is because some fractions need a very large (or even infinite) number of digits to be expressed without rounding. This holds true for decimal notation just as much as for binary or any other base. If you limited the number of decimal places available for your calculations (and avoided calculating in fraction notation), you would have to round even as simple an expression as 1/3 + 1/3. Instead of writing 2/3 as the result, you would have to write 0.33333 + 0.33333 = 0.66666, which is not identical to 2/3.
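The same effect shows up in binary: 0.1 has no finite binary expansion, just as 1/3 has no finite decimal one, so the stored value is already rounded before any arithmetic happens. A short Python sketch of both cases:

```python
# 1/10 cannot be represented exactly in binary floating point,
# so 0.1 + 0.2 does not equal the (also inexact) stored 0.3.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False

# The decimal analogy from the text: truncating 1/3 to five places
# and adding gives 0.66666, which is not 2/3.
third = 0.33333
print(third + third)
```

The gap is tiny (on the order of 1e-17 per operation here), which is why comparing floats with a tolerance, rather than `==`, is the usual advice.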

In the case of a computer, the number of digits is limited by the technical nature of its memory and CPU registers. The binary notation used internally adds further difficulties: many numbers with a finite decimal expansion, such as 0.1, have an infinite binary one. Computers normally cannot express numbers in fraction notation, though some programming languages add this ability, which allows these problems to be avoided to a certain degree.
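As one example of such a language facility, Python's standard-library `fractions` module stores numbers as exact integer ratios, so no rounding occurs as long as every value involved is rational:

```python
from fractions import Fraction

# Exact rational arithmetic: 1/3 + 1/3 really is 2/3, no rounding.
result = Fraction(1, 3) + Fraction(1, 3)
print(result)                      # 2/3
print(result == Fraction(2, 3))    # True

# The trade-off: denominators can grow without bound, making this
# slower than hardware floats, and it cannot represent irrational
# results such as sqrt(2) exactly.
```

This sidesteps the rounding problem for rational arithmetic, but at the cost of speed and of only covering rational numbers, which is why floats remain the default.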

What Every Computer Scientist Should Know About Floating-Point Arithmetic
