What can be done to programming languages to avoid floating point pitfalls

language-design

Misunderstanding of floating point arithmetic and its shortcomings is a major cause of surprise and confusion in programming (consider the number of questions on Stack Overflow about "numbers not adding correctly"). Since many programmers have yet to grasp its implications, it has the potential to introduce many subtle bugs (especially into financial software). What can programming languages do to avoid its pitfalls for those who are unfamiliar with the concepts, while still offering its speed, when accuracy is not critical, to those who do understand them?
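For concreteness, this is the kind of surprise being referred to, shown here in Python (any language using IEEE 754 binary doubles behaves the same way):

```python
# 0.1 and 0.2 have no exact binary representation, so their sum is not exactly 0.3.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```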

Best Answer

You say "especially for financial software", which brings up one of my pet peeves: money is not a float, it's an int.

Sure, it looks like a float. It has a decimal point in there. But that's just because you're used to units that confuse the issue. Money always comes in integer quantities. In America, it's cents. (In certain contexts I think it can be mills, but ignore that for now.)

So when you say $1.23, that's really 123 cents. Always, always, always do your math in those terms, and you will be fine.
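A rough sketch of the difference, in Python with made-up prices:

```python
# Summing prices as binary floats accumulates representation error.
prices_dollars = [0.10, 0.10, 0.10]          # hypothetical prices
print(sum(prices_dollars))                   # 0.30000000000000004

# Summing the same prices as integer cents is exact.
prices_cents = [10, 10, 10]
total = sum(prices_cents)                    # 30
print(f"${total // 100}.{total % 100:02d}")  # $0.30
```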

Answering the question directly, programming languages should just include a Money type as a reasonable primitive.
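Such a type might behave roughly like the following sketch (a hypothetical Money class that stores an integer count of the minor unit; real designs would also handle currencies and division):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """Hypothetical money value: an integer count of the minor unit (e.g. cents)."""
    cents: int

    def __add__(self, other: "Money") -> "Money":
        return Money(self.cents + other.cents)

    def __mul__(self, quantity: int) -> "Money":
        # Multiplying by a whole quantity stays exact.
        return Money(self.cents * quantity)

    def __str__(self) -> str:
        sign = "-" if self.cents < 0 else ""
        return f"{sign}${abs(self.cents) // 100}.{abs(self.cents) % 100:02d}"

price = Money(123)    # $1.23
print(price + price)  # $2.46
print(price * 3)      # $3.69
```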

Update

Ok, I should have only said "always" twice, rather than three times. Money is indeed always an int; those who think otherwise are welcome to try sending me 0.3 cents and showing me the result on their bank statement. But as commenters point out, there are rare exceptions when you need to do floating point math on money-like numbers, e.g., certain kinds of prices or interest calculations. Even then, those should be treated as exceptions. Money comes in and goes out as integer quantities, so the closer your system hews to that, the saner it will be.
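One way to handle such an exception, sketched below with a made-up interest rate and rounding rule: do the intermediate math with a decimal type, then round back to integer cents at the boundary.

```python
from decimal import Decimal, ROUND_HALF_EVEN

balance_cents = 123_456          # $1,234.56, held as an integer
annual_rate = Decimal("0.0275")  # hypothetical 2.75% annual interest rate

# The intermediate calculation may produce fractional cents...
interest = Decimal(balance_cents) * annual_rate / 12

# ...but the amount actually credited goes back out as integer cents.
credited_cents = int(interest.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))
print(credited_cents)            # 283, i.e. $2.83
```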