# SQL Server – Use float or decimal for accounting application dollar amount

Tags: accounting, currency, database-design, sql-server, vb.net

We are rewriting our legacy accounting system in VB.NET and SQL Server. We brought in a new team of .NET/SQL programmers to do the rewrite. Most of the system is already complete, with the dollar amounts implemented as floats. The language I programmed the legacy system in did not have a float type, so I probably would have used a decimal.

Should the float or decimal data type be used for dollar amounts?

What are some of the pros and cons for either?

One con mentioned in our daily scrum was that you have to be careful when a calculation returns a result with more than two decimal places. It sounds like you have to round the amount to two decimal places.
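
For illustration, a minimal VB.NET sketch (the amounts and the tax rate are made up). Note that .NET's `Math.Round` defaults to banker's rounding (round half to even), so an accounting system has to choose its midpoint rule deliberately:

```
' Hypothetical figures: a 7.5% tax on $19.99 yields more than two decimal places.
Dim amount As Decimal = 19.99D
Dim taxRate As Decimal = 0.075D

Dim rawTax As Decimal = amount * taxRate   ' 1.49925 -- five decimal places
Dim tax As Decimal = Math.Round(rawTax, 2) ' 1.50

' Math.Round defaults to round-half-to-even (banker's rounding);
' pass MidpointRounding.AwayFromZero for the usual 0..4 down / 5..9 up.
Dim taxAwayFromZero As Decimal = Math.Round(rawTax, 2, MidpointRounding.AwayFromZero)
```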

Another con is that every displayed and printed amount needs a format statement that shows two decimal places. I noticed a few times where this was not done and the amounts did not look correct (e.g. 10.2 or 10.2546).
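
As a sketch, the standard .NET numeric format strings cover this; `F2` and `N2` both force two decimal places, and `C` applies the current culture's currency format (the variable here is hypothetical):

```
Dim amount As Decimal = 10.2546D

Console.WriteLine(amount.ToString("F2")) ' 10.25
Console.WriteLine(amount.ToString("N2")) ' 10.25 (adds group separators on larger values)
Console.WriteLine(amount.ToString("C"))  ' $10.25 under an en-US culture
```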

A pro for float is that it takes up eight bytes on disk, whereas a decimal(12,2) takes up nine bytes.

> Should Float or Decimal data type be used for dollar amounts?

The answer is easy. Never floats. NEVER!

Floats have, according to IEEE 754, always been binary; only the newer IEEE 754-2008 standard (formerly known as IEEE 754R) defined decimal formats. Many decimal fractions have no exact binary representation.

Any binary fraction can be written as `m/2^n` (`m`, `n` positive integers), and any decimal fraction as `m/(2^n * 5^n)`. Since powers of two lack the prime factor 5, every binary fraction can be represented exactly as a decimal, but not vice versa.

```
0.3 = 3/(2^1 * 5^1) = 3/10

0.3 = [0.25/0.5] [0.25/0.375] [0.25/0.3125] [0.28125/0.3125]
        1/4          1/8           1/16           1/32
```

So you end up with a number either higher or lower than the given decimal number. Always.
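
You can see this from VB.NET with the classic 0.1 + 0.2 case; a minimal sketch, where `G17` requests enough digits to expose the Double's actual stored value:

```
Dim f As Double = 0.1 + 0.2
Console.WriteLine(f.ToString("G17")) ' 0.30000000000000004
Console.WriteLine(f = 0.3)           ' False

Dim d As Decimal = 0.1D + 0.2D
Console.WriteLine(d)                 ' 0.3
Console.WriteLine(d = 0.3D)          ' True
```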

Why does that matter? Rounding.

Normal rounding means 0..4 down, 5..9 up. So it does matter whether the result is 0.049999999999... or 0.0500000000.... You may know that it means 5 cents, but the computer does not know that: it rounds 0.0499... down (wrong) and 0.0500... up (right).

Given that the results of floating-point computations always contain small error terms, which way that decision goes is pure luck. And it gets hopeless if you want decimal round-half-to-even handling with binary numbers.
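
A minimal sketch of a midpoint going wrong (2.675 is just an illustrative value; the nearest Double to it lies slightly below the true midpoint):

```
' The nearest Double to 2.675 is about 2.67499999999999982,
' i.e. just below the midpoint between 2.67 and 2.68.
Console.WriteLine(Math.Round(2.675, 2))  ' 2.67 -- rounded down, wrong for money
Console.WriteLine(Math.Round(2.675D, 2)) ' 2.68 -- Decimal sees the exact midpoint
```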

Unconvinced? You insist that in your accounting system everything is perfectly fine? Assets and liabilities balance? OK, then take the formatted number printed for each entry, parse it, and sum the values with an independent decimal system!

Compare that with the formatted sum. Oops, something is wrong there, isn't it?
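
As a sketch of such a check, assume the formatted entries are available as strings (the list here is made up): parse each one and keep two running totals, one in Double and one in Decimal:

```
' Hypothetical formatted entries taken from printed output.
Dim entries As String() = {"0.10", "0.10", "0.10"} ' imagine a ledger with millions of these

Dim floatTotal As Double = 0
Dim decimalTotal As Decimal = 0D

For Each s As String In entries
    floatTotal += Double.Parse(s, System.Globalization.CultureInfo.InvariantCulture)
    decimalTotal += Decimal.Parse(s, System.Globalization.CultureInfo.InvariantCulture)
Next

Console.WriteLine(floatTotal.ToString("G17")) ' 0.30000000000000004 -- error terms accumulate
Console.WriteLine(decimalTotal)               ' 0.30 -- exact
```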

Nothing guards against this error in practice, because almost everyone automatically assumes that the computer sums correctly, and practically no one checks independently.