The difference between a floating decimal number and a fixed decimal number


Please explain in layman's terms.

Best Answer

There are two major differences between the two:

  • Binary format
  • Intended use

For details of the binary formats, and the math behind them, see the Wikipedia articles on fixed-point arithmetic and floating-point numbers.

Floating Point Numbers: Within the limits of the binary representation, floating point numbers give you variable precision. In short, you can represent really tiny numbers or really big numbers, but the number of significant digits you can represent is limited by the number of bits dedicated to the number. They are commonly used in physics and other scientific calculations.
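To make that concrete, here is a minimal Python sketch (the values are just illustrative) showing both the huge range a 64-bit float covers and the precision trade-off that comes with it:

```python
# Floating point: enormous dynamic range, but only about 15-16
# significant decimal digits of precision in a 64-bit float.
tiny = 1.0e-300          # very small numbers are representable...
huge = 1.0e300           # ...and so are very large ones
print(tiny, huge)

# The trade-off: not every decimal fraction has an exact binary form.
print(0.1 + 0.2)         # prints 0.30000000000000004, not 0.3
print(0.1 + 0.2 == 0.3)  # False
```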

Fixed Decimal Numbers: Have a constant number of digits after the decimal point. These are typically used to represent money, percentages, or seconds to a fixed precision (e.g. limited to milliseconds). They are mostly used in databases as a simple and efficient storage format. The math involved with these types of numbers has no practical significance below the fixed number of decimal places. What practical use is 1/1000th of a penny?
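As a rough sketch of the money case, Python's standard decimal module can be used to keep exactly two places, similar in spirit to a DECIMAL(…, 2) column in a database; the price and tax rate below are made up for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP

# Fixed decimal: always round to a constant number of places (two here),
# which is what you usually want for money.
TWO_PLACES = Decimal("0.01")  # hypothetical precision constant for this sketch

price = Decimal("19.99")
tax = (price * Decimal("0.0825")).quantize(TWO_PLACES, rounding=ROUND_HALF_UP)
total = price + tax
print(tax, total)   # 1.65 21.64 -- never a fraction of a cent
```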