Operations like floating point and memory management are often encoded so that they can be "trapped": the system can be configured either to use hardware or to branch automatically to a software implementation. In the software case, the implementation can be anything, although most manufacturers supply libraries that follow accepted standards (IEEE 754 in the case of floating point). In many systems, when a floating-point unit or other chip is installed, instruction execution is automatically deferred to the new chip, so no software reconfiguration is necessary.
As I understand it, the ARM architecture does something very similar to the x86, with floating-point instructions that trap to software emulation if no FPU hardware is found.
The following is based on my own deduction and I have no proof of its accuracy.
Ultimately, how many distinct values can a floating-point number represent? A computer can't represent more unique numbers than it has unique bit patterns. For a 64-bit floating-point number (a C# double) there are 2^64 distinct bit patterns. Note that some combinations yield equivalent values. Quoting Wikipedia:
While the exponent (11-bits for C# double) can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros and subnormal numbers, values of all 1s are reserved for the infinities and NaNs.
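As a sketch of how those fields are laid out, the bits of a double can be pulled apart with Python's struct module (the helper name decode_double is my own):

```python
import struct

def decode_double(x):
    """Split a 64-bit double into its sign, biased exponent, and fraction fields."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63
    biased_exp = (bits >> 52) & 0x7FF   # 11-bit exponent field, bias = 1023
    fraction = bits & ((1 << 52) - 1)   # 52-bit fraction field
    return sign, biased_exp, fraction

# 1.0 is stored with exponent 0 + bias 1023 and a zero fraction
print(decode_double(1.0))            # (0, 1023, 0)
# an exponent field of all 1s (2047) marks the infinities and NaNs
print(decode_double(float("inf")))   # (0, 2047, 0)
# an exponent field of all 0s marks the zeros and subnormals
print(decode_double(0.0))            # (0, 0, 0)
```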
So this means there are 2^53 bit combinations (a sign bit times a 52-bit fraction, with the exponent field all 1s) that represent infinities or invalid numbers, and another 2^53 combinations (exponent field all 0s) that represent zero and the subnormal numbers. At minimum, +0 and -0 are two distinct bit patterns that compare equal; beyond that, I can't say whether other bit combinations produce the same number.
2^64 - 2^53 + 3 = 18,437,736,874,454,810,627 unique values
(Represents all bit combinations with positive infinity, negative infinity, and not-a-number combinations being condensed to three unique values.)
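The arithmetic above can be checked directly:

```python
# Count of unique double values, condensing the special patterns
# (exponent field all 1s) down to +inf, -inf, and a single NaN.
total_patterns = 2 ** 64
special = 2 ** 53                        # sign bit (2) x 52-bit fraction (2**52)
unique = total_patterns - special + 3    # keep +inf, -inf, NaN as 3 values
print(unique)                            # 18437736874454810627
```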
Read Floating point, Internal representation.
Best Answer
There are two major differences between fixed-point and floating-point numbers:
For details of the binary format, and the math behind the binary format, see the Wikipedia articles for fixed point arithmetic and Floating point numbers.
Floating Point Numbers: Within the limits of the binary representation, floating point numbers give variable precision. In short, you can represent really tiny numbers or really big numbers, but the number of significant digits you can represent is limited by the number of bits dedicated to the significand. These are commonly used in physics and other precision-sensitive math problems.
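A quick Python illustration of that trade-off: a double's range is enormous, but its precision is fixed at 53 significand bits, roughly 15-17 significant decimal digits:

```python
import sys

# A double's range is enormous...
print(sys.float_info.max)   # largest finite double, about 1.8e308
print(sys.float_info.min)   # smallest positive normal double, about 2.2e-308

# ...but its precision is fixed at 53 significand bits:
x = 2.0 ** 53               # 9007199254740992.0
print(x + 1 == x)           # True: past 2**53, adding 1 is lost to rounding
```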
Fixed Decimal Numbers: Have a constant number of digits after the decimal point. These are typically used to represent money, percentages, or a fixed precision of seconds (e.g. limiting to milliseconds). They are mostly used in databases as a simple and efficient storage format. Math on these types of numbers has no practical significance below the fixed number of decimal places; what practical use is 1/1000th of a penny?
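Python's decimal module is one common way to get fixed-decimal behavior; here is a minimal money sketch that quantizes every intermediate result back to whole cents (the 8.25% tax rate is just an example value):

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")   # the fixed two-decimal quantum

price = Decimal("19.99")
# 19.99 * 0.0825 = 1.649175 exactly; quantizing discards the sub-cent part
tax = (price * Decimal("0.0825")).quantize(CENT, rounding=ROUND_HALF_UP)
total = price + tax
print(tax, total)        # 1.65 21.64
```

Quantizing after each multiplication is a design choice: it keeps every stored amount a whole number of cents, which is usually what ledgers and databases expect.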