How do computers understand decimal numbers?

arithmetic-division decimal digital-logic

Computers calculate numbers in 0s and 1s. A bit can be either, but not in between. So if you enter 3/2 into a calculator, it should return either 1 or 2, right? Wrong! It gives you 1.5, the correct answer. Even on more complex problems, the calculator answers with the right number. So my question is: how does all this work? If a computer can only use 1s and 0s, how can it correctly represent a number that falls in between, and is there a way to build a schematic for a machine that understands decimal numbers?

Best Answer

Calculators generally work in BCD, whereas in programming languages non-integer numbers are usually represented in a binary floating-point format such as IEEE 754.
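
For a feel of what BCD looks like, here is a minimal C sketch (the helper name and packing scheme are just illustrative, not how any particular calculator lays out its registers) that packs decimal digits into nibbles, four bits per digit:

```c
#include <stdio.h>
#include <stdint.h>

/* Pack the decimal digits of n into BCD, one digit per 4-bit nibble.
 * Illustrative only: a real calculator keeps digits in dedicated registers. */
uint32_t to_packed_bcd(uint32_t n)
{
    uint32_t bcd = 0;
    int shift = 0;
    while (n > 0) {
        bcd |= (n % 10) << shift;   /* store one decimal digit per nibble */
        n /= 10;
        shift += 4;
    }
    return bcd;
}

int main(void)
{
    /* 1.5 could be held as the digit string "15" plus a decimal-point position */
    printf("15 in packed BCD: 0x%X\n", to_packed_bcd(15));   /* prints 0x15 */
    return 0;
}
```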

In the case of binary floating point, the value is stored as a sign bit plus a significand normalized so the most-significant bit is '1' (and since we know it's a '1', we can avoid storing it and just assume it is there). The exponent is stored as a biased binary number, so the stored field is always non-negative.
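
To see those fields concretely, this small C snippet pulls apart the IEEE 754 single-precision bit pattern of 1.5 (the field widths are those of the standard binary32 format; the variable names are mine):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float x = 1.5f;                    /* the calculator's answer to 3/2 */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);    /* reinterpret the raw bit pattern */

    uint32_t sign     = bits >> 31;           /* 1 sign bit                  */
    uint32_t exponent = (bits >> 23) & 0xFFu; /* 8 exponent bits, bias 127   */
    uint32_t fraction = bits & 0x7FFFFFu;     /* 23 fraction bits, hidden leading 1 */

    /* 1.5 = (1 + 0.5) * 2^0 -> sign 0, stored exponent 127, fraction 0x400000 */
    printf("sign=%u exponent=%u (unbiased %d) fraction=0x%06X\n",
           (unsigned)sign, (unsigned)exponent, (int)exponent - 127,
           (unsigned)fraction);
    return 0;
}
```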

Doing division in BCD is not all that hard: you can do it with a 4-bit arithmetic logic unit (ALU) and a typical long-division algorithm (which involves a number of subtracts until the result turns negative, and then one addition), then shift and repeat.
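
Here is a rough C sketch of that long-division loop, with one decimal digit per array element standing in for the BCD registers and 4-bit ALU (the function name and array layout are illustrative only):

```c
#include <stdio.h>

/* Decimal long division by repeated subtraction ("restoring" division).
 * A calculator does this on BCD registers with a 4-bit ALU; plain ints
 * stand in for those registers here. */
void long_divide(const int *dividend, int ndigits, int divisor, int *quotient)
{
    int remainder = 0;
    for (int i = 0; i < ndigits; i++) {
        remainder = remainder * 10 + dividend[i];  /* bring down the next digit */
        int q = -1;
        do {                        /* subtract until the result turns negative */
            remainder -= divisor;
            q++;
        } while (remainder >= 0);
        remainder += divisor;       /* then one addition to restore the remainder */
        quotient[i] = q;
    }
}

int main(void)
{
    /* 3/2 done as 30000000 / 2; the decimal point is placed afterwards */
    int dividend[8] = {3, 0, 0, 0, 0, 0, 0, 0};
    int quotient[8];

    long_divide(dividend, 8, 2, quotient);
    for (int i = 0; i < 8; i++)
        printf("%d", quotient[i]);          /* prints 15000000, i.e. 1.5000000 */
    printf("\n");
    return 0;
}
```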

As for the decimal (or binary) point, you can handle that separately as a kind of exponent.

Instead of 3/2, think of 30000000/20000000 = 15000000, then you figure out where to place the decimal point.
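
A short sketch of that bookkeeping, with made-up variable names, treating each value as an 8-digit mantissa times a power of ten:

```c
#include <stdio.h>

/* Divide two values held as (8-digit mantissa, base-10 exponent) pairs.
 * Variable names are illustrative. */
int main(void)
{
    long long mant_a = 30000000;  int exp_a = -7;   /* 3 = 30000000 * 10^-7 */
    long long mant_b = 20000000;  int exp_b = -7;   /* 2 = 20000000 * 10^-7 */

    /* Pre-scale the dividend by 10^7 so the quotient keeps 8 significant digits. */
    long long mant_q = (mant_a * 10000000LL) / mant_b;   /* 15000000 */
    int exp_q = (exp_a - 7) - exp_b;                     /* -7 */

    printf("%lld * 10^%d  ->  1.5\n", mant_q, exp_q);    /* 15000000 * 10^-7 = 1.5 */
    return 0;
}
```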

To add or subtract, you first have to right-shift the number with the smaller exponent to make the exponents the same. So 3 + 0.01 goes from 30000000 + 10000000 (exponents -7 and -9) to 30000000 + 00100000 = 30100000, and the decimal place is set to get 3.0100000.
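
The same idea in a few lines of C (again with illustrative names): right-shift the mantissa with the smaller exponent, then add:

```c
#include <stdio.h>

/* Add 3 and 0.01 held as (8-digit mantissa, base-10 exponent) pairs:
 * right-shift the mantissa with the smaller exponent until the exponents
 * match, then add. Variable names are illustrative. */
int main(void)
{
    long long mant_a = 30000000;  int exp_a = -7;   /* 3    = 30000000 * 10^-7 */
    long long mant_b = 10000000;  int exp_b = -9;   /* 0.01 = 10000000 * 10^-9 */

    while (exp_b < exp_a) {          /* 10000000 -> 01000000 -> 00100000 */
        mant_b /= 10;
        exp_b++;
    }

    long long mant_sum = mant_a + mant_b;            /* 30100000 */
    printf("%lld * 10^%d  ->  3.0100000\n", mant_sum, exp_a);
    return 0;
}
```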

You could hard-wire logic to do this, but it would involve quite a few MSI-level ICs for the registers, the ALU, and the control logic; usually we'd want to use a microcontroller, an ASIC (as in a calculator), or an FPGA instead.