Mathematics Evaluation – How Programming Languages Handle It

evaluation, math

As I get more and more involved with the theory behind programming, I find myself fascinated and dumbfounded by seemingly simple things.
I realize that my understanding of the majority of fundamental processes is justified through circular logic:

Q: How does this work?

A: Because it does!

I hate this realization! I love knowledge, and on top of that I love learning, which leads me to my question (albeit a broad one).

Question:

How are fundamental mathematical operators evaluated by programming languages?

How have current methods been improved?

Example

var = 5 * 5; 

My interpretation:

$num1 = 5; $num2 = 5; $num3 = 0;
while ($num2 > 0) {
    $num3 = $num3 + $num1;
    $num2 = $num2 - 1;
}
echo $num3;

This seems to be highly inefficient. With higher factors, this method is very slow, while the standard built-in method is instantaneous. How would you simulate multiplication without iterating addition?

var = 5 / 5;

How is this even done? I can't think of a way to literally split 5 into 5 equal parts.

var = 5 ^ 5; 

Iterations of iterations of addition?
My interpretation:

$base = 5;
$exp = 5;
$num1 = $base;
while ($exp > 1) {
    $num2 = $base; $num3 = 0;
    while ($num2 > 0) {
        $num3 = $num3 + $num1;
        $num2 = $num2 - 1;
    }
    $num1 = $num3;
    $exp -= 1;
}
echo $num3;

Again, this is EXTREMELY inefficient, yet I can't think of another way to do this.
This same question extends to all math-related functions that are handled automagically.

Best Answer

To really understand how arithmetic works inside a computer you need to have programmed in assembly language. Preferably one with a small word size and without multiplication and division instructions. Something like the 6502.

On the 6502, virtually all arithmetic is done in a register called the Accumulator. (A register is a special memory location inside the processor that can be accessed quickly.) So to add two numbers, you load the first number into the Accumulator, then add the second number to it.

But that's oversimplifying. Because the 6502 is an 8-bit processor, it can handle numbers only from 0 to 255. Most of the time you will want to be able to work with larger numbers. You have to add these in chunks, 8 bits at a time. The processor has a Carry flag that is set when the result of adding two numbers overflows the Accumulator. The processor adds that in when doing an addition, so it can be used to "carry the 1" assuming you start with the lowest-order byte of a number. A multi-byte add on the 6502 looks like this:

  1. Clear carry flag (CLC)
  2. Load lowest-order-byte of first number (LDA, load accumulator)
  3. Add lowest-order-byte of second number (ADC, add with carry)
  4. Store lowest-order byte of result (STA, store accumulator)
  5. Repeat steps 2-4 with successively higher-order bytes
  6. If at the end, the carry is set, you have overflowed; take appropriate action, such as generating an error message (BCS/BCC, branch if carry set/clear)

Subtraction is similar except you set the carry first, use the SBC instruction instead of ADC, and at the end the carry is clear if there was underflow.

But wait! What about negative numbers? Well, with the 6502 these are stored in a format called two's complement. Assuming an 8-bit number, -1 is stored as 255, because when you add 255 to something, you get one less in the Accumulator (plus a carry). -2 is stored as 254 and so on, all the way down to -128, which is stored as 128. So for signed integers, half the 0-255 range of a byte is used for positive numbers and half for negative numbers. (This convention lets you just check the high bit of a number to see if it's negative.)

Think of it like a 24-hour clock: adding 23 to the time will result in a time one hour earlier (on the next day). So 23 is the clock's modular equivalent to -1.

When you are using more than 1 byte you have to use larger numbers for negatives. For example, 16-bit integers have a range of 0-65535. So 65535 is used to represent -1, and so on, because adding 65535 to any number results in one less (plus a carry).

On the 6502 there are only four arithmetic operations: add, subtract, multiply by two (shift left), and divide by two (shift right). Multiplication and division can be done using only these operations when dealing in binary. For example, consider multiplying 5 (binary 101) and 3 (binary 11). As with decimal long multiplication, we start with the right digit of the multiplier and multiply 101 by 1, giving 101. Then we shift the multiplicand left and multiply 1010 by 1, giving 1010. Then we add these results together, giving 1111, or 15. Since we are only ever multiplying by 1 or 0, we don't really multiply; each bit of the multiplier simply serves as a flag which tells us whether to add the (shifted) multiplicand or not.

Division is analogous to manual long division using trial divisors, except in binary. If you are dividing by a constant, it is possible to do this in a way analogous to subtraction-by-complement: rather than dividing by X, you multiply by a precomputed fixed-point approximation of 1/X and keep only the high-order part of the result. Even today, this is faster than division.

Here's a page showing some actual 6502 code for multiplication and division, including a nicely optimized multiplication routine that uses a 2 kilobyte lookup table. http://nparker.llx.com/a2/mult.html

Now try doing floating-point math in assembly, or converting floating-point numbers to nice output formats in assembly. And do remember, it's 1979 and the clock speed is 1 MHz, so you must do it as efficiently as possible.

Things still work pretty much like this today, except with bigger word sizes and more registers, and of course most of the math is done by hardware now. But it's still done in the same fundamental way. If you add up the number of shifts and adds required for a multiply, for example, it correlates rather well to the number of cycles required for a hardware multiply instruction on early processors having such an instruction, such as the 6809, where it was performed in microcode in much the same way you would do it manually. (If you have a larger transistor budget, there are faster ways to do the shifts and adds, so modern processors do not perform these operations sequentially and can perform multiplications in as little as a single cycle.)
