Why use other number bases when programming

language-agnostic, math, programming-practices

My coworkers and I have been bending our minds to figuring out why anyone would go out of their way to program numbers in a base other than base 10.

I suggested that perhaps you could optimize longer equations by putting the variables in the correct base you are working with (for instance, if you have only sets of 5 of something with no remainders you could use base 5), but I'm not sure if that's true.

Any thoughts?

Best Answer

The usual reason for writing numbers in code in a base other than 10 is that you're bit-twiddling.

To pick an example in C (because if C is good for anything, it's good for bit-twiddling), say some low-level format encodes a 2-bit and a 6-bit number in a byte: xx yyyyyy:

#include <stdio.h>

int main(void) {
    unsigned char codevalue = 0x94; // 10 010100
    printf("x=%d, y=%d\n", (codevalue & 0xc0) >> 6, (codevalue & 0x3f));
    return 0;
}

produces

x=2, y=20

In such a circumstance, writing the constants in hex is less confusing than writing them in decimal, because one hex digit corresponds neatly to four bits (half a byte; one 'nibble') and two hex digits to one byte: the number 0x3f has all bits set in the low nibble and two bits set in the high nibble.
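
To make that digit-to-nibble correspondence concrete, here is a minimal sketch (the variable names are purely illustrative) that pulls the two nibbles back out of the same byte with hex masks:

#include <stdio.h>

int main(void) {
    unsigned char value = 0x94;     // 1001 0100
    unsigned char low_mask  = 0x0f; // 0000 1111: one hex digit = one nibble
    unsigned char high_mask = 0xf0; // 1111 0000

    printf("high nibble=%d, low nibble=%d\n",
           (value & high_mask) >> 4, value & low_mask); // prints 9 and 4
    return 0;
}

Reading the masks in hex, you can see at a glance which nibble each one covers; the same masks written as 15 and 240 tell you nothing until you convert them in your head.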

You could also write that second line in octal:

printf("x=%d, y=%d\n", (codevalue & 0300) >> 6, (codevalue & 077));

Here, each digit corresponds to a block of three bits. Some people find that easier to think with, though I think it's fairly rare these days.
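
As a rough sketch of that three-bits-per-digit mapping (the value 0754 here is just an arbitrary example), each octal digit of a constant can be read straight back out with an octal mask and a shift:

#include <stdio.h>

int main(void) {
    unsigned int value = 0754; // 111 101 100 in binary: one octal digit per 3-bit group

    printf("%o %o %o\n", (value >> 6) & 07, (value >> 3) & 07, value & 07);
    // prints: 7 5 4
    return 0;
}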