To answer your title question "Does any programming language use variables as they're in maths?":
C, C#, Java, C++, and any other C style language use variables in the way they are used in math.
You just need to use == instead of =.
If I take your original
root(square(x)) = abs(x)
Then I can translate that into C# directly without any changes other than for the syntax.
Math.Sqrt(Math.Pow(x,2)) == Math.Abs(x)
This will evaluate to true for any value of x as long as x squared is less than the max for the data type you are using.
(Java will be broadly similar, but the methods live on the Math class, and I believe the names differ slightly.)
This next bit will fail to compile in C# because the compiler knows you can't assign to the result of a method call — the left-hand side of = must be something assignable.
Math.Sqrt(Math.Pow(x,2)) = Math.Abs(x)
Immutability has nothing to do with this. You still need to assign the value in an immutable language, and it's entirely possible that a given language may choose to do this by using = as the operator.
Further proving the point, this loop will run until you exhaust legal values of x and get an overflow exception:
    while (Math.Sqrt(Math.Pow(x, 2)) == Math.Abs(x))
    {
        ++x;
        System.Console.WriteLine(x);
    }
This is why mathematicians hate the use of = for assignment. It confuses them. I think this has led you to confuse yourself. Take your example
    y = (x**2)**.5
    x *= 2
    assert y == abs(x)
When I turn this into algebra, I get this:
abs(2x) = root(x^2)
Which of course is not true for values other than 0. Immutability only saves you from the error of changing the value of x when you add extra steps between evaluating the left-hand side and right-hand side of the original equation. It doesn't actually change how you evaluate the expression.
The usual reason for writing numbers, in code, in other than base 10, is because you're bit-twiddling.
To pick an example in C (because if C is good for anything, it's good for bit-twiddling), say some low-level format encodes a 2-bit and a 6-bit number in a byte: xx yyyyyy
    #include <stdio.h>

    int main(void) {
        unsigned char codevalue = 0x94; // 10 010100
        printf("x=%d, y=%d\n", (codevalue & 0xc0) >> 6, (codevalue & 0x3f));
        return 0;
    }
produces
x=2, y=20
In such a circumstance, writing the constants in hex is less confusing than writing them in decimal, because one hex digit corresponds neatly to four bits (half a byte; one 'nibble'), and two hex digits to one byte: the number 0x3f has all bits set in the low nibble, and two bits set in the high nibble.
You could also write that second line in octal:
    printf("x=%d, y=%d\n", (codevalue & 0300) >> 6, (codevalue & 077));
Here, each digit corresponds to a block of three bits. Some people find that easier to think with, though I think it's fairly rare these days.
Best Answer
I know this is an old post, but I saw it being referenced and disliked the chosen answer's tone.
So I did a bit of investigation!
From this, I can speculate about a couple things:
Therefore, my conclusion would be:
When they had to choose, they didn't know of the standard, chose the 'other' system, and everyone else just went along for the ride. No shady business, just an unfortunate design decision that was carried along because backward compatibility is the name of Microsoft's game.