The answer, as with many things, is:
Practice, practice, practice.
By the way, I believe that in your solution you got into a dead end by making a trivial mistake really early on: "There are 2 ways to parenthesize this new expression" -- aren't there more than 2? How about (A.B.(C.D.E)), for example?
The biggest problem is that an equivalence relation, the mathy term for things like `==`, is supposed to satisfy 3 laws:
- reflexivity: `a == a`
- symmetry: `a == b` implies `b == a`
- transitivity: `a == b` and `b == c` imply `a == c`
All of these are very intuitive and expected. And PHP doesn't follow them.
```php
'0' == 0  // true
0 == ''   // true
'0' == '' // false, AHHHH
```
So it's not actually an equivalence relation, which is a pretty distressing realization for some mathy people (including me).
It also hints at one of the things people really hate about implicit casts: they often behave unexpectedly when combined with mundane code. Because the rules are unprincipled in this sense, they amount to an arbitrary, case-by-case specification, and weird stuff happens.
Basically, we sacrifice consistency, and the developer has to shoulder the extra burden of making sure there are no funny (and expensive) conversions happening behind the scenes. To quote this article:
> Language consistency is very important for developer efficiency. Every inconsistent language feature means that developers have one more thing to remember, one more reason to rely on the documentation, or one more situation that breaks their focus. A consistent language lets developers create habits and expectations that work throughout the language, learn the language much more quickly, more easily locate errors, and have fewer things to keep track of at once.
EDIT:
Another gem I stumbled across:
```php
NULL == 0 // true
NULL < -1 // true
```
So if you try to sort anything, the result is nondeterministic and depends entirely on the order in which comparisons are made. E.g., consider bubble sort:
```php
bubble_sort([NULL, -1, 0]) // [NULL, -1, 0]
bubble_sort([0, -1, NULL]) // [-1, 0, NULL]
```
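To make the order-dependence concrete, here's a small C sketch that models the comparison rules quoted above (`NULL == 0`, `NULL < -1`; against a number, `NULL` compares like `false`) and bubble-sorts both inputs. The `Val` type and `php_cmp` are illustrative inventions modeling the behavior, not PHP internals:

```c
/* Simulates PHP's loose NULL-vs-int comparison to show that bubble
   sort's result depends on which pairs happen to get compared. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { bool is_null; int value; } Val;

/* Model: if either side is NULL, both sides compare as booleans
   (NULL -> false, number -> value != 0), so NULL == 0 but NULL < -1. */
static int php_cmp(Val a, Val b) {
    if (a.is_null || b.is_null) {
        bool ab = a.is_null ? false : (a.value != 0);
        bool bb = b.is_null ? false : (b.value != 0);
        return (int)ab - (int)bb;
    }
    return (a.value > b.value) - (a.value < b.value);
}

static void bubble_sort(Val *v, int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (php_cmp(v[j], v[j + 1]) > 0) {  /* swap only if "greater" */
                Val t = v[j]; v[j] = v[j + 1]; v[j + 1] = t;
            }
}

static void print_vals(const Val *v, int n) {
    for (int i = 0; i < n; i++)
        v[i].is_null ? printf("NULL ") : printf("%d ", v[i].value);
    printf("\n");
}

int main(void) {
    Val a[] = {{true, 0}, {false, -1}, {false, 0}};  /* [NULL, -1, 0] */
    Val b[] = {{false, 0}, {false, -1}, {true, 0}};  /* [0, -1, NULL] */
    bubble_sort(a, 3); print_vals(a, 3);  /* NULL -1 0 */
    bubble_sort(b, 3); print_vals(b, 3);  /* -1 0 NULL */
    return 0;
}
```

The same multiset of values sorts to two different orders, because the "equal" pair (`NULL`, `0`) is separated differently depending on where `-1` starts out.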
Best Answer
All the way down the rabbit hole, eh? OK I'll give it a try.
Step 1. From C to machine language
The C compiler transforms your comparison to opcodes stored in machine language. Machine language is a series of numbers that the CPU interprets as instructions. In this case there would be two opcodes: "subtract with carry" and "jump if carry." In other words, 2 is subtracted from 3 in one instruction, and the next instruction checks to see if it overflowed. These would be preceded by two instructions to load the numbers 2 and 3 into locations where they can be compared.
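To make this concrete, here's a sketch: a C comparison with comments showing the rough shape of the instruction sequence a compiler might emit. The mnemonics are generic pseudo-assembly, not real output, which varies by architecture, compiler, and optimization level:

```c
#include <stdio.h>

int main(void) {
    int a = 3, b = 2;

    /* A compiler might turn the comparison below into something like:
     *     load  r1, a      ; load the operands into registers
     *     load  r2, b
     *     sub   r1, r2     ; subtract, setting the carry/status flags
     *     jmpc  skip       ; jump past the body if the flags say a <= b
     */
    if (a > b) {
        puts("3 > 2");
    }                        /* skip: execution continues here */
    return 0;
}
```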
Each of the above has a binary representation; for example, the code for `SUB` is `2D` hex, or `00101101` in binary.

Step 2. Opcodes to ALU
Arithmetic opcodes like `ADD`, `SUB`, `MUL`, and `DIV` perform basic integer math using an ALU, or Arithmetic Logic Unit, built into the CPU. Numbers are stored in registers by some opcodes; other opcodes instruct the chip to have the ALU do math on whatever is stored in the registers at the time.

Note: At this point we are well beyond anything that any software engineer would worry about when working with a 3GL like C.
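As a toy model of that opcode/register/ALU relationship, here is a tiny C interpreter with made-up opcodes: some instructions load registers, one asks the "ALU" to operate on whatever the registers currently hold. Every name here is invented for illustration:

```c
/* A toy fetch-decode-execute loop: load opcodes fill registers,
   the SUB opcode makes the "ALU" act on the current register contents. */
#include <stdio.h>

enum { LOAD0, LOAD1, SUB, HALT };            /* made-up opcodes       */

int main(void) {
    int program[] = { LOAD0, 3, LOAD1, 2, SUB, HALT };
    int reg[2] = {0, 0};                     /* tiny register file    */
    int pc = 0, running = 1;

    while (running) {
        switch (program[pc++]) {
        case LOAD0: reg[0] = program[pc++]; break;   /* load into r0  */
        case LOAD1: reg[1] = program[pc++]; break;   /* load into r1  */
        case SUB:   reg[0] = reg[0] - reg[1]; break; /* "ALU" at work */
        case HALT:  running = 0; break;
        }
    }
    printf("r0 = %d\n", reg[0]);             /* prints r0 = 1         */
    return 0;
}
```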
Step 3. The ALU, half-adder, and full-adder
Did you know that all mathematical operations you know of can be reduced to a series of NOR operations? And that is exactly how the ALU works.
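As a quick sanity check of that claim, here's a small C sketch (a software simulation, not hardware) that builds NOT, OR, AND, and XOR out of nothing but a NOR primitive and verifies them against C's built-in operators:

```c
/* NOR universality: every other basic gate expressed purely via NOR. */
#include <assert.h>

static int nor(int a, int b)  { return !(a | b); }

static int not_(int a)        { return nor(a, a); }
static int or_(int a, int b)  { return nor(nor(a, b), nor(a, b)); }
static int and_(int a, int b) { return nor(nor(a, a), nor(b, b)); }
static int xor_(int a, int b) { return nor(nor(a, b), and_(a, b)); }

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            assert(or_(a, b)  == (a | b));
            assert(and_(a, b) == (a & b));
            assert(xor_(a, b) == (a ^ b));
        }
    assert(not_(0) == 1 && not_(1) == 0);
    return 0;
}
```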
The ALU only knows how to work with binary numbers, and can only perform logical operations like OR, NOT, AND, and XOR. Binary addition and subtraction are implemented with a series of logical operations arranged a certain way, in a subsystem known as an adder. An adder is built from "half-adders" that operate on two bits and produce a single-bit sum and a single-bit carry; two half-adders (plus an OR gate) form a full adder, and by chaining full adders together the ALU can handle numbers that are 8, 16, 32, etc. bits wide.
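Here's a minimal C simulation of that arrangement, assuming the textbook construction where two half-adders plus an OR gate make a full adder; `half_adder` and `add8` are illustrative names, not a real hardware API:

```c
/* Ripple-carry addition of two 8-bit values, one bit at a time,
   using only the XOR/AND/OR operations an ALU's gates provide. */
#include <stdint.h>
#include <stdio.h>

static void half_adder(int a, int b, int *sum, int *carry) {
    *sum   = a ^ b;   /* XOR gives the single-bit sum */
    *carry = a & b;   /* AND gives the carry-out      */
}

static uint8_t add8(uint8_t x, uint8_t y) {
    uint8_t result = 0;
    int carry = 0;
    for (int i = 0; i < 8; i++) {
        int a = (x >> i) & 1, b = (y >> i) & 1;
        int s1, c1, s2, c2;
        half_adder(a, b, &s1, &c1);        /* first half-adder            */
        half_adder(s1, carry, &s2, &c2);   /* second half-adder, carry-in */
        result |= (uint8_t)(s2 << i);
        carry = c1 | c2;                   /* full-adder carry-out        */
    }
    return result;
}

int main(void) {
    printf("%d\n", add8(3, 2)); /* prints 5 */
    return 0;
}
```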
What about subtraction? Subtraction is just another form of addition:
The ALU computes `-B` by taking the two's complement of `B` (invert every bit, then add one). Once the value is negated, submitting it to the adder results in a subtraction operation.
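A small C sketch of that negate-and-add trick; `sub8` is an illustrative name, and the `+` below stands in for the adder from the previous step:

```c
/* Subtraction as addition of the two's complement: a - b == a + (-b). */
#include <stdint.h>
#include <stdio.h>

static uint8_t twos_complement(uint8_t b) {
    return (uint8_t)(~b + 1);    /* invert all bits, then add one */
}

static uint8_t sub8(uint8_t a, uint8_t b) {
    return (uint8_t)(a + twos_complement(b));  /* 3 + 254 wraps to 1 */
}

int main(void) {
    printf("%d\n", sub8(3, 2)); /* prints 1 */
    return 0;
}
```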
Step 4. The final step: On-chip transistors
The adders' operations are implemented using a combination of electrical components that interact to create "logic gates," such as those found in transistor-transistor logic (TTL) or in CMOS. Click here for a few examples to see how these are wired up.

On a chip, of course, these "circuits" are implemented in millions of tiny bits of conductive and nonconductive material, but the principle is the same as if they were full-sized components on a breadboard. Watch this video, which shows you all the transistors on a microchip through the lens of an electron microscope.
Some additional notes:
- The code you wrote would actually be precomputed by the compiler and not executed at run time, because it is composed solely of constants (see the sketch after these notes).
- Some compilers don't compile to machine code but introduce yet another layer, such as Java bytecode or .NET intermediate language. But in the end it all gets executed via machine language.
- Some mathematical operations aren't actually computed; they are looked up in massive tables on an arithmetic coprocessing unit, or use a combination of lookup and computation or interpolation. An example is the function to compute a square root. Modern PC CPUs have a floating-point coprocessing unit built into each core.
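On the first note above: a quick way to convince yourself that a constant comparison can be evaluated at compile time is a `static_assert` (C11 or later), which the compiler must by definition resolve during translation:

```c
/* The comparison below never executes at run time: the compiler
   either proves it true while compiling, or refuses to compile. */
#include <assert.h>

static_assert(3 > 2, "evaluated by the compiler, not at run time");

int main(void) { return 0; }
```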