Electronic – Difference between Micro-Operations in RISC and CISC processors

Tags: arm, cpu, intel, microprocessor, processor

I've read that modern Intel processors accept CISC instructions at the front end, which are converted into simpler RISC-like instructions, in the form of micro-operations, at the back end.

So if Intel's micro-ops are simple RISC-like hardware-level controls, then what do ARM's micro-operations do?

Since ARM instructions are already quite RISC-like, what would their micro-operation form look like?

Best Answer

All microprocessors, and indeed all synchronous digital circuits, work at what is called the register-transfer level. Basically, all that any microprocessor does is load values into registers from different sources. Those sources can be memory, other registers, or the ALU (Arithmetic Logic Unit, a calculator inside the processor). Some of the registers are simple registers inside the processor; some can be special function registers located around the CPU, in 'peripherals' such as I/O ports, the memory management unit, the interrupt unit, and so on.
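To make the register-transfer idea concrete, here is a minimal sketch. All register names and values are invented for illustration; it only models the one idea above, that every step is a load of a register from some source:

```python
# Minimal register-transfer sketch (all names are illustrative, not
# taken from any real CPU): every step just loads a register from
# some source -- memory, another register, or an ALU result.
registers = {"A": 0, "PC": 0x100, "MAR": 0}
memory = {0x100: 0x42}

def transfer(dst, value):
    """One register transfer: load register `dst` with `value`."""
    registers[dst] = value

# Fetch a value from memory: two register transfers.
transfer("MAR", registers["PC"])         # address register <- program counter
transfer("A", memory[registers["MAR"]])  # A <- memory at that address
```

After these two transfers, register A holds the value that was sitting in memory at the address the program counter pointed to.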

In this model, 'instructions' are basic sequences of register transfers. Normally it doesn't make sense to give the programmer the ability to control each register transfer individually, because not all of the possible register transfer combinations are meaningful, so allowing the programmer to express them all would be wasteful in terms of memory consumption. So basically each processor declares a set of register-transfer sequences that the programmer may ask it to perform, and these are called 'instructions'.

For example, ADD A, B, C might be an operation where the sum of registers A and B is placed into register C. Internally, that would be three register transfers: load the adder's left input from A, load the adder's right input from B, then load C from the adder's output. Additionally, the processor makes the transfers needed to fetch the instruction itself: load the memory address register from the program counter, load the instruction register from the memory data bus, and finally load the program counter from the program counter incrementer.
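The three transfers of the ADD example can be sketched as follows (a toy model; real micro-op encodings and timing are not shown, and the register values are made up):

```python
# Sketch of ADD A, B, C broken into its three internal register
# transfers (a toy model; real micro-op encodings are not shown).
regs = {"A": 5, "B": 7, "C": 0}
adder = {"left": 0, "right": 0}

adder["left"] = regs["A"]                   # transfer 1: left input  <- A
adder["right"] = regs["B"]                  # transfer 2: right input <- B
regs["C"] = adder["left"] + adder["right"]  # transfer 3: C <- adder output
```

From the programmer's point of view this is one instruction; internally it is three transfers.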

The 8086 used an internal ROM look-up table to determine which register transfers make up each instruction. The contents of that ROM were quite freely programmable by the designers of the 8086, so they chose instruction sequences that seemed useful to the programmer, instead of sequences that would be simple and fast for the machine to execute. Remember that in those days most software was written in assembly language, so it made sense to make that as easy as possible for the programmer. Later on, Intel designed the 80286, in which they made what now seems a critical error: they had some unused microcode memory left, figured they might as well fill it with something, and came up with a bunch of instructions just to fill the microcode. This bit them in the end, as all those extra instructions then had to be supported by the 386, 486, Pentium and later processors, which didn't use microcode any more.
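A microcode ROM of the kind described above can be sketched as a simple look-up table. The opcodes and micro-op mnemonics here are invented for illustration, not taken from the real 8086 microcode:

```python
# Toy microcode ROM in the spirit of the scheme described above:
# each opcode indexes a fixed sequence of register transfers.
# Opcode names and micro-op mnemonics are invented for illustration.
MICROCODE_ROM = {
    "ADD":  ("alu_left<-src1", "alu_right<-src2", "dst<-alu_out"),
    "MOV":  ("dst<-src1",),
    "LOAD": ("MAR<-addr", "dst<-MEM[MAR]"),
}

def micro_sequence(opcode):
    """Look up the register-transfer sequence for one instruction."""
    return MICROCODE_ROM[opcode]
```

The key property is that the table contents, not the surrounding logic, define what each instruction does, which is why the designers could choose instruction behaviour fairly freely.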

ARM is a much newer processor design than the 8086, and the ARM designers took a different route. By then, computers were common and there were plenty of compilers available. So instead of designing an instruction set that is nice for the programmer, they chose an instruction set that is fast for the machine to execute and easy for a compiler to generate code for. And for a while, x86 and ARM differed in the way they execute instructions.

Time goes by and CPUs become more and more complex. Microprocessors are now designed using computers, not pencil and paper. Nobody uses microcode any more; all processors have a hardwired (pure logic) execution control unit. All have multiple integer calculation units and multiple data buses. All translate their incoming instructions, reschedule them, and distribute them among the processing units. Even old RISC instruction sets are translated into new RISC operation sets. So the old question of RISC versus CISC doesn't really exist any more. We're back at the register-transfer level: programmers ask CPUs to do operations, and CPUs translate them into register transfers. Whether that translation is done by a microcode ROM or by hardwired digital logic really isn't that interesting any more.
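The modern front end described above, translate incoming instructions into micro-ops and then distribute them among execution units, can be sketched like this. The translation table, the round-robin dispatch policy, and all names are invented for illustration; real schedulers are far more sophisticated:

```python
# Toy front end: incoming instructions are translated into micro-ops,
# queued, and distributed round-robin over several execution units.
# The translation table and all names are invented for illustration.
from collections import deque

TRANSLATE = {
    "ADD":  ["uop_add"],                   # simple op: one micro-op
    "LOAD": ["uop_agen", "uop_mem_read"],  # memory op: two micro-ops
}

def decode(program):
    """Translate instructions into a queue of micro-ops."""
    uops = deque()
    for insn in program:
        uops.extend(TRANSLATE[insn])
    return uops

def dispatch(uops, n_units=2):
    """Distribute micro-ops over n_units execution units."""
    units = [[] for _ in range(n_units)]
    for i, uop in enumerate(uops):
        units[i % n_units].append(uop)
    return units
```

Note that both an x86-style CISC instruction and a RISC instruction go through the same decode step here; they just expand to different numbers of micro-ops, which is the point of the paragraph above.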