Electronic – How does a chip interpret different opcodes

alu, microprocessor

I have learnt in school about opcodes and how they work with the ALU to add numbers and such. My question now is how the computer is able to understand this, because these are just codes and the computer needs something to translate them into actually doing that, isn't it? (correct me if I'm wrong, please)
Once I understand that, I also want to make my own processor, so I can learn computers bottom-up and become a good computer engineer.

Best Answer

At the very lowest level, consider something like microcode. That's what Wouter was talking about when he mentioned Very Long Instruction Word architectures.

A CPU is a collection of buses, registers, memory, and an arithmetic logic unit (ALU). Each of these does simple and finite things. Any one higher-level instruction is a coordinated set of actions between all the blocks. For example, the low-level operations to add a memory location value into the accumulator could be (see the sketch after the list):

  1. Enable the operand address onto the memory address bus. Assume memory is always in read mode when not explicitly writing.

  2. Enable the accumulator onto the ALU input 1.

  3. Enable the memory data bus onto the ALU input 2.

  4. Set the ALU operation to addition.

  5. Wait the minimum number of clock ticks so that you know the output of the ALU has settled. In this case it includes the memory read time, the ALU propagation time, and any intermediate data path propagation times.

  6. Latch the ALU output into the accumulator.

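To make that sequence concrete, here is a minimal C sketch that simulates those six steps in software. All the names (mem, acc, add_mem_to_acc) are mine for illustration; in real hardware each step is a control line being asserted, not a C statement, and the settling time in step 5 has no software equivalent:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy data path: memory, accumulator, ALU. All names invented. */
    static uint8_t mem[256];   /* data memory          */
    static uint8_t acc;        /* accumulator register */

    /* "Add memory into accumulator", following the list above. */
    static void add_mem_to_acc(uint8_t operand_addr)
    {
        /* 1. operand address onto the address bus (array index here) */
        /* 2. accumulator onto ALU input 1 */
        uint8_t alu_in1 = acc;
        /* 3. memory data bus onto ALU input 2 */
        uint8_t alu_in2 = mem[operand_addr];
        /* 4./5. ALU operation = add; settling is free in software */
        uint8_t alu_out = (uint8_t)(alu_in1 + alu_in2);
        /* 6. latch the ALU output into the accumulator */
        acc = alu_out;
    }

    int main(void)
    {
        mem[0x10] = 7;
        acc = 35;
        add_mem_to_acc(0x10);
        printf("acc = %u\n", (unsigned)acc);   /* prints "acc = 42" */
        return 0;
    }
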
When you break it down into the basic hardware operations, you see that orchestrating an instruction is mostly routing data to the right places in the right sequence. If this were implemented with discrete logic, #1 would be asserting a single output-enable line of a tri-state buffer that drives the memory address bus. #2 and #3 likewise require asserting a single line. #4 is a little different in that the ALU is usually a canned logic block itself and often has a set of lines that encode the operation. For example, 000 might be pass input 1 to the output, 001 add both inputs to the output, 010 logically AND both inputs to the output, etc.

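As a sketch, that ALU function-select decode could look like a C switch. The 000/001/010 codes match the example values above; everything else here is an assumption:

    #include <stdint.h>

    /* Toy ALU: a 3-bit function select chooses the operation,
       like the canned logic block described above. */
    static uint8_t alu(uint8_t op, uint8_t in1, uint8_t in2)
    {
        switch (op & 0x7u) {
        case 0x0: return in1;                   /* 000: pass input 1 */
        case 0x1: return (uint8_t)(in1 + in2);  /* 001: add          */
        case 0x2: return in1 & in2;             /* 010: logical AND  */
        default:  return 0;                     /* remaining codes: not specified above */
        }
    }
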
The point is that at the level described above, each instruction is just asserting a certain set of control lines in sequence, possibly with minimum wait times between some actions. A stripped-down CPU could simply tie each bit in the instruction word to one of these control lines. That would be simple and highly flexible, but one drawback is that the instruction word would need to be quite wide to contain all the necessary control lines and an operand address field, so the instruction memory would be used very inefficiently.

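To show what "each bit is a control line" could look like, here is an invented microinstruction layout in C. The field widths and names are assumptions, chosen only to make visible why the word gets wide so quickly:

    #include <stdint.h>

    /* One bit per control line, as in the stripped-down CPU above. */
    #define OE_ADDR_BUS  (1u << 0)  /* operand address onto address bus  */
    #define OE_ACC_ALU1  (1u << 1)  /* accumulator onto ALU input 1      */
    #define OE_MEM_ALU2  (1u << 2)  /* memory data bus onto ALU input 2  */
    #define ALU_OP_ADD   (1u << 3)  /* ALU function select = add         */
    #define LATCH_ACC    (1u << 4)  /* clock ALU output into accumulator */

    /* Control bits in the high part, an 8-bit operand address in the
       low part: already 13 bits spent on one trivial instruction. */
    static inline uint32_t uword(uint32_t ctrl, uint8_t addr)
    {
        return (ctrl << 8) | addr;
    }

    /* The whole "add memory into accumulator" as one wide word. */
    #define ADD_MEM_ACC(addr) \
        uword(OE_ADDR_BUS | OE_ACC_ALU1 | OE_MEM_ALU2 | \
              ALU_OP_ADD | LATCH_ACC, (addr))
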
A bunch of years ago, there were machines with microcoded architectures. They worked as I described at the low level, but those microinstructions weren't what you wrote when you programmed the machine. Each user-level instruction essentially kicked off a small microcode routine. The user instructions could then be more compact, with less redundant information in them. This was good because there could be very many of them and memory was limited. The actual low-level control of the hardware, though, was done from microcode. This was stored in a special wide and fast, and therefore expensive, memory, but it didn't need to be very big because there were only a few microcode instructions for each user-level opcode.

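A sketch of that dispatch in C: the user-level opcode just selects the entry point of a short routine in the control store. The sizes, the end-of-routine flag, and the table contents are all assumptions for illustration:

    #include <stdint.h>

    #define UINSTR_END  (1u << 31)     /* invented "last microinstruction" flag */

    static uint32_t ucode_rom[1024];   /* wide, fast, expensive control store */
    static uint16_t entry_point[256];  /* user opcode -> microcode address    */
    /* (both tables would be filled in by your microcode assembler)          */

    static void execute(uint8_t opcode)
    {
        uint16_t upc = entry_point[opcode];   /* kick off the routine */
        uint32_t uinstr;
        do {
            uinstr = ucode_rom[upc++];
            /* ...assert the control lines encoded in uinstr here... */
        } while (!(uinstr & UINSTR_END));     /* until the routine ends */
    }
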
Nowadays, relatively simple machines like microcontrollers don't really have microcode. The instruction set has been made a little simpler and more regular so that it can be decoded directly by hardware, although that hardware may contain a sort of sequencer or state machine that isn't exactly a microcode engine but sort of does that job, with combinatorial logic and pipeline stages where things get held up waiting on clock edges. This is one reason, for example, that smaller PICs have a CPU clock that is 4x the instruction clock. That allows 4 clock edges per instruction, which is useful for managing propagation delays through the logic. Some of the PIC documentation even tells you at which Q edges which operations are performed.

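For a feel of such a sequencer, here is a toy four-phase loop in C. The phase assignments are only roughly in the spirit of the PIC Q cycles; the datasheets give the real ones:

    #include <stdint.h>

    /* Toy sequencer: a 2-bit phase counter gives four clock edges
       per instruction, so each phase has a full clock period for
       signals to propagate before the next edge uses the result. */
    static uint8_t phase;          /* 0..3, advances every CPU clock */

    static void cpu_clock_edge(void)
    {
        switch (phase) {
        case 0: /* decode the fetched instruction             */ break;
        case 1: /* drive the operand address, read the operand */ break;
        case 2: /* let the ALU compute                        */ break;
        case 3: /* latch the result; instruction is complete  */ break;
        }
        phase = (phase + 1) & 0x3; /* 4 CPU clocks = 1 instruction */
    }
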
So if you want to get something very basic up and running, try implementing just a microcode machine. You may need a 24-bit or wider instruction word, but for demonstration and learning that is fine. Each bit controls a single flip-flop clock, enable line, or whatever. You will also need some way of sequencing through the instructions and doing conditional branching. A brute-force way is to put a new address, for possible use depending on the result of some conditional operation, right into the microcode word.
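Here is a minimal sketch of that brute-force scheme: every microcode word carries control bits, a condition select, and a possible next address. The field layout is an assumption for illustration, not a recommendation:

    #include <stdint.h>

    /* One microcode word: control lines plus brute-force branching. */
    typedef struct {
        uint32_t ctrl;       /* one bit per control line              */
        uint8_t  cond;       /* 0: never branch, 1: branch if Z flag  */
        uint16_t next_addr;  /* target used when the condition holds  */
    } uinstr_t;

    /* Execute one microinstruction and return the next address. */
    static uint16_t step(const uinstr_t rom[], uint16_t upc, int z_flag)
    {
        const uinstr_t *u = &rom[upc];
        /* ...assert u->ctrl onto the control lines here... */
        if (u->cond == 1 && z_flag)
            return u->next_addr;      /* conditional branch in microcode */
        return (uint16_t)(upc + 1);   /* otherwise fall through          */
    }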