I have read about the different parts of a microprocessor, like the ALU, registers, etc. -- all digital parts. Are there any analog parts inside the processor?
Microprocessor – Is Microprocessor Completely Digital?
microprocessor, processor
Related Solutions
Of course, to properly look at this we must know what it means to "natively" execute anything. On the surface this seems like an easy question, but it isn't. Let me elaborate.
But first, let me say that I am massively simplifying this description! There is no way I can explain this in a reasonable number of words without some over-arching generalizations and simplifications. Deal with it.
Let's start with a bit-slice processor (BSP) design. These are the easiest processors to design, the hardest to program for, the smallest in terms of logic size, and the worst in terms of code density. Essentially, an instruction word in a bit-slice processor never goes through an instruction decode step; the instruction word is effectively pre-decoded. The individual bits of the instruction go directly to latches, muxes, ALUs, etc. inside the processor. Consequently the instruction word can be very large. Instructions larger than 256 bits are not uncommon! Normal BSPs are purpose-built for a single task and are not general-purpose CPUs. While BSPs sound somewhat exotic, they are used all over the place but are so deeply embedded that you probably don't notice them.
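To make "the bits drive the hardware directly" concrete, here is a toy sketch in Python. The 16-bit field layout below is entirely made up for illustration (it is not any real bit-slice part); the point is that each field of the wide word selects a register or ALU operation with no decode step in between:

```python
# Hypothetical 16-bit "wide instruction word" for a tiny bit-slice datapath.
# Field layout (an assumption for illustration, not any real BSP):
#   bits 0-1 : ALU operation (00=ADD, 01=SUB, 10=AND, 11=OR)
#   bits 2-3 : left-operand register select
#   bits 4-5 : right-operand register select
#   bits 6-7 : destination register select
#   bit  8   : register-file write enable

def step(regs, word):
    """Execute one wide instruction word against a 4-register file."""
    alu_op = word & 0b11
    left   = regs[(word >> 2) & 0b11]
    right  = regs[(word >> 4) & 0b11]
    dest   = (word >> 6) & 0b11
    wen    = (word >> 8) & 0b1
    result = {0: left + right, 1: left - right,
              2: left & right, 3: left | right}[alu_op]
    if wen:
        regs[dest] = result
    return regs

regs = [5, 3, 0, 0]
# "ADD r0, r1 -> r2, write enabled": op=00, L=r0, R=r1, dest=r2, wen=1
step(regs, (1 << 8) | (2 << 6) | (1 << 4) | (0 << 2) | 0b00)
print(regs[2])  # 8
```

Notice there is no opcode table anywhere: every bit of the word wires straight into a mux select or enable, which is why real BSP words get so wide.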
One step up from a BSP is a RISC CPU. The overall data flow is changed to be more general purpose, and an instruction decode stage is added to the pipeline. Inside the RISC CPU there is still a giant instruction word, like the BSP's, except that the instruction decode is used to convert the 32-bit instruction into that giant instruction word. Fundamentally this instruction decode acts like a giant lookup table that converts the 32-bit instruction into the giant instruction word used in the BSP. It is not literally a giant lookup table, but that is effectively what it is. This instruction decode limits what the instructions can do, but greatly simplifies programming and is what turns this thing into a general-purpose CPU.
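The "effectively a lookup table" idea can be sketched in a few lines. The opcode values and control signals below are invented (the opcode field position loosely follows MIPS-style encoding, but nothing here is a real ISA):

```python
# Sketch: the RISC decode stage as an effective lookup table from a compact
# opcode field to the wide internal control word the datapath consumes.
# Opcodes and signal names are hypothetical, not a real instruction set.
CONTROL_WORDS = {
    # opcode: (alu_op, reg_write, mem_read, mem_write)
    0b000000: ("ADD", True,  False, False),   # register-register add
    0b100011: ("ADD", True,  True,  False),   # load: address = base + offset
    0b101011: ("ADD", False, False, True),    # store
}

def decode(instruction):
    """Extract the 6-bit opcode field and expand it to the control word."""
    opcode = (instruction >> 26) & 0x3F       # top 6 bits of a 32-bit word
    return CONTROL_WORDS[opcode]

print(decode(0b100011 << 26))  # ('ADD', True, True, False)
```

A real decoder is combinational logic rather than a literal dictionary, but the mapping it computes is exactly this shape: one compact instruction in, one wide control word out.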
Next step up, we get to a CISC CPU. The main difference is that the instruction decode becomes more complex. Instead of the ID being just a huge lookup table, the ID converts the 32-bit instruction into a series of BSP-like instructions. You can really think of each 32-bit instruction as being a small subroutine call inside a BSP.
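That "instruction as a small subroutine" view looks like this in miniature. The instruction names and micro-ops are made up for illustration:

```python
# Sketch: a CISC decoder expanding one architectural instruction into a
# short sequence of simple internal operations -- a tiny built-in
# subroutine. Mnemonics and micro-ops are hypothetical.
MICROCODE = {
    "MADD_SAT": [              # multiply-add with saturation, as micro-ops
        "mul t0, a, b",
        "add t0, t0, c",
        "sat t0, t0",
        "mov dst, t0",
    ],
    "ADD": ["add dst, a, b"],  # simple instructions expand to one micro-op
}

def expand(instruction):
    """Return the micro-op sequence for one architectural instruction."""
    return MICROCODE[instruction]

print(len(expand("MADD_SAT")))  # 4
```

The contrast with the RISC sketch is that one entry can now map to a whole sequence, not a single control word.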
Next, you have assembly language. This is the ASCII text that you write, which gets converted into those 32-bit instructions by the assembler and linker. While this is the lowest level of programming that a human might do, there is not always a one-to-one relationship between what the human writes and what the CPU executes. Even here the assembler is doing some level of interpreting and manipulating of the final instructions. For example, MIPS assemblers will rearrange or add instructions to deal with pipeline hazards. I'm sure other assemblers do something similar.
Then you have a fully interpreted language. In this language, the interpreter has to parse the ASCII of each line or command every time that line is executed. This is what most scripting languages do.
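The defining cost of full interpretation is that the parse happens on every execution. Here is a minimal sketch with an invented one-statement language; note the `split` runs on every pass through the loop:

```python
# Sketch of a fully interpreted language: the source line is re-parsed
# from ASCII on *every* execution, even inside a loop.
# The 'let NAME = A + B' syntax is invented for illustration.
def run_line(line, variables):
    """Parse and execute one statement from scratch."""
    _, name, _, a, _, b = line.split()       # re-parsed every single call
    variables[name] = variables.get(a, 0) + variables.get(b, 0)

env = {"x": 2, "y": 3}
for _ in range(1000):                        # the parse cost repeats 1000x
    run_line("let z = x + y", env)
print(env["z"])  # 5
```

Real interpreters are far more elaborate, but this repeated text-parsing overhead is exactly what the tokenized languages below are designed to pay only once.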
There are also fully compiled languages, like C/C++, in which a compiler takes the ASCII source code and converts it into assembly language (or sometimes directly into the normal 32-bit opcodes).
Between interpreted and compiled languages there are "tokenized languages". These are most like interpreted languages, but the ASCII source code is parsed only once. The net effect is that the execution speed is much quicker than a fully interpreted language, but you still have the flexibility of an interpreted language and don't have the compile time of a compiled language. The term "tokenized" is used because the code is pre-parsed, or tokenized, into something that is easier to deal with than straight ASCII. Java is a good example of a tokenized language.
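Reworking the interpreter sketch above into tokenized form shows the difference: the same invented `let` syntax, but the text is parsed exactly once and only the compact token tuple is walked at run time:

```python
# Sketch of a tokenized language: parse the ASCII once into tokens,
# then execute only the tokens. Same invented syntax as before.
def tokenize(line):
    _, name, _, a, _, b = line.split()       # parsing happens exactly once
    return ("STORE", name, "ADD", a, b)      # compact pre-parsed form

def execute(tokens, variables):
    _, name, _, a, b = tokens                # cheap tuple unpack, no parsing
    variables[name] = variables[a] + variables[b]

env = {"x": 2, "y": 3}
program = tokenize("let z = x + y")          # one-time parse
for _ in range(1000):
    execute(program, env)                    # fast token walk each time
print(env["z"])  # 5
```

Java's bytecode plays the role of the token tuple here: the expensive text handling is done up front, and the execution loop only ever sees the pre-digested form.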
There have also been "BASIC CPUs"; essentially these are CPUs that have a BASIC interpreter built into them. They are a normal MCU where the Flash EPROM contains a BASIC interpreter as well as the pre-tokenized BASIC program.
So, back to the question: What does it mean to natively execute a program? Does the program have to be down to the BSP level to be native? If so then almost nothing is native. What about the 32-bit instruction level? Ok, that's what most would call native since that is what the "CPU block" is given to execute. Normally anything ASCII is not "native" since some level of interpretation needs to be done before it can be executed. How about those BASIC MCU's? Do they natively execute BASIC? Probably not.
But let's look more at those BASIC MCUs. The BASIC interpreter is stored in the Flash EPROM and is made up of that MCU's standard opcodes. But what if the interpreter were actually part of a CISC CPU's instruction decode? Instead of the instruction decode running some subroutine for a "Multiply and Add with Saturation" instruction, it ran a subroutine for "let X=5 + y". Would that CPU then be said to execute BASIC natively? I would say so!
But let's look at the C language specifically, and let's assume some crazy CISC processor that would interpret ASCII C source code directly. As you look at the tasks of managing files, parsing ASCII, and managing variables, you notice two things: either the BSP at the core of our C-CPU becomes absolutely huge and unmanageable, or the BSP starts to look like what any other modern CPU has. But if the BSP looks similar to other CPUs, then the instruction decode must do all the hard work, which it is not well suited for either.
What you end up with, if you follow this to its natural conclusion, is something that looks like a normal RISC or CISC CPU that has a C interpreter already programmed into its Flash EPROM. Exactly like those BASIC MCUs I mentioned before!
The net result is that a CPU that runs C "natively" is not useful -- even as an educational project. I could go on and on, but I'm almost late for a meeting now. Enjoy!
All microprocessors, and indeed all synchronous digital circuits, work at what is called the "register transfer level". Basically, all that any microprocessor does is load values into registers from different sources. Those sources can be memory, other registers, or the ALU (Arithmetic Logic Unit, a calculator inside the processor). Some of the registers are simple registers inside the processor; some can be special function registers that are located around the CPU, in 'peripherals' such as I/O ports, the memory management unit, the interrupt unit, this and that.
In this model, 'instructions' are basic sequences of register transfers. Normally it doesn't make sense to give the programmer the ability to control each register transfer individually, because not all of the possible register transfer combinations are meaningful, so allowing the programmer to express them all would be wasteful in terms of memory consumption. So basically each processor declares a set of allowed register-transfer sequences that the programmer can ask the processor to perform, and these are called 'instructions'.
For example, ADD A, B, C might be an operation where the sum of registers A and B is placed into register C. Internally, that would be three register transfers: load the adder's left input from A, load the adder's right input from B, then load C from the adder's output. Additionally, to fetch the instruction itself, the processor makes the necessary transfers to load the memory address register from the program counter, load the instruction register from the memory data bus, and finally load the program counter from the program counter incrementer.
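The three transfers above can be played out on a toy machine. The register names come from the text; the machine itself is an illustration, with the adder modeled as combinational logic between its input and output registers:

```python
# The ADD A, B, C example, spelled out as individual register transfers.
regs = {"A": 4, "B": 6, "C": 0,
        "adder_left": 0, "adder_right": 0, "adder_out": 0}

def transfer(dst, src):
    """One register transfer: copy the value of src into dst."""
    regs[dst] = regs[src]

# The three transfers that make up the ADD instruction:
transfer("adder_left", "A")                                   # 1
transfer("adder_right", "B")                                  # 2
regs["adder_out"] = regs["adder_left"] + regs["adder_right"]  # combinational
transfer("C", "adder_out")                                    # 3
print(regs["C"])  # 10
```

A real datapath would do the fetch transfers (memory address register, instruction register, program counter) around every instruction in exactly the same copy-register-to-register style.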
The 8086 used an internal ROM lookup table to determine which register transfers make up each instruction. The contents of that ROM were quite freely programmable by the designers of the 8086 CPU, so they chose instruction sequences that seemed useful for the programmer, instead of choosing sequences that would be simple and fast for the machine to execute. Remember that in those days most software was written in assembly language, so it made sense to make that as easy as possible for the programmer. Later on, Intel designed the 80286, in which they made what now seems a critical error. They had some unused microcode memory left, thought that they might as well fill it with something, and came up with a bunch of instructions just to fill the microcode. This bit them in the end, as all those extra instructions needed to be supported by the 386, 486, Pentium, and later processors, which no longer relied on microcode in the same way.
ARM is a much newer processor design than the 8086, and the ARM people took a different design route. By then, computers were common and there were a lot of compilers available. So instead of designing an instruction set that is nice for the programmer, they chose an instruction set that is fast for the machine to execute and easy for a compiler to generate code for. And for a while, x86 and ARM were different in the way that they execute instructions.
Time then goes by and CPUs become more and more complex. Microprocessors are now designed using computers, not pencil and paper. Nobody uses microcode in the old way any more; processors have a hardwired (pure logic) execution control unit. All have multiple integer calculation units and multiple data buses. All translate their incoming instructions, reschedule them, and distribute them among the processing units. Even old RISC instruction sets are translated into new internal operation sets. So the old question of RISC versus CISC doesn't really exist any more. We're back at the register transfer level again: programmers ask CPUs to do operations, and CPUs translate them into register transfers. Whether that translation is done by a microcode ROM or by hardwired digital logic really isn't that interesting any more.
Best Answer
This is a complex question, because what actually makes a part "digital" can have multiple definitions.
Fundamentally, reality is analog (at least at the scales at which most microprocessors operate). Therefore, you can make a coherent argument that there are not actually any digital microprocessors. "Digital" is a theoretical mechanism for simplifying the expression of analog systems in which the analog voltages are (as much as possible) constrained to two states, each of which represents a boolean value.
This simplification makes it much easier for our puny human brains to contemplate complex systems, and much easier for people to write software to evaluate the behaviour of said complex systems.
However, if you are asking if any components inside most microprocessors operate outside this simplified view, the answer is generally no.
Basically, assuming you're asking whether components inside a microprocessor operate outside of the digital simplification, the question then becomes "How do you define a microprocessor?" Fundamentally, the *CPU core* of almost all microprocessors is purely digital.
However, many, many microprocessors integrate on-die peripherals (such as ADCs, comparators, and oscillators) that are very much "analog" devices, so you must ask whether you are defining the entire integrated circuit as the "microprocessor", or just the actual processing core, which may be only a small part of the processor's IC die.