There are actually several reasons.
First and probably foremost, the data stored in the instruction cache is generally somewhat different from what's stored in the data cache -- along with the instructions themselves, there are annotations for things like where the next instruction starts, to help out the decoders. Some processors (e.g., NetBurst, some SPARCs) use a "trace cache", which stores the result of decoding an instruction rather than storing the original instruction in its encoded form.
Second, it simplifies circuitry a bit -- the data cache has to deal with reads and writes, but the instruction cache only deals with reads. (This is part of why self-modifying code is so expensive -- instead of directly overwriting the data in the instruction cache, the write goes through the data cache to the L2 cache, and then the line in the instruction cache is invalidated and reloaded from L2.)
Third, it increases bandwidth: most modern processors can read data from the instruction cache and the data cache simultaneously. Most also have queues at the "entrance" to the cache, so they can actually do two reads and one write in any given cycle.
Fourth, it can save power. While you need to maintain power to the memory cells themselves to preserve their contents, some processors can and do power down some of the associated circuitry (decoders and such) when it's not being used. With separate caches, they can power up this circuitry separately for instructions and data, increasing the chances of a circuit remaining unpowered during any given cycle (I'm not sure whether any x86 processors do this -- AFAIK, it's more of an ARM thing).
The number of lines of source code has nothing to do with how the CPU executes the program. I'd recommend reading up on assembly language, because it will teach you a lot about how the hardware actually does things. You can also get assembler output from many compilers.
That code might compile into something like (in a made up assembly language):
load R1, [x]  ; load the data stored at memory location x into register 1
add R1, 5     ; add 5 to the value in register 1
store [x], R1 ; store the modified value back into memory location x
sub R1, 3     ; subtract 3 from the value in register 1
store [y], R1 ; store the result into memory location y
However, if the compiler knows that a variable isn't used again, the store operation may not be emitted.
For the debugger to know what machine code corresponds to each line of program source, the compiler adds annotations (debug information) recording which source line corresponds to which location in the machine code.
Best Answer
The symbol below represents a transistor, the fundamental element of any modern computing device. It acts as a switch; current (the movement of electrons) flows from the bottom wire to the top wire when a voltage is applied to the left wire.
Logic gates are created by combining transistors in various combinations. For example, the following circuit implements a NAND gate: a logic gate whose output (Q) goes low (zero) only when both of its inputs (A and B) are high; for every other input combination, the output is high.
In this circuit, the output, labeled AB (with a bar over it), is pulled high through resistor R1. When a voltage is applied to both inputs A and B, both transistors switch on, shorting the output to ground (logic zero).
Here is the symbol for a NAND gate:
Logic gates can be combined to produce circuits that perform useful work. This circuit adds two bits; it's called a half-adder:
And this is a full adder:
Full adders can be combined to handle multiple bits. This circuit is capable of adding two four-bit numbers:
With a bit of imagination, you can see how this could be extended to numbers of arbitrary size, including 32-bit or 64-bit addition operations.
Each operation so created with logic gates is mapped in the processor to a number, called an opcode, that the processor recognizes as corresponding to that operation. On an Intel processor, an opcode maps to an assembly instruction, the instruction is used in assembly code, and that assembly code corresponds to C source code.
So as you can see, the instructions that you write in your software translate into machine instructions that map onto logic gates that are created from transistors that act as switching devices, precisely directing the flow of electrons to produce a specific result.
Disclaimer for the pedants: This is just an illustration. It is not meant to describe precisely what takes place in a given processor, but rather to illustrate the concept of mapping machine instructions to actual circuitry for educational purposes only. Don't drive or operate heavy machinery while reading this post.