Windows – How does the processor identify already-decoded instructions when it wants the next instruction

Tags: cache, microprocessor, windows

I have read that, to avoid cache misses, many instructions are fetched together, decoded, and kept in the cache, since the instructions near the REQUESTED INSTRUCTION will most likely be requested in the near future.

I have just started Computer Architecture and am confused: how does the PROCESSOR know which instructions have already been decoded and kept in the cache?

A computer is a machine, right? So when it encounters a request for an instruction from the processor, the cache is read, but how does it know that the cache already holds that instruction in decoded form?

Is there a marker, e.g. "HELLO, I am the decoded instruction INSTR2, which comes after instruction INSTR1, and I belong to this process…"?

Please help me out here. I am a beginner in computer architecture and would like some pointers regarding this question.

Thank you.

Best Answer

Generally speaking, a cache is a layer which abstracts access to memory. When a piece of information is needed, it is specified by its address. All entries in the cache are tagged with the memory address of the datum that they hold. When the processor requests a datum, the cache control circuitry searches the cache for a matching address.

  • If the cache is fully associative, then the entire address (except for the least significant bits) is matched against the entire cache. This matching is not a linear search, but an associative lookup: each cache entry compares its tag to the address in parallel, and one of them announces itself as a match.
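The parallel comparison can be modeled in software, though only as a sketch: the loop below checks every entry one by one, whereas real hardware compares all tags simultaneously. The line size and helper names here are hypothetical, chosen just for illustration.

```python
# Model of a fully associative lookup (illustrative only; in hardware,
# all tag comparisons happen at once, not in a loop).

OFFSET_BITS = 6  # assume 64-byte cache lines, so the low 6 bits are the offset

def tag_of(address):
    # Drop the least significant (offset) bits; the rest is the tag.
    return address >> OFFSET_BITS

def lookup_fully_associative(entries, address):
    """entries: list of (tag, data) pairs covering the whole cache."""
    tag = tag_of(address)
    for entry_tag, data in entries:
        if entry_tag == tag:
            return data  # hit: this entry "announced itself" as a match
    return None  # miss

cache = [(tag_of(0x1000), "line A"), (tag_of(0x2040), "line B")]
print(lookup_fully_associative(cache, 0x1004))  # same 64-byte line as 0x1000 -> "line A"
print(lookup_fully_associative(cache, 0x3000))  # miss -> None
```

Note that 0x1004 hits the same entry as 0x1000: the offset bits are stripped before comparison, so any address within the same line matches the same tag.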

  • If the cache is set associative, then some of the address bits are used to directly select a bucket. For instance, if there are 16 buckets, then four bits from the address can be taken as a bucket address 0 to 15. Then an associative lookup for the address takes place within just that bucket. This means that for any given memory address, we know which cache bucket it maps to, but not which specific cache line within that bucket.
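The address split described above can be sketched directly: with 16 buckets, four bits select the bucket and the remaining high bits form the tag. The line size and parameter names below are assumptions for illustration, not a real processor's layout.

```python
# Sketch of set-associative bucket selection (hypothetical parameters:
# 64-byte lines, 16 buckets, matching the example in the text).

OFFSET_BITS = 6   # 64-byte lines
BUCKET_BITS = 4   # 16 buckets

def split_address(address):
    """Split an address into (tag, bucket, offset) fields."""
    offset = address & ((1 << OFFSET_BITS) - 1)
    bucket = (address >> OFFSET_BITS) & ((1 << BUCKET_BITS) - 1)
    tag = address >> (OFFSET_BITS + BUCKET_BITS)
    return tag, bucket, offset

def lookup_set_associative(buckets, address):
    """buckets: list of 16 lists, each holding (tag, data) ways."""
    tag, bucket, _ = split_address(address)
    for way_tag, data in buckets[bucket]:  # associative lookup within one bucket
        if way_tag == tag:
            return data
    return None

buckets = [[] for _ in range(16)]
tag, bucket, _ = split_address(0x1234)
buckets[bucket].append((tag, "line X"))
print(lookup_set_associative(buckets, 0x1234))  # -> "line X"
```

Only one bucket is searched per lookup, which is why set-associative caches are cheaper than fully associative ones: the parallel tag comparison hardware only needs as many comparators as there are ways in a bucket.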

  • If a cache is direct mapped, then some of the address bits are used to select a single cache line, which either holds data for that address or not. So there is no associative lookup. Each address is mapped to just a single cache line. (If a program alternately accesses two items at different addresses that map to the same cache line, the performance is bad. This is the worst/cheapest kind of cache.)

When there is a cache hit, then the item can be quickly supplied to the requesting circuit out of the cache. If there is a miss, then a memory access cycle has to be executed. The data is not only given to the requesting circuit, but also installed into the cache (replacing something else that has not recently been accessed).
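The hit/miss flow above can be sketched for the simplest case, a direct-mapped cache: on a miss, the "memory access cycle" runs and the fetched line is installed, evicting whatever previously occupied that slot. The sizes and the `backing_memory` callback are hypothetical stand-ins.

```python
# Sketch of a direct-mapped cache with miss handling (assumed parameters:
# 16 lines of 64 bytes; `backing_memory` stands in for a slow memory access).

OFFSET_BITS = 6
LINE_BITS = 4  # 16 lines

def line_and_tag(address):
    line = (address >> OFFSET_BITS) & ((1 << LINE_BITS) - 1)
    tag = address >> (OFFSET_BITS + LINE_BITS)
    return line, tag

class DirectMappedCache:
    def __init__(self, backing_memory):
        self.lines = [None] * (1 << LINE_BITS)  # each slot: (tag, data) or None
        self.backing_memory = backing_memory

    def read(self, address):
        line, tag = line_and_tag(address)
        entry = self.lines[line]
        if entry is not None and entry[0] == tag:
            return entry[1], "hit"  # quickly supplied out of the cache
        # Miss: run a memory access cycle, then install the line,
        # evicting whatever previously occupied this slot.
        data = self.backing_memory(address)
        self.lines[line] = (tag, data)
        return data, "miss"

cache = DirectMappedCache(lambda addr: f"data@{addr >> OFFSET_BITS:#x}")
print(cache.read(0x1000))  # ('data@0x40', 'miss')
print(cache.read(0x1008))  # same line, same tag -> ('data@0x40', 'hit')
```

Note that the requesting code just calls `read`; it never knows or cares whether the data came from the cache or from memory, which is exactly the point made in the closing paragraph.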

Instruction caches tend to be specialized, to take advantage of the access patterns and the structure of the data. The cache may work at a higher level, combined with the instruction decoding. The requesting circuit asks not simply for an instruction opcode, but it demands a decoded instruction. The combined caching and decoding circuitry provides it. The idea is the same. Take the address and find a decoded instruction for that address. If it's not found in the cache, then it must be fetched and decoded.
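As a rough sketch of this idea, the cache below is keyed by instruction address and stores the decoded form, decoding only on a miss. The `fetch` and `decode` helpers and their opcode tables are made up for illustration; real decoded-instruction (micro-op) caches are far more elaborate.

```python
# Sketch of a decoded-instruction cache: keyed by address, holding
# already-decoded instructions. `fetch` and `decode` are hypothetical
# stand-ins with made-up opcode values.

def fetch(address):
    # Pretend memory access returning raw opcode bytes (made-up contents).
    return {0x100: 0x90, 0x101: 0xC3}.get(address)

def decode(opcode):
    # Pretend decoder mapping opcodes to a decoded representation.
    return {0x90: "NOP", 0xC3: "RET"}.get(opcode)

decoded_cache = {}  # address -> decoded instruction

def get_decoded_instruction(address):
    if address in decoded_cache:
        return decoded_cache[address]     # hit: decoding was already done
    decoded = decode(fetch(address))      # miss: fetch, then decode
    decoded_cache[address] = decoded      # install for next time
    return decoded

print(get_decoded_instruction(0x100))  # "NOP" (fetched and decoded on a miss)
print(get_decoded_instruction(0x100))  # "NOP" (served already-decoded from the cache)
```

The requesting circuit asks for "the decoded instruction at this address" and gets one either way; whether decoding happened just now or long ago is invisible to it.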

So the answer to the question "how does the processor know" is that the processor is divided into logical units, and these units provide services to each other. The units which request data from memory do not have to be aware of the cache. The responsibility is put into the cache control circuitry. That is, inside the overall processor there is effectively a smaller processor which in fact does not know that the data is in a cache.