Why are there two clock rates (core vs memory clock) in a GPU?

clock, clock-speed, gpu, hardware

I learned at school that the clock inside a computer is a signal that keeps switching between 0 and 1 (or active and inactive). There's also another, delayed clock with the same frequency. These two clocks are then ANDed and ORed together, and the outputs are the enable-clock and the set-clock.

[Figure: clock and delayed-clock]

[Figure: enable-clock vs set-clock]

When data is transferred inside the computer, it runs from the original register (inside the processing unit or in RAM) onto the bus while the enable-clock is on, and is then latched into its destination register when the set-clock is 1. Because of this, I thought there was only one clock speed running through the entire computer.
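To make that two-phase transfer concrete, here is a toy Python sketch of the scheme described above. The names (Register, bus, enable_clk, set_clk) are my own illustration; real hardware is edge-triggered logic, not software:

```python
# Toy model of the enable/set transfer described in the question.
# Names are illustrative only, not real hardware terminology.

class Register:
    def __init__(self, value=0):
        self.value = value

bus = None  # the shared data bus between registers

def tick(src, dst, enable_clk, set_clk):
    """One clock phase: drive the bus and/or latch from it."""
    global bus
    if enable_clk:        # enable-clock on: the source drives the bus
        bus = src.value
    if set_clk and bus is not None:
        dst.value = bus   # set-clock on: the destination latches the bus

src, dst = Register(42), Register(0)
tick(src, dst, enable_clk=True,  set_clk=False)  # phase 1: put data on the bus
tick(src, dst, enable_clk=False, set_clk=True)   # phase 2: latch it
print(dst.value)  # -> 42
```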

Back to the GPU, which has its own processing unit and memory. Retailers' product pages always state two clock rates for a GPU: the core clock and the memory clock (which is several times faster than the core clock). Which of those two clocks refers to the clock I've described above? And what is the other clock?

Best Answer

As a broad brush-stroke, the product of bits-processed-per-clock and clocks-per-second gives you the data throughput of the device. Here, the GPU core processes more data bits per clock than the RAM does.
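As a back-of-the-envelope sketch of that formula (the widths and clock rates below are invented for illustration, not taken from any real card):

```python
# throughput = bits processed per clock * clocks per second.
# All numbers here are assumptions chosen for illustration only.

def throughput_gbit_s(bits_per_clock, clock_hz):
    return bits_per_clock * clock_hz / 1e9  # gigabits per second

core   = throughput_gbit_s(2048, 1.5e9)  # wide core at a slower clock
memory = throughput_gbit_s(256,  7.0e9)  # narrow memory bus at a faster clock
print(f"core: {core:.0f} Gbit/s, memory: {memory:.0f} Gbit/s")
# -> core: 3072 Gbit/s, memory: 1792 Gbit/s
```

Even though the memory clock is several times faster, the core's much greater per-clock width gives it the higher raw throughput.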

Devices achieve this by having internal datapaths that are wider than the one going to memory, and/or by having more of these datapaths operating in parallel. Both allow more bits per clock to be processed. Your GPU uses both techniques: it has internal data buses that are 1x/2x/4x the width of the memory data bus, and it has lots of processing engines operating in parallel.
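A minimal sketch of how parallel datapaths multiply the bits per clock; the engine count and lane width are hypothetical, chosen only so the arithmetic lines up with the figures above:

```python
# Total core bits per clock from parallel datapaths; numbers are made up.
engines   = 64   # processing engines running in parallel (assumption)
lane_bits = 32   # datapath width of each engine, in bits (assumption)
core_bits_per_clock = engines * lane_bits      # 2048 bits per core clock

memory_bus_bits = 256  # external memory bus width (assumption)
print(core_bits_per_clock // memory_bus_bits)  # -> 8x the memory bus width
```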

Note that the GPU does not achieve a perfect balance: the core will still be slowed down by the memory bandwidth.
