Electrical – Why are GPU dies so much physically bigger than CPU dies

Tags: cpu, gpu

I understand that the physical sizes of microchips are generally limited by silicon yields. The larger your chip is, the more waste occurs when you hit a defect in the silicon and have to throw it away, and at some point this becomes unsupportable. I've noticed, however, that modern GPUs always seem to be significantly bigger than CPUs.
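The yield argument above can be sketched with the simple Poisson yield model, in which the fraction of defect-free dies falls off exponentially with die area. The defect density `d0_per_mm2` below is an assumed, illustrative value, not a real fab figure:

```python
# Poisson yield model: a die is good only if it contains zero fatal defects.
# Yield Y = exp(-A * D0), where A is die area and D0 is defect density.
import math

def die_yield(area_mm2: float, d0_per_mm2: float = 0.001) -> float:
    """Expected fraction of defect-free dies of the given area."""
    return math.exp(-area_mm2 * d0_per_mm2)

for area in (178, 400, 754):
    print(f"{area} mm^2 -> {die_yield(area):.1%} yield")
```

Note the falloff is worse than linear: at this assumed defect density, a ~750 mm^2 die yields roughly half as often as a ~180 mm^2 one, on top of fitting far fewer candidates per wafer.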

High-end consumer GPUs seem to run in the 400-600 mm^2 range, with the RTX 2080 Ti sitting at a whopping 754 mm^2.

It's a bit harder to find die sizes for high-end consumer CPUs, but it looks like the i9-9900K, for example, has a die size of 178 mm^2. The latest Ryzen generation has even smaller dies thanks to its "chiplet" architecture, with the largest single die being the I/O die at around 125 mm^2.

Why are GPUs so much larger than CPUs? Is it something to do with silicon yields and chip architecture, or is there some kind of economic consideration?

Best Answer

GPUs are parallel processors, so their performance scales roughly linearly with die area. A bigger die can also be run at lower clocks, making it more power-efficient while still being faster overall. It therefore makes sense to make the die as large as is economically feasible: the result is both faster and more efficient.
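A toy sketch of that trade-off, assuming dynamic power scales as f * V^2 and supply voltage scales roughly with frequency (so power per unit goes as f^3); all constants are illustrative:

```python
# Compare one fast unit against two parallel units at half the clock.
# Assumption: V ~ f, so dynamic power per unit ~ f * V^2 ~ f^3.

def throughput(n_units: int, clock_ghz: float) -> float:
    # Parallel workload: total work scales with units * clock.
    return n_units * clock_ghz

def power(n_units: int, clock_ghz: float) -> float:
    # Illustrative dynamic power: n_units * f^3 (constants set to 1).
    return n_units * clock_ghz ** 3

small_hot = (1, 2.0)   # one unit at 2 GHz
big_cool  = (2, 1.0)   # twice the silicon at half the clock

print(throughput(*small_hot), power(*small_hot))  # same throughput, 4x power
print(throughput(*big_cool),  power(*big_cool))   # same throughput, 1/4 power
```

Under these assumptions, spending twice the die area to clock half as fast delivers the same throughput at a quarter of the dynamic power, which is exactly why wide-and-slow wins for parallel workloads.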

CPUs have much less parallelism, so the return from adding more die area is much smaller, and for many applications non-existent or even negative. There is therefore a roughly optimal die size for a consumer CPU on each node, and it is generally quite small. The comparison to the 178 mm^2 9900K is actually slightly misleading: only about 100 mm^2 of it is CPU cores. The remainder is a relatively large GPU, video decoder, I/O, etc. Since there is also a minimum die size needed to fit all the I/O pins, low-power CPU dies in each generation are often mostly GPU, just to fill up the otherwise unused die area.
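Putting the two effects together, a rough good-dies-per-wafer comparison for a ~178 mm^2 CPU die versus a ~754 mm^2 GPU die on a 300 mm wafer, reusing the same simple Poisson yield model (the defect density is an assumed value, and edge loss and scribe lines are ignored):

```python
# Naive good-dies-per-wafer estimate: wafer area / die area, times yield.
import math

WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2  # 300 mm wafer, ~70,686 mm^2
D0 = 0.001  # assumed defects per mm^2, illustrative only

def good_dies(area_mm2: float) -> float:
    candidates = WAFER_AREA_MM2 / area_mm2        # dies that fit (naively)
    return candidates * math.exp(-area_mm2 * D0)  # scaled by expected yield

print(f"178 mm^2 die: ~{good_dies(178):.0f} good dies per wafer")
print(f"754 mm^2 die: ~{good_dies(754):.0f} good dies per wafer")
```

Under these assumptions the small die yields several hundred good parts per wafer while the large one yields a few dozen, which is why a GPU vendor only pushes die size this far when the extra area buys proportional performance.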