Performance Optimization – Can CPU and GPU Loads Both Reach 100%?

cpu, gpu, optimization, performance

This is a general question on a subject I've found interesting as a gamer: CPU/GPU bottlenecks and programming. If I'm not mistaken, I've come to understand that both the CPU and GPU calculate stuff, but that one is better at some calculations than the other due to the difference in architecture. For example, cracking hashes or cryptocurrency mining seems way more efficient on GPUs than on CPUs.

So I've wondered: is having a GPU at 100% load while the CPU is at 50% (for example) inevitable?

Or, more precisely: can some calculations that are normally done by the GPU be done by the CPU instead when the GPU is already at 100% load, so that both reach 100% load?

I've searched a bit about the subject, but have come back quite empty-handed.
I think and hope this has its place in this subsection, and I'm open to any documentation or reading material you might point me to!

Best Answer

Theoretically yes, but practically it's rarely worth it.

Both CPUs and GPUs are Turing-complete, so any algorithm that can be computed by one can also be computed by the other. The question is how fast and how convenient.

While the GPU excels at doing the same simple calculation on many data points of a large dataset, the CPU is better at more complex algorithms with lots of branching. For most problems, the performance difference between the CPU and GPU implementations is huge. That means using one to take work from the other when it stalls would not really lead to a notable increase in performance.
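To make the distinction concrete, here is a toy sketch (both workloads are hypothetical examples, not from the question) contrasting the two styles of work: a data-parallel kernel applies one simple operation uniformly to every element, which is the GPU's strength, while a branch-heavy task takes a data-dependent path per input, which suits the CPU.

```python
def data_parallel_kernel(xs):
    # Same cheap arithmetic on every element: trivially parallelizable,
    # which is why a GPU can run thousands of these lanes at once.
    return [x * x + 1 for x in xs]

def branch_heavy_task(n):
    # Collatz-style iteration: the number of steps is data-dependent and
    # divergent, so parallel GPU lanes would stall waiting for each other.
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(data_parallel_kernel([1, 2, 3]))  # [2, 5, 10]
print(branch_heavy_task(6))             # 8
```

On a GPU, every lane in a group executes in lockstep, so the divergent loop in `branch_heavy_task` forces fast lanes to idle until the slowest one finishes; the uniform kernel has no such problem.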

Moreover, the price you have to pay for this is that you need to program everything twice, once for the CPU and once for the GPU. That's more than twice as much work, because you will also have to implement the switching and synchronization logic. That logic is extremely difficult to test, because its behavior depends on the current load. Expect very obscure and impossible-to-reproduce bugs from this stunt.
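A minimal sketch of what "program everything twice" looks like, with all names hypothetical: the same operation exists as a `cpu_impl` and a `gpu_impl`, and a dispatcher splits the input between them and synchronizes the results. In real code `gpu_impl` would be a CUDA or OpenCL kernel launch; here both are plain Python, since the split-and-join logic itself is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_impl(chunk):
    # CPU-side implementation of the operation.
    return [x * x for x in chunk]

def gpu_impl(chunk):
    # Stand-in for a kernel launch; a real version would copy the chunk
    # to device memory, run the kernel, and copy the result back.
    return [x * x for x in chunk]

def hybrid_map(data, gpu_fraction=0.8):
    # Static split: most of the work goes to the "GPU", the rest to the CPU.
    # A production scheduler would rebalance based on measured throughput,
    # which is exactly the load-dependent, hard-to-test part warned about above.
    cut = int(len(data) * gpu_fraction)
    with ThreadPoolExecutor(max_workers=2) as pool:
        gpu_future = pool.submit(gpu_impl, data[:cut])
        cpu_future = pool.submit(cpu_impl, data[cut:])
        # Synchronization point: both halves must finish before merging.
        return gpu_future.result() + cpu_future.result()

print(hybrid_map(list(range(10))))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Note that even this toy version has a tuning knob (`gpu_fraction`) whose correct value depends on the relative speed of the two devices at runtime, which is why such hybrid schemes are so fragile in practice.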
