Power dissipation on chip vs. overall power dissipation

heat, integrated-circuit, power-dissipation

I was reading this pdf on computation from MIT's OpenCourseWare, and on p. 8, through some rather opaque mathematical manipulation, they reach the conclusion that the cooler you make your chips, the less heat is dissipated and the higher the maximum integration density can get. This seems intuitively logical to me, but their last statement piqued my interest:

At lower temperatures, the power dissipation on chip is decreased, but the overall power dissipation actually increases due to the requirement for refrigeration.

For their actual calculation they used room temperature, so I'm assuming they mean that cooling the chip to anything below room temperature would be self-defeating, because the cooling itself would increase the overall power dissipation.
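To check that I follow the first half of that claim, here is my own rough sketch of the kind of estimate I understand them to be making (not their actual derivation), using the Landauer bound that erasing one bit dissipates at least kT ln 2: the minimum heat per operation scales with temperature, so for a fixed cooling capacity per unit area, the sustainable integration density scales like 1/T.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temp_k):
    """Minimum energy dissipated per irreversibly erased bit (Landauer bound)."""
    return k_B * temp_k * math.log(2)

# 300 K (room temperature), 77 K (liquid nitrogen), 4 K (liquid helium)
for temp_k in (300, 77, 4):
    print(f"T = {temp_k:3d} K: at least {landauer_limit(temp_k):.2e} J per erased bit")
```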

But I don't understand this argument. You could say the same about refrigerators. Why does it matter that the overall power dissipation increases? If you cool the chips, you can increase the integration density, which leads to more powerful computers. Why does it matter that the heat in other parts of the system increases? I might be wrong about this, but isn't that how refrigerators work too? It's just the laws of thermodynamics.
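To make the second half of their claim concrete, here is a minimal sketch of the refrigeration overhead, assuming an ideal Carnot refrigerator rejecting heat to a room at 300 K (my own numbers, not from the pdf):

```python
def overall_power(p_chip_w, t_cold_k, t_hot_k=300.0):
    """Chip power plus the *minimum* refrigerator power (ideal Carnot fridge).

    Pumping p_chip_w watts of heat from t_cold_k up to t_hot_k takes at least
    p_chip_w * (t_hot_k - t_cold_k) / t_cold_k watts of work, so the total
    heat dumped into the room is p_chip_w * t_hot_k / t_cold_k.  Real
    refrigerators are far less efficient, so this is a lower bound.
    """
    carnot_cop = t_cold_k / (t_hot_k - t_cold_k)  # coefficient of performance
    return p_chip_w + p_chip_w / carnot_cop

for t_cold_k in (250.0, 150.0, 77.0):
    print(f"100 W chip at {t_cold_k:.0f} K -> at least "
          f"{overall_power(100, t_cold_k):.0f} W overall")
```

If I combine this with the Landauer sketch above, the per-bit cost kT_c ln 2 times the Carnot factor T_h/T_c comes out to kT_h ln 2 regardless of T_c, so even an ideal fridge only breaks even at the wall plug and any real one loses. That might be all they mean, but it still doesn't answer the density question.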

So an answer to this question would clarify the above statement (or say it's indeed stupid). Why shouldn't we cool chips to low temperatures in order to increase the maximum integration density?

Best Answer

Cooling is a vital part of most computer system design.

Air at room temperature is the usual coolant, because it's available everywhere* and you can move it easily with a small fan.

Some enthusiasts and large systems are liquid cooled. The plumbing and pumps can't be miniaturized and add cost, so this is usually done only on "tower"-size systems: http://www.pcworld.com/article/227964/pc_liquid_cooling_system_do_it_yourself.html

The extreme is liquid nitrogen cooling (http://www.corsair.com/blog/setting-up-your-pc-for-liquid-nitrogen-overclocking/), which is even more expensive and impractical. It's nearly always easier to stick with conventional cooling and buy more computers to achieve the same computational throughput.

(There are also issues with hold timing when running extremely cold: signals may arrive too early if the chip has not been designed to run at that temperature. If you move the minimum design temperature down to −200 °C, that may force you to move the maximum design temperature down as well, leaving you with a product that requires a liquid-air supply and thus a very limited market.)
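To make the hold-timing point concrete, here is a first-order sketch (all delay numbers hypothetical; real sign-off uses full corner libraries). CMOS gates generally get faster when cold, so a minimum-delay path that had positive hold slack at room temperature can go negative at cryogenic temperatures:

```python
def hold_slack_ns(t_cq_min, t_logic_min, t_hold, t_clk_skew):
    """Hold-time slack at a capturing flip-flop (positive = safe).

    New data launched on a clock edge must not reach the capturing flop
    before it has finished latching the previous value:
        t_cq_min + t_logic_min >= t_hold + t_clk_skew
    """
    return (t_cq_min + t_logic_min) - (t_hold + t_clk_skew)

# Hypothetical path delays in ns.  The minimum-delay terms shrink as the
# chip cools, while the hold requirement and clock skew do not.
print(f"room temp: {hold_slack_ns(0.10, 0.30, 0.05, 0.20):+.2f} ns")  # safe
print(f"very cold: {hold_slack_ns(0.06, 0.15, 0.05, 0.20):+.2f} ns")  # violation
```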

  • Except in space, where ultimately all your cooling must be radiative.