Electronics – Why do CPUs need so much current?

Tags: amperage, cpu, current, power, transistors

I know that a typical desktop CPU (from Intel or AMD) can consume 45-140 W, and that many CPUs run at a core voltage of around 1.2 V or 1.25 V.

So, assuming a CPU operating at 1.25 V with a TDP of 80 W, it draws 80 W / 1.25 V = 64 A (a lot of amps).
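A quick back-of-envelope check of that figure, using only the example numbers above:

```python
# Average supply current implied by the example TDP and core voltage above.
tdp_watts = 80.0       # example TDP
vcore_volts = 1.25     # example core voltage

i_avg = tdp_watts / vcore_volts   # P = V * I  ->  I = P / V
print(f"{i_avg:.0f} A")           # 64 A
```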

  1. Why does a CPU need more than 1 A in its circuits (assuming FinFET transistors)? I know that the CPU is idle most of the time, and that the 64 A comes in "pulses" because the CPU is clocked, but why can't a CPU operate at 1 V and 1 A?

  2. Approximately how much current does a small, fast FinFET transistor need (say, a 14 nm device switching at 3.0 GHz)?

  3. Does higher current make transistors switch on and/or off more quickly?

Best Answer

  1. CPUs are not 'simple' by any stretch of the imagination. They have a few billion transistors, each of which leaks a small amount at idle and has to charge and discharge the gate and interconnect capacitance of other transistors when it switches. Each one draws a small current, but multiply that by a few billion and you end up with a surprisingly large total. 64 A is already an average: when switching, the transistors can draw far more than that, and bypass capacitors smooth out the difference. Remember that your 64 A figure came from working backwards from the TDP, so it is a time-averaged figure, and the actual current can vary significantly around it at many time scales (within a clock cycle, between different operations, between sleep states, etc.). Also, you might be able to get away with running a CPU designed to operate at 3 GHz on 1.2 volts and 64 amps at 1 volt and 1 amp instead... just maybe at 3 MHz. At that point you would also have to worry about whether the chip uses dynamic logic with a minimum clock frequency, in which case you might instead have to run it at a few hundred MHz to a GHz and cycle it into deep sleep periodically to get the average current down. The bottom line is that power = performance; the performance of most modern CPUs is in fact thermally limited.
  2. This is relatively easy to estimate: \$I = C v \alpha f\$, where \$I\$ is the current, \$C\$ is the load capacitance, \$v\$ is the voltage, \$\alpha\$ is the activity factor, and \$f\$ is the switching frequency. I'll see if I can get ballpark numbers for a FinFET's gate capacitance and edit; there is also a numeric sketch just after this list.
  3. Sort of. The faster the gate capacitance is charged or discharged, the faster the transistor switches. Charging faster requires either a smaller capacitance (set by the geometry) or a larger current (limited by the interconnect resistance and supply voltage). Transistors that switch faster can then switch more often, which in turn means more average current draw (proportional to the clock frequency).
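To put numbers on points 1 and 2, here is a minimal sketch of the \$I = C v \alpha f\$ estimate, including the drop-the-voltage-and-clock thought experiment from point 1. The total switched capacitance is a placeholder (one billion gates at an assumed 0.1 fF each, per the edit below), not a measured value:

```python
# Minimal sketch of average dynamic current I = C * v * alpha * f.
# The capacitance is an assumption (1e9 gates at ~0.1 fF each), not a datum.

def dynamic_current(c_farads, volts, alpha, freq_hz):
    """Average current to charge/discharge capacitance C at rate alpha * f."""
    return c_farads * volts * alpha * freq_hz

C_TOTAL = 1e9 * 0.1e-15   # ~100 nF of total switched capacitance (assumed)

# Nominal operating point: 1.25 V, 3 GHz, everything switching (alpha = 1).
print(dynamic_current(C_TOTAL, 1.25, 1.0, 3e9))   # -> 375.0 (amps)

# Point 1's thought experiment: 1 V and a 1000x slower clock (3 MHz).
print(dynamic_current(C_TOTAL, 1.0, 1.0, 3e6))    # -> 0.3 (amps)
```

The second call is the point of the thought experiment: current scales linearly with both \$v\$ and \$f\$, which is why dropping the voltage and clock brings the supply current down toward the 1 A scale.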

Edit: so, http://www.synopsys.com/community/universityprogram/documents/article-iitk/25nmtriplegatefinfetswithraisedsourcedrain.pdf has a figure for the gate capacitance of a 25 nm FinFET. I'm just going to call it 0.1 fF for the sake of keeping things simple. It varies with bias voltage, and it will certainly vary with transistor size (transistors are sized according to their purpose in the circuit; they are not all the same size! Larger transistors are 'stronger' in that they can switch more current, but they also have higher gate capacitance and take more current to drive).

Plugging in 1.25 volts, 0.1 fF, 3 GHz, and \$\alpha = 1\$, the result is \$0.375\ \mu A\$ per transistor. Multiply that by 1 billion and you get 375 A. That's the required average gate current (charge per second into the gate capacitance) to switch 1 billion of these transistors at 3 GHz. It doesn't count 'shoot-through,' which occurs during switching in CMOS logic. It's also an average, so the instantaneous current can vary a lot - think of how the current draw asymptotically decreases as an RC circuit charges up. Bypass capacitors on the substrate, package, and circuit board will smooth out this variation. Obviously this is just a ballpark figure, but it seems to be the right order of magnitude. It also does not account for leakage current or charge stored in other parasitics (i.e. wiring).
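To illustrate the RC remark, here is a sketch of peak-versus-average current for a single gate. The 1 kOhm driver/interconnect resistance is a made-up placeholder, chosen only to show the shape of the effect:

```python
# Sketch: why the instantaneous current far exceeds the average. Charging one
# assumed 0.1 fF gate through an assumed 1 kOhm of driver plus interconnect
# resistance (both placeholder values) draws i(t) = (V/R) * exp(-t/(R*C)).
import math

R = 1e3        # ohms, assumed driver + interconnect resistance (placeholder)
C = 0.1e-15    # farads, assumed gate capacitance (from the edit above)
V = 1.25       # volts

i_peak = V / R                 # current at the instant switching starts
tau = R * C                    # RC time constant
i_avg = C * V * 3e9            # average over a 3 GHz clock cycle (alpha = 1)

print(f"peak: {i_peak * 1e3:.2f} mA, after one tau: "
      f"{i_peak * math.exp(-1) * 1e3:.2f} mA (tau = {tau * 1e12:.1f} ps)")
print(f"average: {i_avg * 1e6:.3f} uA, peak/average ~ {i_peak / i_avg:.0f}x")
```

That spiky, several-thousand-fold-above-average demand is exactly what the bypass capacitors on the die, package, and board exist to supply.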

In most devices, \$\alpha\$ will be much less than 1, as many of the transistors are idle on any given clock cycle. How much less depends on what the transistors do. Transistors in the clock distribution network have \$\alpha = 1\$, as they switch twice on every clock cycle. For something like a binary counter, the LSB has \$\alpha = 0.5\$ as it switches once per clock cycle, the next bit has \$\alpha = 0.25\$ as it switches half as often, and so on. For something like a cache memory, though, \$\alpha\$ can be very small. Take a 1 MB cache, for example. Built from 6T SRAM cells, it has roughly 50 million transistors just to store the data, plus more for the read and write logic, demultiplexers, etc. However, only a handful ever switch on a given clock cycle. Let's say the cache line is 128 bytes and a new line is written on every cycle. That's 1024 bits. Assuming the old cell contents and the new data are both random, 512 bits are expected to flip. That's 3072 transistors (6 per flipped cell) out of roughly 50 million, or \$\alpha \approx 0.000061\$. Note that this covers only the memory array itself; the support circuitry (decoders, read/write logic, sense amps, etc.) will have a much larger \$\alpha\$. This is why cache memory power consumption is usually dominated by leakage current - that is a LOT of idle transistors just sitting around leaking instead of switching.
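For completeness, the same activity-factor estimate as a runnable check (using 1 MiB and the six-transistors-per-flipped-cell accounting from above):

```python
# Activity factor for the 1 MB (MiB here) cache example above, 6T SRAM cells.
data_bits = 1 * 1024 * 1024 * 8      # 1 MiB of stored bits
transistors = data_bits * 6          # ~50.3 million storage transistors

line_bits = 128 * 8                  # 128-byte line written per cycle
expected_flips = line_bits // 2      # random old/new data: half the bits flip
switching = expected_flips * 6       # 6 transistors per flipped cell -> 3072

alpha = switching / transistors
print(f"alpha = {alpha:.6f}")        # ~0.000061
```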