I know that a typical desktop CPU (from Intel or AMD) can consume 45–140 W, and that many CPUs operate at core voltages around 1.2 V or 1.25 V.
So, assuming a CPU operating at 1.25 V with a TDP of 80 W, it draws 80 W / 1.25 V = 64 A (a lot of amps).
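The arithmetic behind that figure is just Ohm's-law-style power bookkeeping, P = V × I. A minimal sketch, using the 80 W / 1.25 V numbers assumed above:

```python
# Back-of-envelope: supply current drawn by a CPU at a given TDP and core voltage.
# The 80 W and 1.25 V figures are the assumptions from the question; real CPUs vary.
tdp_watts = 80.0
core_voltage = 1.25

current_amps = tdp_watts / core_voltage  # P = V * I  =>  I = P / V
print(f"{current_amps:.0f} A")  # -> 64 A
```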
-
Why does a CPU need more than 1 A in its circuit (assuming FinFET transistors)? I know that the CPU idles most of the time, and that the 64 A is drawn in "pulses" because the CPU is clocked, but why can't a CPU operate at 1 V and 1 A?
-
Approximately how many amps does a small, fast FinFET transistor (say, 14 nm switching at 3.0 GHz) need?
-
Does higher current make transistors switch on and/or off more quickly?
Best Answer
Edit: so, http://www.synopsys.com/community/universityprogram/documents/article-iitk/25nmtriplegatefinfetswithraisedsourcedrain.pdf gives a figure for the gate capacitance of a 25 nm FinFET. I'm just going to call it 0.1 fF for the sake of keeping things simple. It varies with bias voltage, and it will certainly vary with transistor size. Transistors are sized according to their purpose in the circuit, so not all of them will be the same size: larger transistors are 'stronger' in that they can switch more current, but they also have higher gate capacitance and require more current to drive.
Plugging 1.25 V, 0.1 fF, 3 GHz, and \$\alpha = 1\$ into \$I = \alpha f C V\$ gives \$0.375\ \mu A\$ per transistor. Multiply that by 1 billion and you get 375 A. That's the required average gate current (charge per second into the gate capacitance) to switch 1 billion of these transistors at 3 GHz. It doesn't count 'shoot-through' current, which occurs during switching in CMOS logic. It's also an average, so the instantaneous current can vary a lot; think of how the current draw asymptotically decreases as an RC circuit charges up. Bypass capacitors on the substrate, package, and circuit board will smooth out this variation. Obviously this is just a ballpark figure, but it seems to be the right order of magnitude. It also does not account for leakage current or charge stored in other parasitics (i.e. wiring).
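The estimate above can be reproduced in a few lines. This is a sketch of the \$I = \alpha f C V\$ calculation under the same assumptions (0.1 fF gate capacitance, 1.25 V, 3 GHz, one billion transistors all switching every cycle):

```python
# Average gate-charging current per transistor: I = alpha * f * C * V
# (a charge of C*V moved per switching event, f events per second).
alpha = 1.0        # activity factor: switches on every clock cycle
f = 3e9            # clock frequency, Hz
c_gate = 0.1e-15   # assumed gate capacitance, farads (0.1 fF)
v = 1.25           # supply voltage, volts
n = 1_000_000_000  # number of transistors

i_per_transistor = alpha * f * c_gate * v
print(f"{i_per_transistor * 1e6:.3f} uA per transistor")  # -> 0.375 uA per transistor
print(f"{i_per_transistor * n:.0f} A total")              # -> 375 A total
```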
In most devices, \$\alpha\$ will be much less than 1, because many of the transistors are idle on any given clock cycle. It varies with the function of the transistors. Transistors in the clock distribution network have \$\alpha = 1\$, as they switch twice on every clock cycle. In a binary counter, the LSB has \$\alpha = 0.5\$, as it switches once per clock cycle; the next bit has \$\alpha = 0.25\$, as it switches half as often; and so on.

For something like a cache memory, however, \$\alpha\$ can be very small. Take a 1 MB cache built with 6T SRAM cells: that's roughly 50 million transistors just to store the data, with more for the read and write logic, demultiplexers, etc. Only a handful of them ever switch on a given clock cycle. Say the cache line is 128 bytes (1024 bits) and a new line is written on every cycle. If the cell contents and the new data are both random, we expect 512 bits to flip. That's 3072 transistors out of roughly 50 million, or \$\alpha \approx 0.000061\$. Note that this is only for the memory array itself; the support circuitry (decoders, read/write logic, sense amps, etc.) will have a much larger \$\alpha\$. Hence why cache memory power consumption is usually dominated by leakage current - that is a LOT of idle transistors just sitting around leaking instead of switching.
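The cache activity factor works out as follows. A sketch of the arithmetic, assuming 1 MB means 1 MiB (2^20 bytes) and the 6T-SRAM, 128-byte-line, random-data assumptions from the example above:

```python
# Activity factor for the 1 MiB 6T-SRAM cache example.
bits_stored = 2**20 * 8        # bits in 1 MiB
transistors = bits_stored * 6  # 6 transistors per SRAM cell (~50.3 million)

line_bits = 128 * 8            # 128-byte cache line = 1024 bits
flipped_bits = line_bits // 2  # random old/new data: expect half the bits to flip
switching = flipped_bits * 6   # 6 transistors per flipped cell = 3072

alpha = switching / transistors
print(f"alpha = {alpha:.6f}")  # -> alpha = 0.000061
```

Note that the "roughly 50 million" transistor count and \$\alpha = 0.000061\$ only line up if 1 MB is read as 2^20 bytes; with 10^6 bytes the count is 48 million and \$\alpha \approx 0.000064\$, which is the same order of magnitude either way.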