CMOS – Why Decreasing Supply Voltage Reduces Maximum Circuit Frequency

cmos, digital-logic, integrated-circuit, vlsi

In CMOS circuit design, we know dynamic power is proportional to \$C V_{dd}^2 f\$, so the most effective way to reduce dynamic power is to reduce \$V_{dd}\$, since it enters quadratically.
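As a quick numerical illustration (not from the book; the activity factor, capacitance, and frequency below are made-up values), here is how the quadratic dependence on \$V_{dd}\$ plays out:

```python
# Illustrative only: how the quadratic Vdd dependence dominates dynamic power.
# alpha, C, and f are assumed values, not taken from the book.
alpha = 0.1   # switching activity factor (assumed)
C = 1e-9      # total switched capacitance in farads (assumed)
f = 1e9       # clock frequency in hertz (assumed)

def p_dyn(vdd):
    """Dynamic power P = alpha * C * Vdd^2 * f."""
    return alpha * C * vdd**2 * f

print(p_dyn(1.0))  # 0.100 W at Vdd = 1.0 V
print(p_dyn(0.7))  # 0.049 W at Vdd = 0.7 V -- roughly half the power for a 30% drop
```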

However, according to the textbook,

Keeping the same clock frequency becomes unsustainable as \$V_{dd}\$ is continuously reduced, because the rise and fall times of signals stop meeting the noise margin of the gate.

Source: Page 47, Chapter 2, Section 2.5, Parallel Computer Organization and Design
By Michel Dubois, Murali Annavaram, Per Stenström.

I don't understand why the rise and fall times increase as the supply voltage decreases.

Best Answer

Decreasing the supply voltage decreases the maximum frequency at which the digital system still operates as intended.

This is because the equivalent resistance \$R_{eq}\$ of the CMOS transistors increases as \$V_{dd}\$ decreases. As the equivalent resistance increases, the propagation delay of the inverter increases, since the propagation delay is proportional to the equivalent resistance.

The average propagation delay is the average of the high-to-low and low-to-high propagation times. The output rise and fall times depend on the propagation delay of the gate.

The propagation delay of the inverter is approximately

\$t_p = 0.69 \, R_{eq} \, (C_{int} + C_{ext})\$,

where \$C_{int}\$ and \$C_{ext}\$ are the effective internal and external capacitances of the gate.

As the propagation delay increases, the maximum clock frequency the circuit can run at has to decrease; otherwise it would violate the dynamic discipline of digital circuits. Further reading on the dynamic discipline: https://computationstructures.org/notes/sequential_logic/notes.html
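To make the trend concrete, here is a small sketch (my own illustration, not part of the answer) using an alpha-power-law style delay approximation, \$t_p \propto C_L V_{dd}/(V_{dd}-V_T)^\alpha\$, which is one common way to capture how the equivalent resistance, and therefore the gate delay, grows as \$V_{dd}\$ approaches the threshold voltage. All constants are assumed:

```python
# Sketch: delay grows and usable clock frequency shrinks as Vdd drops.
# Alpha-power-law style model t_p ~ k * C_L * Vdd / (Vdd - Vt)^a.
# k, C_L, Vt, a, and logic_depth are assumed illustrative values.
k = 1.0        # technology-dependent constant (normalized, assumed)
C_L = 1.0      # load capacitance C_int + C_ext (normalized, assumed)
Vt = 0.3       # threshold voltage in volts (assumed)
a = 1.3        # velocity-saturation exponent, typically between 1 and 2 (assumed)

def t_p(vdd):
    """Propagation delay: grows sharply as Vdd approaches Vt."""
    return k * C_L * vdd / (vdd - Vt) ** a

def f_max(vdd, logic_depth=10):
    """Max clock frequency if the critical path is 'logic_depth' gate delays."""
    return 1.0 / (logic_depth * t_p(vdd))

for vdd in (1.2, 1.0, 0.8, 0.6, 0.4):
    print(f"Vdd={vdd:.1f} V  t_p={t_p(vdd):.2f}  f_max={f_max(vdd):.3f} (relative units)")
```

Running it shows the delay rising (and the usable clock frequency falling) faster and faster as \$V_{dd}\$ gets close to \$V_T\$, which is exactly the trade-off DVFS exploits.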

Modern computers use dynamic voltage and frequency scaling (DVFS): the clock frequency is reduced whenever the supply voltage is reduced.


I also found additional explanation online:

[Figure 5.12(b): voltage-transfer characteristics of a CMOS inverter at very low supply voltages]

Fig. 5.12(b) shows the voltage-transfer characteristic of a CMOS inverter for supply voltages of 200 mV, 100 mV, and 50 mV (while keeping the transistor thresholds at the same level). Amazingly enough, we still obtain an inverter characteristic, even though the supply voltage is not large enough to turn the transistors on! The explanation lies in the sub-threshold operation of the transistors: the sub-threshold currents are sufficient to switch the gate between low and high levels, and provide enough gain to produce an acceptable voltage-transfer characteristic.

The very low value of the switching currents ensures very slow operation, but this might be acceptable for some applications (watches, for example).

At around 100 mV, we start observing a major deterioration of the gate characteristic: \$V_{OL}\$ and \$V_{OH}\$ are no longer at the supply rails, and the gain in the transition region approaches 1. The latter turns out to be a fundamental show-stopper. To achieve sufficient gain for use in a digital circuit, the supply voltage must be at least a couple of times the thermal voltage \$\phi_{T} = kT/q\$ (≈ 25 mV at room temperature). Below roughly this same voltage, thermal noise becomes an issue as well, potentially resulting in unreliable operation.

\$V_{DD,\mathrm{min}} > 2 \ldots 4 \; kT/q\$

The above equation presents a true lower bound on supply scaling. It suggests that the only way to get CMOS inverters to operate below 100 mV is to reduce the ambient temperature, or in other words to cool the circuit.
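For reference, the bound is easy to evaluate numerically (the constants below are standard physical constants; the 2x to 4x multiplier is the range quoted above):

```python
# Quick check of the thermal-voltage lower bound on Vdd.
k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C

def thermal_voltage(T):
    """kT/q in volts."""
    return k_B * T / q

T_room = 300.0                      # kelvin
phi_T = thermal_voltage(T_room)     # about 0.0259 V
print(f"kT/q at {T_room:.0f} K: {phi_T*1e3:.1f} mV")
print(f"Vdd_min range: {2*phi_T*1e3:.0f} mV .. {4*phi_T*1e3:.0f} mV")
# Cooling the circuit lowers kT/q proportionally, which is why the text says
# operation below ~100 mV requires reducing the ambient temperature.
print(f"kT/q at 150 K: {thermal_voltage(150.0)*1e3:.1f} mV")
```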

Source for the above figures and explanation: http://bwrcs.eecs.berkeley.edu/Classes/icdesign/ee141_f01/Notes/chapter5.pdf