Amplifier Power Comparison – Amplifier Average Power vs. Peak Power


This is somewhat of a continuation of my question here, where I asked how a website derived the required power for an amplifier. The resolution was that it came down to calculating the average power consumed by both the amp and the load over a sine-wave cycle.

My next question: when determining the ratings for an amplifier's power supply, why would we use the average power and not the peak power?

Best Answer

In audio amplifiers, the power supply capacitors are periodically recharged by the source of power (a transformer and bridge rectifier, the old-fashioned way, or a more modern switching supply). The capacitors smooth out the voltage-rail peaks and dips and can supply the peak instantaneous power to the load. Their voltage will droop a little, but the negative feedback in the amplifier keeps the output linear with respect to the input source. Meanwhile, the power source feeds the capacitors a moment later and recharges them back up. So only the average power needs to be supported for audio.
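To put rough numbers on that droop (all values here are assumed for illustration, not taken from the question): with full-wave rectification the reservoir capacitors get topped up every half mains cycle, and between top-ups they discharge roughly linearly at the amplifier's average current draw.

```python
# Rough sketch of reservoir-capacitor droop between recharges.
# All values are assumed for illustration: 50 Hz mains, full-wave
# rectification, a 10,000 uF reservoir, and 1 A average draw.
f_mains = 50.0                    # Hz, assumed mains frequency
t_between = 1 / (2 * f_mains)     # s, caps recharged every half-cycle
C = 10_000e-6                     # F, assumed reservoir capacitance
I_avg = 1.0                       # A, assumed average amplifier draw

# Approximating the draw as constant over one half-cycle: dV = I * dt / C
droop = I_avg * t_between / C
print(f"Droop between recharge peaks: {droop:.2f} V")  # ~1.0 V
```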

On the other hand, if you were to use the amplifier as a DC amplifier, which is effectively just a voltage-controlled power supply able to supply both plus and minus voltages to a load, then you would need to be able to support the peak power dissipation.

For example, a \$5\:\text{W}\$ audio amplifier driving an \$8\:\Omega\$ speaker might use \$\pm 12\:\text{V}\$ rails and have to support two output quadrants, each with about \$1.8\:\text{W}\$ of average dissipation. Including the \$5\:\text{W}\$ delivered to the load, this is perhaps about \$8.6\:\text{W}\$ total.
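Here's a quick sketch of the arithmetic behind those figures, assuming an ideal class-B output stage driven by a full-amplitude sine (each rail then sources a half-wave-rectified sine, whose average is \$I_\text{pk}/\pi\$):

```python
import math

# Values from the example above: 5 W into 8 ohms from +/-12 V rails,
# assuming an ideal class-B output stage and a full-amplitude sine.
P_load = 5.0     # W, average power delivered to the speaker
R_load = 8.0     # ohms
V_rail = 12.0    # V, magnitude of each rail

V_pk = math.sqrt(2 * P_load * R_load)  # peak output voltage, ~8.94 V
I_pk = V_pk / R_load                   # peak output current, ~1.12 A

# Each rail sources a half-wave-rectified sine: average = I_pk / pi.
P_rail = V_rail * I_pk / math.pi       # drawn from one rail, ~4.27 W
P_quadrant = P_rail - P_load / 2       # one quadrant's dissipation, ~1.77 W

P_total = P_load + 2 * P_quadrant      # average the supply must deliver
print(f"Per-quadrant dissipation: {P_quadrant:.2f} W")   # ~1.8 W
print(f"Total average supply power: {P_total:.2f} W")    # ~8.5 W
```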

But if this were used as a voltage-controlled DC power amplifier, the worst case (output held at half the rail voltage) would be \$4.5\:\text{W}\$ dissipated in the load and another \$4.5\:\text{W}\$ in the quadrant being used: a total of \$9\:\text{W}\$. That is slightly more for the power supply, and each quadrant must now be able to dissipate roughly \$2.5\times\$ as much as in the audio case.
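And a similar sketch for the DC case, sweeping the output voltage to confirm that the pass device's dissipation peaks with the output held at half the rail:

```python
V_rail = 12.0   # V, same example rail
R_load = 8.0    # ohms, same example load

def p_device(v_out):
    # The pass device drops (V_rail - v_out) while sourcing v_out / R_load.
    return (V_rail - v_out) * v_out / R_load

# Sweep the output from 0 to 12 V in 10 mV steps.
v_worst = max((i / 100 for i in range(int(V_rail * 100) + 1)), key=p_device)

print(f"Worst case at V_out = {v_worst:.1f} V")                   # 6.0 V
print(f"Device dissipation: {p_device(v_worst):.2f} W")           # 4.50 W
print(f"Load dissipation:   {v_worst**2 / R_load:.2f} W")         # 4.50 W
print(f"Total: {p_device(v_worst) + v_worst**2 / R_load:.2f} W")  # 9.00 W
```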
