Electronics – Understanding capacitors


Say I've got a 5 V circuit that draws 30 W. One watt-second is one joule, so to run this circuit for 1 s I need 30 joules of energy.

The amount of energy stored in a capacitor is \$\frac{1}{2} CV^2\$; solving for \$C\$, I'd need a 2.4 farad capacitor to run my circuit for 1 s. That's pretty big.
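As a quick sanity check of that arithmetic (a minimal sketch, assuming the capacitor could be discharged all the way from 5 V down to 0 V, which a real circuit can't do):

```python
# Energy needed: 30 W for 1 s = 30 J, supplied from a 5 V capacitor.
energy_needed = 30.0   # J
voltage = 5.0          # V

# E = 1/2 * C * V^2  ->  C = 2E / V^2
capacitance = 2 * energy_needed / voltage**2
print(f"C = {capacitance:.1f} F")   # C = 2.4 F
```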

But say my circuit only drew 30 W at an absolute maximum, and only for very brief periods, say 0.25 s at a time. Mostly it would be drawing 20 W or so, which means a 25 W power supply would usually be plenty to drive it. Could I use a \$0.1\,\mathrm{F}\$ capacitor \$\left(= 2.4\,\mathrm{F} \times \frac{0.25\,\mathrm{s}}{1\,\mathrm{s}} \times \frac{5\,\mathrm{W}}{30\,\mathrm{W}}\right)\$ to "take up the slack" of that extra 5 W the power supply isn't rated for?
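Written out numerically (a minimal sketch, still assuming the full stored energy between 5 V and 0 V is usable):

```python
# Energy shortfall the capacitor would have to cover during a burst.
supply_power = 25.0    # W, what the supply can deliver continuously
peak_power = 30.0      # W, worst-case draw
burst_time = 0.25      # s, duration of the peak
voltage = 5.0          # V

shortfall_energy = (peak_power - supply_power) * burst_time  # 1.25 J
capacitance = 2 * shortfall_energy / voltage**2
print(f"C = {capacitance:.2f} F")   # C = 0.10 F
```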

Have I got the theory right, and would this work in practice?

Best Answer

Your theory is correct, but not complete.

When you start discharging a capacitor, its voltage drops. When you draw 50% of the energy from a capacitor, its voltage drops by about 30% (energy goes as \$V^2\$, so \$V = V_0\sqrt{0.5} \approx 0.71\,V_0\$). With a power supply connected in parallel, the capacitor voltage will not drop that far, because current will simply start flowing from the supply instead, so you will not be able to get much energy out of the capacitor.
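To see the square-root relationship between remaining energy and voltage (a small sketch, using the 5 V figure from the question):

```python
import math

# E = 1/2 * C * V^2, so the voltage scales with the square root of the
# remaining energy, independent of C.
v_start = 5.0                   # V, fully charged
energy_drawn_fraction = 0.5     # draw 50% of the stored energy

v_remaining = v_start * math.sqrt(1 - energy_drawn_fraction)
print(f"{v_remaining:.2f} V")   # ~3.54 V, i.e. roughly a 30% voltage drop

# A stiff 5 V supply in parallel holds the capacitor near 5 V, so only
# the small slice of energy between 5 V and the supply's droop voltage
# is actually available from the capacitor.
```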

Many power supplies have a rating that is determined by temperature.

If you have a power supply with a transformer, you can overload it by 20% with no problems at all if you "let it rest" afterwards.

If you have a supply based on a voltage converter (transformerless), this is more complicated: you may have problems with voltage stability, voltage drops, temperature, etc. In general, you should not overload a transformerless power supply, even for a short time.

And one more thing about big capacitors: they can draw very large currents while they are being charged. Supercapacitors should be charged through a current limiter.
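As an illustration of why a limiter matters, a hedged sketch of sizing a simple series-resistor limiter (the current limit and the resistor approach are illustrative assumptions, not from the answer):

```python
# Worst-case inrush happens when the capacitor is at 0 V, so the series
# resistor must drop the full supply voltage at the allowed current.
v_supply = 5.0       # V
i_max = 2.0          # A, assumed maximum charge current the supply tolerates
capacitance = 0.1    # F, the capacitor from the question

r_limit = v_supply / i_max      # ohms needed to cap the inrush current
tau = r_limit * capacitance     # charging time constant
print(f"R >= {r_limit:.1f} ohm, tau = {tau:.2f} s (full charge in roughly 5*tau)")
```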