Electronic – SMPS design, requirement of generating a high frequency

ac, hvdc, switch-mode-power-supply, transformer, transmission

Question:
Is the requirement of converting 50 Hz to a higher frequency in an SMPS (Switched Mode Power Supply) just to reduce the transformer/inductor core size?

Background:

As we all know, in a typical SMPS design the AC input is first converted into
DC and then chopped at a much higher frequency (100 kHz to several MHz). There are problems with this approach, such as poor power factor, etc.

Why isn't it possible to just use the 50 Hz as it is? With a triac and an optocoupler we could do the same thing at 50 Hz, i.e. pass whole cycles through the triac when the optocoupler turns it on and block them otherwise. Why is that design not used in power supplies?

Off topic, but curious readers will ask this one soon: why are we using such a low frequency on transmission lines? Can't we make power-plant-rated transformers a size similar to a television?

[I originally took this idea from an Armstrong oscillator based SMPS.]

Best Answer

There is more to an SMPS than just a higher frequency. The duty cycle is also changed.

A higher switching frequency does this:

  • Smaller transformer/inductor
  • Faster transient load response time
  • Smaller output capacitors
  • Lower output voltage ripple
  • LOWER overall efficiency
  • HIGHER RF Noise emitted

Generally speaking, the lower the wattage of the SMPS, the higher the switching frequency. The disadvantages of a high frequency, lower efficiency and higher EMI, are easier to deal with when the overall wattage is lower. But there are plenty of exceptions to this rule.
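To put a rough number on the "smaller transformer/inductor" bullet above: Faraday's law ties the required turns times core area to the volt-seconds applied per switching cycle, so that product shrinks roughly in proportion to the frequency increase. The sketch below is only illustrative; the voltage, flux swing, and frequencies are assumed values, not taken from any particular design.

```python
# Rough illustration: required N * A_core scales inversely with frequency.
# From V = N * A_core * dB/dt, applying V for a time t_on with flux swing dB
# gives N * A_core = V * t_on / dB. Values below are illustrative assumptions.

V = 325.0        # applied voltage (V), ~rectified 230 VAC peak
dB = 0.2         # allowed flux swing (T), a ferrite-ish assumption
for f in (50, 100e3):          # mains frequency vs. a typical SMPS frequency
    t_on = 0.5 / f             # half-period "on" time (s)
    n_ae = V * t_on / dB       # required turns * core area (turn * m^2)
    print(f"{f:>8.0f} Hz: N*Ae = {n_ae * 1e4:10.2f} turn*cm^2")

# The required N*Ae drops by the frequency ratio (2000x here), which is why
# a 100 kHz transformer can be so much smaller than a 50 Hz one.
```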

But just using the normal 50/60 Hz AC waveform from the wall does not really help you. The reason is that you still need to chop the waveform to vary the duty cycle of the signal going to the transformer. In the design of an SMPS, the turns ratio of the transformer gets you close to the ideal output voltage, but not close enough. Varying the duty cycle in real time allows the output voltage to be trimmed to the proper value (usually within a couple of percent). Without this duty-cycle adjustment the output voltage might vary by 10% or more.
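To make the "turns ratio gets you close, duty cycle does the fine trim" idea concrete, here is a minimal sketch using the ideal continuous-conduction flyback relation Vout = Vin * (Ns/Np) * D / (1 - D). The turns ratio, input voltages, and 5 V target are assumed values for illustration only.

```python
# Ideal flyback (CCM) output: Vout = Vin * (Ns/Np) * D / (1 - D).
# The turns ratio sets the ballpark; the controller trims D in real time
# to hold the output as Vin and the load vary. Values are illustrative.

def flyback_vout(vin, ns_over_np, duty):
    return vin * ns_over_np * duty / (1.0 - duty)

def duty_for_target(vin, ns_over_np, vtarget):
    # Solve vtarget = vin * n * D / (1 - D) for D
    k = vtarget / (vin * ns_over_np)
    return k / (1.0 + k)

ns_over_np = 1.0 / 20.0              # turns ratio chosen for roughly 5 V out
for vin in (300.0, 325.0, 375.0):    # rectified mains sag/nominal/surge
    d = duty_for_target(vin, ns_over_np, 5.0)
    print(f"Vin={vin:5.1f} V -> D={d:.3f}, "
          f"Vout={flyback_vout(vin, ns_over_np, d):.2f} V")
```

The printout shows the duty cycle moving only a few percent as the input swings, while the output stays pinned at the target; that small real-time trim is what the fixed turns ratio alone cannot give you.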

But that's not all. Varying the duty cycle of what is already an AC waveform is tricky. Sure, it can be done but why? It is easier to convert the AC input to DC and then chop it than to chop the AC input itself. And there really is no benefit to keeping the original AC frequency.

Which brings us to power factor correction. Feeding the original 50/60 Hz waveform into the transformer really does not help the power factor. The supply would still mainly draw power near the peaks of the AC waveform and not when the AC input is at a lower voltage.
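A rough numeric sketch of that last point, assuming a rectifier front end that only draws current in narrow windows around the mains peaks (the conduction threshold and pulse shape are assumed values): computing real power over apparent power gives a figure well under 1, which is exactly the power factor problem.

```python
import numpy as np

# Power factor of a peak-charging rectifier front end (no PFC), roughly:
# current only flows near the AC peaks, so I_rms is high relative to the
# real power delivered. The 95%-of-peak conduction window is an assumption.

t = np.linspace(0.0, 0.02, 20000, endpoint=False)   # one 50 Hz cycle
v = 325.0 * np.sin(2 * np.pi * 50 * t)               # 230 VAC mains

# Rectangular current pulses only where |v| exceeds ~95% of the peak
i = np.where(np.abs(v) > 0.95 * 325.0, np.sign(v) * 1.0, 0.0)

p_real = np.mean(v * i)                               # average real power
s_apparent = np.sqrt(np.mean(v**2)) * np.sqrt(np.mean(i**2))
print(f"power factor ~ {p_real / s_apparent:.2f}")    # well below 1.0

# A sinusoidal current in phase with v would give a power factor of 1.0;
# the narrow peak pulses drag it down, which is why active PFC front ends
# deliberately spread the input current over the whole cycle.
```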