If I want to make a sine-wave inverter whose output AC voltage is lower than the input DC voltage, how can I show that using pulse-width modulation will be more efficient than just using a transistor as an amplifier with a sinusoidal gate drive signal at the desired frequency?
I've heard that transistors generally operate efficiently when fully "on" or fully "off", but not in the intermediate regime. My impression is that this is true for both BJTs and FETs, but I'm not sure. This rule of thumb is consistent with other things I've learned, like the fact that CMOS integrated circuits tended to be lower power than equivalent TTL chips, and the fact that switched-mode power supplies are efficient and popular. I've never really challenged this idea (that switching a MOSFET at a given duty cycle is more efficient than using it as an amplifier). I spent about an hour trying to fact-check it earlier and didn't get anywhere.
What I've done so far: Firstly, I decided to focus on MOSFETs. Secondly, as an example I decided to look at a specific n-channel MOSFET, a Toshiba K3767, because I have an LD7550-based switched-mode power supply that uses a K3767 as its power transistor. Thirdly, a bit of internet reading tells me the two main losses will be switching losses and conduction losses. So I guess I could do two calculations: one where I switch the transistor with a square wave at a high frequency, like the 65 kHz described in the LD7550 datasheet, and another where I drive the K3767 with a sine wave at a low frequency like 60 Hz.
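Here's the back-of-envelope calculation I'm imagining for the switching case, as a Python sketch. All the device numbers below are placeholders I made up, not values from the K3767 datasheet, so the absolute numbers shouldn't be trusted; I'm just trying to capture the structure of the two loss terms:

```python
# Rough loss estimate for a MOSFET used as a switch vs. as a linear amplifier.
# All device numbers are ASSUMED placeholders, not taken from any datasheet.

V_in = 48.0      # DC input voltage (V) -- assumed example value
I_load = 1.0     # average load current (A) -- assumed
R_ds_on = 0.5    # MOSFET on-resistance (ohm) -- placeholder
f_sw = 65e3      # switching frequency (Hz), per the LD7550 datasheet
t_sw = 50e-9     # combined rise + fall time per cycle (s) -- placeholder
D = 0.5          # duty cycle

# Switch-mode losses: conduction loss while on, plus a crude switching-loss
# estimate (triangular V*I overlap during each on/off transition).
P_cond = I_load**2 * R_ds_on * D
P_sw = 0.5 * V_in * I_load * t_sw * f_sw
P_switching_total = P_cond + P_sw

# Linear ("amplifier") operation delivering the same average output voltage:
# the transistor drops the rest of V_in at the full load current.
V_out_avg = D * V_in
P_linear = (V_in - V_out_avg) * I_load

print(f"switch-mode loss: {P_switching_total:.3f} W")
print(f"linear-mode loss: {P_linear:.3f} W")
```

With these made-up numbers the switch-mode loss comes out well under a watt while the linear-mode loss is tens of watts, which is the kind of gap I'd expect if the rule of thumb is right.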
Am I on the right track here? Is there some really obvious answer, like I²R losses will be huge if I use the MOSFET as an amplifier with a sine wave on the gate?
How can I show that rapidly switching a transistor between on and off at a given duty cycle is more efficient than operating it as an amplifier to achieve the same average output?
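To make the question concrete, here's the kind of idealized comparison I'd like to formalize, sweeping over a few output levels (again, all numbers are invented placeholders, not measured or datasheet values):

```python
# Sketch: efficiency of switch-mode vs. linear operation at several output
# levels. Idealized model with ASSUMED device numbers, not from any datasheet.

V_in = 48.0       # DC input voltage (V) -- assumed
I_load = 1.0      # load current (A) -- assumed
R_ds_on = 0.5     # MOSFET on-resistance (ohm) -- placeholder
P_sw_fixed = 0.1  # fixed switching + gate-drive loss (W) -- placeholder

for frac in (0.25, 0.5, 0.75):
    V_out = frac * V_in
    P_out = V_out * I_load
    # Linear: the transistor dissipates the full drop at the load current.
    P_loss_linear = (V_in - V_out) * I_load
    # Switching: conduction loss scales with duty cycle, plus the fixed loss.
    P_loss_switch = I_load**2 * R_ds_on * frac + P_sw_fixed
    eff_lin = P_out / (P_out + P_loss_linear)
    eff_sw = P_out / (P_out + P_loss_switch)
    print(f"V_out={V_out:5.1f} V  linear eff={eff_lin:.0%}  switch eff={eff_sw:.0%}")
```

If this toy model is roughly right, the linear efficiency is just V_out/V_in (so it gets terrible at low output voltages), while the switch-mode efficiency stays high everywhere. What I don't know is whether this model is fair, or whether I'm leaving out something important.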