Here's the picture:

It's significantly more complex than what I'm going to say, but bear with me:
Firstly, I'm going to ask you to imagine that D1 is a switch like SW1, but one that closes when SW1 opens. What you then get at point (2) on the circuit is a square wave: it has a peak of Vin (let's say 10V) and rapidly drops to 0V when SW1 opens (remember, I've asked you to consider that D1 is also a switch). This repeats at some arbitrarily high frequency such as 100kHz.
Let's also say that Vd (your output) is desired to be 5V. Now if the 10V squarewave at (2) spent half of its time at 10V and half of its time at 0V then the average value would be 5V i.e. exactly what you want.
Should you in fact require Vd to be 3.3V then the squarewave at (2) would spend about a third of its time at 10V and about two-thirds of its time at 0V. (Remember I'm asking you to consider that D1 is a switch that closes when SW1 opens).
So you've got a square wave at (2) that has a duty cycle of (say) one-third AND now you have a low-pass filter formed by L1 and C1 - the output from this is pretty much a dc voltage at 3.3V.
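The duty-cycle arithmetic above can be sketched in a few lines (a minimal illustration only - the numbers match the example, and the function name is made up):

```python
# Ideal buck: the L1/C1 low-pass filter recovers the average of the
# square wave at point (2), which is just duty_cycle * Vin.

def required_duty_cycle(v_in, v_out):
    """Fraction of each cycle SW1 must be closed for a given output."""
    return v_out / v_in

v_in = 10.0
d = required_duty_cycle(v_in, 3.3)      # one-third, as in the example
v_avg = d * v_in + (1 - d) * 0.0        # average of the square wave at (2)
print(f"D = {d:.2f}, average at (2) = {v_avg:.2f} V")
```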
Then, you put a load resistor (\$R_L\$) on the output - does this alter the average output voltage? The answer is virtually "no", because you are using an inductor and capacitor to form a low-pass circuit and, provided the inductor's internal resistance isn't too big, there won't be much of a dc volt-drop across L1 and you'll still get 3.3V at the output.
However, if you load the output too much, the 3.3V will start to droop, and this is when the control circuit takes over and applies a little more than a one-third duty cycle to SW1. This control loop is fundamental to all buck regulator circuits, but it isn't necessary to understand its ins and outs to comprehend the basic working.
So far I've assumed D1 is a switch (like SW1), and what I've described is called a synchronous buck regulator - it uses two MOSFETs: one for SW1 and one in place of D1. I think it's easier to approach synchronous buck regulators first, then move on to understanding standard (but less efficient) buck regulators.
Standard buck regulators have D1 as a plain diode (not a switch); we would like D1 to behave like a switch (as described above), but it doesn't always do this. For a start, it drops about 0.7V when it is acting like a switch (maybe a bit less if you use a Schottky diode). It conducts like a switch, but the 0.7V across it loses energy in the form of heat - it can never match the efficiency of the synchronous regulator.
How does D1 behave like a switch? When SW1 is closed, a ramping current runs through L1, and when SW1 eventually opens, the back-EMF from L1 drives (2) negative in order to keep the current through L1 flowing. This is the nature of inductors - if this is a little alien to you, go and study inductors. The voltage at (2) rapidly falls below 0V until D1 starts to conduct - now the circuit is behaving like a synchronous regulator (albeit with a 0.7V drop across D1). SW1 eventually starts conducting again and the cycle repeats.
Previously, the square wave described at point (2) was 10V at the peak and 0V at the bottom - now it is 10V at the peak but -0.7V at the bottom. L1 and C1 still form a low-pass filter (as mentioned previously) and, if the duty cycle of the square wave were one-third, the voltage at the output would be about 2.9V. The control system would take over and alter the duty cycle until the output was 3.3V.
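Working the numbers as a quick check (one-third duty cycle, 0.7V diode drop):

\$ V_{avg} = D \cdot V_{IN} + (1-D) \cdot (-0.7V) = \frac{1}{3} \cdot 10V - \frac{2}{3} \cdot 0.7V \approx 2.9V \$

The control loop then nudges the duty cycle up until the filtered output reads 3.3V.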
But there's a further problem when D1 is just a diode (and not a switch), and this really does make non-synchronous buck regulators quite tricky to get to grips with. If the load is very light, D1 doesn't act like a switch (as per a synchronous regulator) and the output voltage rises and rises, because the energy stored in L1 keeps getting pumped into C1. It's not a big problem in practice, because the control loop keeps this in check by applying ever-smaller duty cycles to the square wave at point (2).
I've got to the point in my explanation where I need to take a break, so if you manage to wade through this and want more, let me know. The upshot of what I would explain concerns storing energy in the inductor (when SW1 closes), transferring it to the capacitor, and making sure that the transferred energy \$\times\$ frequency (cycles per second) matches the power needed by the load resistor at the voltage the regulator intends to regulate at.
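As a taste of where that explanation goes (a sketch only, and it assumes L1 is fully discharged each cycle, i.e. discontinuous conduction): the energy stored in L1 per cycle is \$ \frac{1}{2} L I_{pk}^2 \$, so balancing the transferred energy against the load requires

\$ \frac{1}{2} L I_{pk}^2 \cdot f = \frac{V_d^2}{R_L} \$

where \$ I_{pk} \$ is the peak inductor current and f is the switching frequency.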
Synchronous regulators are far easier to explain!!
At first sight, your formula gives an energy (joules), not a power (watts)...
If the "...custom equipment has an oscilloscope which is monitoring the voltage across the inductor and the current through it...", then the inductor losses can be calculated directly from the measured values (i.e. straight from the definition of average power) as:
\$ P_{losses} = \frac{1}{T} \int_0^T v(t)\,i(t)\,dt \$ (the average value of the instantaneous power over one period),
where v(t) is the voltage waveform across the inductor, i(t) is the current waveform through it, and T is the period of these waveforms. Provided that the oscilloscope is a digitizing instrument, then, in principle, the corresponding voltage and current samples from within one period have to be multiplied together, summed, multiplied by the sample interval and divided by the period length T.
For instance, the trapezoidal integration method can be used:
If there are n equidistant samples \$ (v_i, i_i) \$, i = 1 to n, covering one period T, then the losses can be calculated as:
\$ P_{losses} = \frac{1}{n-1} \cdot \left( \frac{v_1 i_1 + v_n i_n}{2} + \sum_{i=2}^{n-1} v_i i_i \right) \$
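The procedure above can be sketched in code (assuming the scope exports equidistant samples; the synthetic data at the bottom is only a sanity check):

```python
# Average inductor loss from sampled waveforms, per the formula above:
# multiply voltage and current samples point-by-point, then integrate the
# instantaneous power over one period with the trapezoidal rule.

def inductor_losses(v_samples, i_samples, period):
    """Average power from n equidistant samples of v(t) and i(t) over one period."""
    p = [v * i for v, i in zip(v_samples, i_samples)]  # instantaneous power
    n = len(p)
    dt = period / (n - 1)                              # sample spacing
    # trapezoidal rule: endpoints weighted 1/2, interior points weighted 1
    energy = dt * ((p[0] + p[-1]) / 2 + sum(p[1:-1]))
    return energy / period                             # average power

# sanity check: a constant 1 V across a constant 2 A must give 2 W
print(inductor_losses([1.0] * 5, [2.0] * 5, 1e-5))     # 2.0
```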
2015-04-12, \$ \textbf 1^{st} \$ appendix
As I already stated at the very beginning, your formula is not okay. Firstly, the T in it is superfluous (it is already incorporated in the duty cycle, D). Let's look at it a bit more closely. It can be rewritten (omitting the T, of course) as:
\$ P_{AC} = [D \cdot (V_{IN}-V_{OUT})-(1-D) \cdot V_{OUT}] \cdot I_{RIPPLE} = (D \cdot V_{IN}-V_{OUT}) \cdot I_{RIPPLE} \$,
but is it correct now?
You wrote "…Since the inductor has some AC losses from eddy current and hysteresis, I took the power during the charging period and subtracted the power during the discharge period and what would be left is the loss…".
In principle, this idea is right in my opinion, but:
- The voltage across L during \$ t_{ON} \$ (term with D multiplier) is:
\$ V_{L\_on} = V_{IN}-V_{PMOS\_SWITCH\_ON}-V_{OUT} \$,
not just \$ V_{IN}-V_{OUT} \$ (the PMOS switch contribution isn't negligible).
- The voltage across L during \$ t_{OFF} \$ (term with (1-D) multiplier) is:
\$ V_{L\_off} = -(V_{OUT}+V_{DIODE\_SWITCH\_ON}) \$,
not just \$ -V_{OUT} \$ (nor is the diode switch's contribution negligible).
- If we presume that both of the above voltages are constant during their time intervals, and that the ripple current is a pure sawtooth waveform, then the value that must be used in place of the current in the calculation is \$ I_{RIPPLE}/2 \$ (i.e. its average value - this follows from the very first formula, because if v(t) = const., it can be factored out of the integral and what remains is the average value of the ripple current).
The resulting formula will be then:
\$ P_{AC} = [D \cdot V_{L\_on}+(1-D) \cdot V_{L\_off}] \cdot \frac{ I_{RIPPLE}}{2} \$
(\$ V_{L\_off} \$ is negative relative to \$ V_{L\_on} \$; we have to measure both voltages the same way, which is why the "+" operator is used in the formula)
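A minimal sketch of the resulting formula (the switch drops and ripple value below are made-up illustration numbers, not measurements):

```python
# Sketch of the corrected P_AC expression: both inductor voltages are
# measured the same way, so V_L_off comes out negative and the "+" inside
# the bracket is correct.

def p_ac(duty, v_in, v_out, v_pmos_on, v_diode_on, i_ripple):
    """[D*V_L_on + (1-D)*V_L_off] * I_ripple/2, per the formula above."""
    v_l_on = v_in - v_pmos_on - v_out        # across L during t_ON
    v_l_off = -(v_out + v_diode_on)          # across L during t_OFF (negative)
    return (duty * v_l_on + (1 - duty) * v_l_off) * i_ripple / 2

print(p_ac(duty=0.5, v_in=10.0, v_out=5.0,
           v_pmos_on=0.2, v_diode_on=0.4, i_ripple=0.5))
```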
It is questionable, however, whether the presumptions in the third point above are "sufficiently" valid, and how much they affect the accuracy of the result.
Best Answer
Yes
Yes, or low enough resistance to not affect the result much. Remember that all real inductors have parasitic series resistance, so being able to neglect a small resistance matters.
The capacitor C should be large enough to prevent \$V_o\$ changing very much.
Of course the output voltage is not perfectly constant. There is always some voltage ripple in this circuit. So the inductor current is not a perfect ramp. But if it's designed reasonably, it will be close enough to a ramp to do useful analysis with that approximation.
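Under that ramp approximation (and assuming \$V_o\$ is constant while the switch is closed), the peak-to-peak ripple current follows from \$ v = L\,di/dt \$:

\$ \Delta I_L = \frac{(V_{in} - V_o) \cdot D \cdot T}{L} \$

where D is the duty cycle and T the switching period; the smaller this ripple is relative to the load current, the better the ramp approximation holds.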