You need to distinguish between MPPT and what you do with the energy transferred.
MPPT is the process of acquiring the maximum available energy from a source under given input conditions and/or of delivering maximum energy to the load under given conditions. For MPPT to be able to function, the load has to be willing to ACCEPT all the energy offered.
In most cases when charging a battery from a solar source, the energy that the battery COULD take is greater than the energy available, and MPPT serves to optimise the transfer of what is available.
When the battery is in a mode where it can accept less energy than the panel can optimally make available MPPT cannot be fully utilised - as the MAXIMUM POWER is in excess of the desired power. This is not a "fault" of the MPPT system - just a characteristic of the load.
So:
First of all, the way I understand it, the implementation of MPPT works by varying the resistance of the circuit such that panel output voltage * current drawn is at the maximum point on the power curve.
Almost - but the difference is crucial. The MPPT controller varies the load so that the load will accept as much power as possible OR so that the panel will deliver as much energy as possible. This effectively alters the "effective" or dynamic resistance but there is no resistance in the traditional sense and no intentional energy dissipation. The most useful analogy is that an MPPT system is an impedance translator that matches source and load impedances. This is not a familiar 'image' but is probably closer to what it does than many other 'metaphors'.
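One common way a controller "varies the load" in practice is the perturb-and-observe algorithm: nudge the operating point, keep whichever direction increased power. A toy sketch follows; the panel model and every constant in it are invented for illustration, not taken from any real panel:

```python
# Hedged sketch of perturb-and-observe MPPT with an invented panel model.

def panel_current(v):
    """Toy I-V curve: near-constant current that collapses toward Voc."""
    i_sc, v_oc = 1.0, 20.0          # short-circuit current, open-circuit voltage
    if v >= v_oc:
        return 0.0
    return i_sc * (1.0 - (v / v_oc) ** 8)

def perturb_and_observe(v_start=12.0, step=0.1, iterations=500):
    """Nudge the operating voltage and keep whichever direction
    increased panel power - the classic P&O loop."""
    v = v_start
    p_prev = v * panel_current(v)
    direction = +1
    for _ in range(iterations):
        v += direction * step
        p = v * panel_current(v)
        if p < p_prev:              # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev                # ends up oscillating around the MPP
```

In a real controller the "voltage" is set indirectly, by adjusting the converter's PWM duty (i.e. the effective load impedance the panel sees), which is exactly the impedance-translator view above.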
Then the battery charger section takes whatever voltage is 'given' by the MPPT section and drops it to within the battery charging voltage band (e.g. 13.5-14.5V for lead acid), maybe using a buck converter.
No. The conversion to appropriate level for charging is very much part of the MPPT system proper. The buck converter is essentially an impedance converter - we just don't usually see it that way.
Further, the way I understand it, most MPPT battery chargers use the constant-current bulk-charge method followed by a constant voltage topping method (perhaps followed by a constant voltage float charge).
The charge method is independent of the MPPT action proper. The designer decides on a battery charging algorithm set and the MPPT controller then matches the charger requirement to the panel energy at each stage. CCCV /float / ... are all just "clients" of the MPPT process.
Now what mystifies me is how the charger is supposed to run a constant-current bulk-charge algorithm while using the power coming to it efficiently. Because, if the charger simply regulates the output voltage via a sense resistor such that the output current remains at some predetermined constant level, that voltage is dependent on the battery characteristics and charge state, so essentially the battery charging power curve is fixed for a fixed current.
As above, for MPPT to work the energy that COULD be used has to be more than the energy available. MPPT then makes as much available as it can.
When charging LA (lead acid) batteries from mains power, the choice of charge rate is usually determined by battery factors or cost. eg a domestic battery charger may provide 2A max when 10A would be acceptable, due to cost considerations. But in a designed system where cost is important but secondary, a decision may be made to charge at a maximum of C/5 or C/10 or whatever. Assume C/10 for an example system. If the same system is used with a solar source and the solar source is able to provide more than C/10, then MPPT will not be fully used. But if the PV system can provide, say, about C/15, then MPPT will allow I to be set as high as possible without exceeding C/10.
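The C/10 vs C/15 reasoning amounts to clamping: charge at whatever the source can deliver, up to the design maximum. A minimal sketch (all names and figures are illustrative):

```python
def charge_current(i_available_a, capacity_ah, c_rate_divisor=10):
    """Charge at whatever the source can supply, clamped to the design
    maximum (here C/10). Names and figures are illustrative."""
    i_max = capacity_ah / c_rate_divisor
    return min(i_available_a, i_max)

# 100Ah battery, C/10 = 10A design limit:
i_sunny = charge_current(14.0, 100)  # panel could exceed C/10: clamped, excess unused
i_dim = charge_current(6.7, 100)     # panel limited (~C/15): MPPT sets I as high as it can
```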
SO
So what if the MPPT suddenly gives more power? Does it get wasted? What if the MPPT produces less power? Does the constant current become not so constant?
As above, if MPPT can provide more than C/10 under CC conditions then, yes, not all the available energy is used - if there is no other use for the energy it IS "wasted".
Lastly, if there is such a thing as a constant-voltage MPPT charger (which makes more sense to me for this situation) how would transition from bulk charge to topping charge be carried out during changing conditions (changing current). Plateau detection for instance would be impossible.
Again, it is up to the battery charger designer. There is no "MPPT charger" per se - there is just a charger that uses MPPT to get as much energy as possible. IF a charger implemented CV charging it would take a certain amount of energy in a given situation. If there is not enough energy to do this then CV at the chosen V is not possible, BUT MPPT will provide as much energy as possible. If MPPT can meet the whole need then some energy will be "wasted".
A little used but sensible approach to excess energy is to use it for heating in some form. This could be water heating, space heating, fruit drying or whatever. A heater load is close to 100% efficient (with losses in supply leads and connections producing heat which may not be useful). If the heat is useful then it is an excellent means of utilising otherwise "wasted" energy. Note that this applies to energy directly sourced by a PV panel. It does not apply directly to energy which has been stored in a battery etc as cycle lifetimes and conversion efficiencies and other factors also apply.
But how can a constant-current charging algorithm (which might take hours) be determined when the future conditions are unknown? Either it is not constant-current or the charger must leave a margin of error and thereby waste a lot of power.
Constant current is applied until Vbattery reaches some threshold. Icc is usually the MAXIMUM allowed for whatever reason. If less is available it just takes longer before Vmax is reached.
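As a toy illustration of "less current just takes longer": with a crude linear battery model (invented for this sketch - real lead-acid voltage curves are not linear), halving the supplied current doubles the CC time:

```python
def cc_hours_to_vmax(i_supplied_a, v_start=12.0, v_max=14.4, ah_per_volt=8.0):
    """Crude linear battery model, invented for illustration only:
    terminal voltage rises in proportion to charge delivered.
    Returns hours of CC charging needed to reach v_max."""
    ah_needed = (v_max - v_start) * ah_per_volt
    return ah_needed / i_supplied_a

t_full = cc_hours_to_vmax(10.0)  # full design current
t_half = cc_hours_to_vmax(5.0)   # half the current: same charge, twice the time
```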
Imax in a mains system is usually set by battery factors and cost, and Imax_actual will usually equal Imax_design.
In a PV system with limited energy I_right_now will usually be < I_max_allowed and MPPT can help.
If I_max_possible_now is usually > I_CC_Max then you are wasting money using MPPT.
In a CONSTANT CURRENT charging system, if the MPPT is able to produce enough power to meet that current level, then everything is OK, but if suddenly clouds pass between the sun and the panel, and there is not enough power to sustain that current level, it obviously drops off. But then what does the 'smart charger' think of that? It may look as if the battery suddenly had enough. Any ideas on how MPPT chargers might deal with this situation?
This should not be a problem in the CC mode - if Vbattery is < Vmax then you supply what current you can and continue. MPPT works in this case - it takes whatever PV energy is available, adjusts the load impedance to maximise panel output and then supplies current to the charger in such a manner that charging current is maximised.
I confused constant voltage and current with the last bit of that comment about it looking as if the battery had enough. But the question holds. You say that if the MPPT can provide more power when there is a need for it, then it does so, and this is why it is so useful. But what about the stability of the charger feedback information, if current and voltage are both changing all over the shop? Doesn't a charger need to build a nice steady curve to know where its at? My mains charger takes 10min to do this at the start.
You need more arcane skills - black magic is involved:-).
You are asking questions about at least two topics and they are essentially independent but tightly coupled in many applications.
Q1 is "What does MPPT do" and is well enough covered above.
The 2nd question is "What algorithm should I use to charge this battery in the face of a variable energy supply which sometimes cannot provide as much energy as I can use". I assume that this all relates to a "12V" lead acid system as that's what you mention. I've done (too) much playing with small solar NiMH charging and am working on small solar LiFePO4 charging at present. I have less solar LA experience but the chemistries' requirements overlap somewhat.
MPPT may be useful in the CC mode, particularly at the start of CC; but as I falls, a stage is reached where the PV can cope without MPPT. If you interrupt the controller temporarily and it is PV aware, it is easy in most cases to pick up where you left off. CC is easy.
A charger in CC mode should not get lost and should not need much effort or time to establish that CC is what is required - EXCEPT when the manufacturer is doing, or trying to do, or pretending to do, something unusual or 'fancy'.
I do not know what your mains charger does or why - model and link would be useful.
LA CCCV basic charging is relatively straightforward. Manufacturers may add in analysis or conditioning or arcane hopefulness and one needs to know what they say they are achieving.
In most cases, if Vbat is > Vmin_normal then you can start in at the defined CC. If you do not know the battery capacity you will not know what CC_max is, and they may be establishing capacity by looking at deltaV or whatever at the start, by applying various standard currents and seeing how the voltage changes.
Once you hit Vmax and enter CV the battery is in charge of current.
When the controller decides Iccv has fallen to a target level or max allowed charge time has expired it may terminate charge or apply a topping charge - but again the battery is in charge of the current.
CV is a problem if low sun causes Ichg to drop below Iterminate, or if you "wake up" with the battery late in the CV charge cycle and sun energy is low. ie the low sun energy may reduce Ichg under CV so that it drops below Iterminate, but a bit of timing intelligence and an awareness of the state of the available energy will probably suffice. eg if you were in CV mode for 30 minutes and it usually takes ~= 4 hours, and solar energy plummeted or night time came, you can make decisions to deem the cycle incomplete even though Ichg is < Iterminate.
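That "timing intelligence" could be sketched roughly like this; the thresholds and the solar_ok flag are assumptions for illustration, not from any real controller:

```python
def cv_charge_complete(i_chg, i_terminate, minutes_in_cv,
                       typical_cv_minutes=240, solar_ok=True):
    """Decide whether a falling CV-stage current really means 'charged'.
    If the sun has dropped out, or we only just entered CV, a low
    current is a supply artifact, not a full battery.
    All thresholds here are illustrative."""
    if i_chg > i_terminate:
        return False                           # still tapering normally
    if not solar_ok:
        return False                           # deem cycle incomplete: supply collapsed
    if minutes_in_cv < typical_cv_minutes / 4:
        return False                           # far too early to plausibly be done
    return True
```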
If I'm not mistaken though, with constant-current bulk-charging, the current setting (however this is determined, probably by battery datasheet) must be less than the expected average available current (for a particular power point on the charging voltage curve) or otherwise it will be difficult to maintain that current in changing weather conditions. Now this means that the charger must be able to provide more power by some margin than the batteries reasonably take, which means that the system is still wasting power (maybe inevitably). – William
" ...If I'm not mistaken though ..." -> You are :-).
Or, you are not wrong if you define it as NECESSARY to have CONSTANT current CC bulk charging - but it is almost never NECESSARY to do this.
If you decide that you want eg 2000 deep cycles out of a battery, and the way to do this is to charge at eg C/12.3456 or whatever, and nothing else, then the only way to do this with certainty in a solar powered system is to ensure that there are never storms or clouds (or nights).
BUT in most cases CC Imax is not (as I have said above) some special magic figure that must be adhered to closely, but rather a maximum set for (usually) battery health reasons. If you decide that 12A is the max, it is unlikely that using a varying 8 to 12A, and occasionally 2A or 0A, is going to do any harm. The end of CC charging is almost never set by timer and calculation but by the battery reaching a desired voltage. Taking somewhat longer to get there than the absolute minimum is not usually a problem.
An area where having at least some Imin available is required is in providing a topping charge, where the PV may be running out of ability to provide enough I at elevated V, and you may be able to charge the battery forever at below the required rate and never complete the charge. I've seen this exact problem reported in systems with very small PV capacity relative to battery capacity.
Although we have already spent some time chatting about some details of your implementation, I'll try to take you through the steps I take in designing a long-life LiFePO4/solar project and leave you to fill in the specifics.
First thing to do, with regards to all your power conversions and intermediary steps, is find the losses. If you have a microcontroller that you put to sleep a lot and uses only 1mA on average, you are not going to care about one, two or three conversions in between.
But if you have a module that uses 50mA with 400mA peaks, you may want to pay very close attention to that module: will it run off the battery as well, or can you cheat it out of the equation by powering it directly from the solar panel, for example if it reports the amount of energy generated wirelessly once an hour? In that case you may even want to control its converter with the microcontroller, to save energy for charging and other things during the 59 minutes each hour you don't need the converter's 3~10mA quiescent current, if that's a factor.
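The saving from gating the converter can be estimated with simple duty-cycle arithmetic; the figures below are the illustrative ones from the paragraph (50mA module, 5mA quiescent, 1 active minute per hour):

```python
def average_current_ma(active_ma, active_minutes, quiescent_ma,
                       period_minutes=60, gated=True):
    """Average supply current of a converter+module that is needed only
    a few minutes per period. If the MCU gates the converter off when
    idle, its quiescent drain disappears for the rest of the hour."""
    idle_minutes = period_minutes - active_minutes
    idle_ma = 0.0 if gated else quiescent_ma
    return (active_ma * active_minutes + idle_ma * idle_minutes) / period_minutes

always_on = average_current_ma(50, 1, 5, gated=False)  # quiescent drain dominates
duty_cycled = average_current_ma(50, 1, 5)             # well under 1mA average
```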
The next thing you could consider is: Does my MCU and application need a very smooth 3.3V? LiFePO4 is a very good choice for your application for various reasons. One of them is its minimum voltage of 2V (2.5V advised), which you can even safeguard with a 2.7V brown-out setting. Most 3.3V MCU's can also handle 3.6V, which happens to be the LiFePO4 peak voltage. So you may not necessarily need anything between the battery and the application, which saves a lot of waste as well.
For reference, LiFePO4 is in this case a very good choice for many reasons:
- Their voltage curve is very flat compared to Li-Ion or LiPo. About 80% of their energy is delivered between 3.4V and 3.2V, so they offer very easy-to-dimension conversion settings. (The buck or boost margin to account for remains small over most of the battery's energy content.)
- Their internal chemistry is very robust, allowing a much wider temperature range of current drain. Be aware, though, that they still cannot be charged below freezing, so you need to account for that.
- They don't easily outgas, so they don't inflate as weirdly as LiPo's.
- Damage to a cell is still extremely unlikely to cause explosions or in many cases even fire.
- Their self discharge over a wide temperature range is usually even marginally lower than that of other Lithium chemistries.
As a point of interest: the protected Q&A posted by Russel that you link to for info about LiIon and LiFePO4 is not very useful; many assumptions made there are not even correct for LiIon, let alone LiFePO4 - to start with, the assumed linearity of the chemical charge process. Best to forget about that post.
When it comes to charging and discharging LiFePO4, the currents are quite limited compared to modern LiPoly cells, but they are much more forgiving of over-voltage, since the Iron Phosphate structure is more resistant to pure lithium plating. But I'd still advise you to use a dedicated protection chip or ready-bought circuit (for sub-1A applications I buy them in bulk for nearly no money at all). They drain microamps, take a load of testing and risk off your hands, and the best thing is, they feature analogue circuitry that reacts quickly and efficiently to over-current situations caused by damaged wiring.
This will allow you to focus on power-management of all your modules in your MCU without the risk of overloading the interrupt window in your code and then skipping a beat in detecting over-current, over-voltage, etc.
When charging a LiFePO4 at about 0.75C, you can usually keep the fixed current even up to 3.9V without damage (given the cell is between 5 and 50 degrees Celsius), so if you charge with a fixed current, you can just let the protection switch it off (they are often set to 3.7V and might allow a 10ms peak of 3.8V). So if you have a system (MCU or dedicated) that supplies 0.75C with a 4V or 4.5V limit, or, depending on the protection, even just 5V, the protection chip or circuit will take care of it all.
Assume you have Device 1, which needs 200mA, but not always, at 2.7V to 5V (a broad assumption, but many devices like wireless modules have allowances like that), and a uC that uses about 1mA active, sleeps at 25μA as much as possible, and also handles 2.7V to 5V. Then you could do something like this:
(Schematic created using CircuitLab)
D2, D1, R1 and R2 are meant to let the uC know when the solar cell is producing enough for the buck converter to operate. You can then use this information, along with the temperature of the cell, to control charging of the battery, and to turn on the high power module when there is enough power.
M2 allows you to actively turn on charging of the battery. M1 allows you to control the extra device.
I added D3 to indicate the presence of the MOSFET's body diode. It's still better to turn on the MOSFET when you start drawing higher currents from the battery, to waste less in the body diode (or in an extra diode you place).
When charging is complete, the cell protection will release and the power rail will float up from 3.8V toward 4.25V; you could even use that to detect charge completion (compare VCC against an internal reference, for example). You can then monitor how often you reach the maximum charge state. You can also then disable charging for a while, to avoid continually topping the cell off at its limit voltage - it's better to let it relax/drain a bit before you re-charge.
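As a rough sketch of the divider sensing described above (D2/D1/R1/R2): the firmware just compares an ADC reading against a "sun is up" threshold. All component values, the single diode drop, and the threshold are assumptions for illustration, not taken from the schematic:

```python
def panel_adc_code(v_panel, r_top, r_bottom, v_ref=3.3, adc_bits=10,
                   diode_drop=0.6):
    """ADC code the uC reads from a resistive divider on the panel,
    behind one diode drop. All values are illustrative assumptions."""
    v_div = max(0.0, v_panel - diode_drop) * r_bottom / (r_top + r_bottom)
    v_div = min(v_div, v_ref)            # the ADC clips at its reference
    return int(v_div / v_ref * ((1 << adc_bits) - 1))

SUN_UP_THRESHOLD = 400                   # would be picked by experiment

def solar_operational(v_panel):
    """True once the panel voltage is high enough to run the buck."""
    return panel_adc_code(v_panel, 100e3, 47e3) > SUN_UP_THRESHOLD
```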
Best Answer
A solar panel is a current source over most of its characteristic; the voltage it shows is "set" by what you connect to it.
When you connect a battery to it, the voltage will be set by that battery; connect a charger to it, and the voltage will be set by the input impedance of that charger. This voltage may be nowhere near the voltage at the MPP; for instance, a 5V battery wouldn't be a good match for a 12V solar panel.
The idea of an MPPT is to keep the panel producing maximum power under all circumstances. The MPP is where V * I is maximal, or the maximum rectangular area that will fit under the panel's I/V characteristic, about 10W in this example:
(Image source - my website: Using a solar panel for USB charging)
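Finding the MPP on a sampled I/V curve is just picking the largest V*I rectangle. A sketch with made-up sample points loosely shaped like a small "12V" panel (none of these figures come from a real datasheet):

```python
def find_mpp(iv_points):
    """Pick the (V, I) sample whose rectangle V*I is largest."""
    return max(iv_points, key=lambda vi: vi[0] * vi[1])

# Made-up samples loosely shaped like a small "12V" panel:
curve = [(0, 0.70), (5, 0.69), (10, 0.67), (14, 0.65),
         (16, 0.62), (17, 0.45), (18, 0.0)]
v_mpp, i_mpp = find_mpp(curve)   # the knee of the curve, around 16V here
```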
Suppose you connect a 5V-output DC/DC converter to a solar panel; it would work fine, but it would set its input impedance (by varying the PWM) to a point that doesn't use the full power of the solar panel (the "you are here" point), only about 3W of the available 10W.
By varying the input impedance, you can arrive at the MPP. The optimal way of doing that is by varying the input impedance of the DC/DC converter and measuring if more power is delivered at its output.
A simpler (but less efficient) way is setting a fixed point on or near the MPP, say 16V, and reducing the input current of the DC/DC converter (by varying its input impedance by varying the PWM) when the panel voltage drops below that 16V. This method is often used these days for small applications.
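One regulation step of this fixed-voltage method might look like the following sketch; the setpoint, step size, and the duty-to-input-current relationship (here assumed: more duty draws more input current) are illustrative:

```python
def adjust_duty(v_panel, duty, v_set=16.0, step=0.01):
    """One step of the fixed-voltage method: if the converter is
    dragging the panel below the setpoint, draw less input current
    (lower duty); if there is headroom, draw more. Constants and the
    duty/current relationship are illustrative assumptions."""
    if v_panel < v_set:
        return max(0.0, duty - step)   # panel sagging: ease off the load
    return min(1.0, duty + step)       # headroom: take more power
```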
In both methods, the input of the DC/DC converter is regulated, not its output, the output behaving like a current source.
The charger is a separate part of the system, and has its own regulation for correctly charging your batteries, using the output of the DC/DC converter.