Although we have already spent some time chatting about some details of your implementation, I'll try to take you through the steps I take when designing a long-life LiFePO4/solar project and leave you to fill in the specifics.
The first thing to do, regarding all your power conversions and intermediary steps, is to find the losses. If you have a microcontroller that you put to sleep a lot and that uses only 1 mA on average, you are not going to care about one, two or three conversions in between.
But if you have a module that uses 50 mA with 400 mA peaks, you may want to pay very close attention to that module. Will it run off the battery as well, or can you cheat it out of the equation by powering it directly from the solar panel, for example if it reports the amount of energy generated wirelessly once an hour? In that case you may even want to control its converter with the microcontroller, to save energy for charging and other tasks during the 59 minutes each hour you don't need the converter's 3 to 10 mA quiescent current, if that's a factor.
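To get a feel for what that converter duty-cycling buys you, here is a back-of-the-envelope sketch. All the figures (5 mA quiescent when enabled, 5 µA leakage when disabled, one active minute per hour) are illustrative assumptions, not measurements of any particular converter:

```python
# Time-weighted average current over one period, in mA.
def avg_current_ma(active_ma, active_minutes, idle_ma, period_minutes=60):
    idle_minutes = period_minutes - active_minutes
    return (active_ma * active_minutes + idle_ma * idle_minutes) / period_minutes

# Assumed converter: 5 mA quiescent when enabled, ~5 uA leakage when shut down.
always_on   = avg_current_ma(5.0, 60, 0.0)     # converter never disabled
duty_cycled = avg_current_ma(5.0, 1, 0.005)    # enabled 1 minute per hour

print(f"always on:   {always_on:.3f} mA average")
print(f"duty-cycled: {duty_cycled:.3f} mA average")
print(f"saved per day: {(always_on - duty_cycled) * 24:.1f} mAh")
```

With these assumed numbers the average drops from 5 mA to under 0.1 mA, saving over 100 mAh a day, which is exactly the kind of line item that decides whether a small panel can keep up.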
The next thing to consider is: do my MCU and application need a very smooth 3.3 V? LiFePO4 is a very good choice for your application for various reasons. One of them is its minimum voltage of 2 V (2.5 V advised), which you can even safeguard with a 2.7 V brown-out setting. Most 3.3 V MCUs can also handle 3.6 V, which happens to be the LiFePO4 peak voltage. So you may not need anything between the battery and the application at all, which avoids a lot of waste as well.
For reference, LiFePO4 in this case is a very good choice for many reasons:
- Their voltage curve is very flat compared to Li-ion or LiPo: about 80% of the energy is delivered between 3.4 V and 3.2 V, so converter settings are very easy to dimension. (The buck or boost margin you have to account for stays small over most of the battery's energy content.)
- Their internal chemistry is very robust, allowing a much wider temperature range for current drain. Be aware, though, that they still cannot be charged below freezing, so you need to account for that.
- They don't easily outgas, so they don't swell the way LiPos do.
- Damage to a cell is still extremely unlikely to cause explosions or in many cases even fire.
- Their self-discharge over a wide temperature range is usually even marginally lower than that of other lithium chemistries.
As a point of interest: the protected Q&A posted by Russel that you link to for information about Li-ion and LiFePO4 is not very useful; many assumptions made there are not even correct for Li-ion, let alone LiFePO4, starting with the assumed linearity of the chemical charge process. Best to forget about that post.
When it comes to charging and discharging, LiFePO4 currents are quite limited compared to modern LiPo cells, but they are much more permissive toward over-voltage, since the iron-phosphate structure is more resistant to pure lithium plating. I'd still advise you to use a dedicated protection chip or ready-made circuit (for sub-1 A applications I buy them in bulk for nearly no money at all). They drain microamperes, take a load of testing and risk off your hands, and best of all, their analogue circuitry reacts quickly and efficiently to over-current situations caused by damaged wiring.
This allows you to focus on power management of all your modules in your MCU without the risk of overloading the interrupt window in your code and skipping a beat in detecting over-current, over-voltage, etc.
When charging a LiFePO4 at about 0.75C, you can usually keep the current fixed even up to 3.9 V without damage (given the cell is between 5 and 50 degrees Celsius). So if you charge with a fixed current, you can just let the protection switch it off (they are often set to 3.7 V and might allow a 10 ms peak of 3.8 V). In other words, if you have a system (MCU or dedicated) that sources 0.75C with a 4 V or 4.5 V limit, or, depending on the protection, even just 5 V, the protection chip or circuit will take care of it all.
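The MCU-side part of that scheme boils down to a small gating function: keep the fixed charge current enabled only inside the temperature window, back off before the soft voltage limit, and leave the hard cutoff to the protection circuit. A minimal sketch, with the thresholds taken from the figures above (the function name and exact limits are my own choices):

```python
# Decide whether the MCU should keep the fixed charge current enabled.
# The hard over-voltage cutoff is NOT handled here: that is the protection
# chip's job. This only implements the soft, firmware-level gating.
def charge_allowed(cell_temp_c, cell_v, t_min=5.0, t_max=50.0, v_soft_limit=3.9):
    if not (t_min <= cell_temp_c <= t_max):
        return False   # LiFePO4 must never be charged below freezing
    if cell_v >= v_soft_limit:
        return False   # back off before the protection circuit has to trip
    return True

print(charge_allowed(25.0, 3.4))    # normal charging -> True
print(charge_allowed(-2.0, 3.2))    # too cold -> False
print(charge_allowed(25.0, 3.95))   # at the soft limit -> False
```

Keeping this logic dumb and letting the analogue protection be the real safety net is the point: firmware only optimises, it never has to be the last line of defence.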
Assume you have Device 1 that needs 200 mA, but not always, at 2.7 V to 5 V (a broad assumption, but many devices like wireless modules have allowances like that), and a uC that uses about 1 mA active, sleeps at 25 μA as much as possible, and also handles 2.7 V to 5 V. Then you could do something like this:
(Schematic created using CircuitLab.)
D2, D1, R1 and R2 are meant to let the uC know when the solar cell is producing enough for the buck converter to operate. You can use this information, along with the cell's temperature, to control charging of the battery, and to turn on the high-power module when there is enough power.
M2 allows you to actively turn on charging of the battery. M1 allows you to control the extra device.
I added D3 to indicate the presence of the MOSFET's body diode. It's still better to turn the MOSFET on when you start drawing higher currents from the battery, to waste less in its body diode (or in any extra diode you place).
When charging is complete, the cell protection will disconnect and the power rail will float away from 3.8 V up toward 4.25 V; you could even use that to detect completion (compare VCC against an internal reference, for example) and monitor how often you reach the maximum charge state. You can also disable charging for a while afterwards, to avoid continually topping the cell off at its limit voltage; it's better to let it relax/drain a bit before you recharge.
Connecting a battery to a PV cell without some kind of switcher to buck/boost or otherwise control the voltage of your panel will not make for the kind of results you're looking for.
If you just wire the battery and PV in parallel, the effect can be similar to mixing an old battery with a new one: current will flow from one source to the other, and your battery might see the cell as a load, or vice versa.
As for point 2, more current would typically mean faster charging; however, if we inspect the I-V curve of a PV panel, we see that as we draw more current the voltage begins to sag, and we may run into the problem described above.
It is desirable to operate a PV at the MPP because solar can be quite an investment in certain cases. As such, we want to not necessarily draw the most current nor the most voltage, but the maximum amount of usable power with which to do work. Is MPP necessary for a battery charger? Debatable. I think the most important point is to make sure we're only loading the panel, and not loading the battery unintentionally. A SEPIC converter might be used in a battery charging scenario to get you where you need to be.
Best Answer
The minimum requirement is a diode from the PV (solar) panel to the battery.
This will clamp the PV panel voltage to a diode drop above the battery voltage.
Panel output current will be greater than Imp and less than Isc; the panel will effectively behave as a near-constant current source. This arrangement will give you about Vbat/Vmp of the panel's maximum wattage, about 11/18, or around 60%.
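That ~60% figure, worked out: with only a diode, the panel is pulled down to roughly Vbat plus one diode drop while its current stays near Imp, so the harvested power scales with Vbat/Vmp. Using the example numbers from above (an ~11 V battery on an 18 V-Vmp panel):

```python
# Diode-only connection: panel voltage is clamped near the battery voltage,
# current stays near Imp, so harvested power ~ (Vbat / Vmp) * Pmax.
v_bat = 11.0   # battery voltage (V), example figure from the text
v_mp  = 18.0   # panel maximum-power-point voltage (V)

fraction = v_bat / v_mp
print(f"fraction of panel max power: {fraction:.0%}")   # ~61%
```

The ~40% left on the table is the motivation for the buck converter discussed next: it lets the panel sit near Vmp while delivering the battery's lower voltage.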
You can get much closer to full panel output wattage with a simple buck converter. This could be a discrete design using a few transistors and an inductor, but it's easier and almost as cheap to use something like an MC34063. These are very old, with a lower maximum frequency than most modern SMPS ICs, but very flexible, widely available and low cost.
A MC34063 buck converter can be built with the IC, an inductor, a Schottky diode and a few resistors and capacitors. Efficiency can probably approach 90%. Using an external MOSFET will help efficiency.
You can get close to MPPT performance by using the loaded panel voltage as your controlled variable: the converter works to keep Vpanel at Vmp. This can get within a few percent of true MPPT power over much of the range of solar input.
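The constant-voltage pseudo-MPPT loop is just: if the panel sags below Vmp you are drawing too much, so reduce the converter's duty cycle; if it floats above Vmp, draw more. A minimal sketch, where the 17 V Vmp, the step size, the deadband and the function name are all assumed for illustration:

```python
V_MP = 17.0   # assumed panel maximum-power voltage (V)

# Nudge the buck converter's duty cycle so the loaded panel voltage
# stays near V_MP. A sagging panel means we are loading it too hard.
def adjust_duty(duty, v_panel, step=0.01, deadband=0.2):
    if v_panel < V_MP - deadband:
        duty -= step          # lighten the load, let the panel recover
    elif v_panel > V_MP + deadband:
        duty += step          # draw more, pull the panel down toward Vmp
    return min(max(duty, 0.0), 1.0)   # clamp to a valid duty cycle

d = 0.5
d = adjust_duty(d, 15.8)   # panel sagging -> back off
print(round(d, 2))         # 0.49
d = adjust_duty(d, 18.4)   # panel floating high -> draw more
print(round(d, 2))         # 0.5
```

In hardware this runs off a periodic timer, with v_panel read through a resistor divider into an ADC; the deadband keeps the duty cycle from hunting when the panel is sitting right at Vmp.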