I was looking at a couple of charging ICs (the BQ2002, for instance), and some of the fast-charger ICs can charge at up to 2 A while running off a 5 V supply. In that case, charging an empty ~1 V NiMH cell means roughly 4 V × 2 A ≈ 8 W is being dissipated by the IC? Do they use some kind of internal buck converter to step the voltage down?
Electronic – How do integrated charger ICs dissipate differences in VCC and the battery voltage
battery-charging integrated-circuit
Related Solutions
You said yourself that depending on which part of the charging process you are in, you either keep the current constant or maintain a constant voltage. That's going to require some kind of controller, though not necessarily PID or some subset thereof.
The characteristics of a charging battery change very slowly relative to what even a slow microcontroller can measure and react to. Batteries also don't exhibit second-order effects such as inertia, the way motor speed as a function of current does. Together, these allow very simple control schemes to work well.
Probably the simplest control scheme for a switching power supply is pulse on demand. It is always stable and robust, although it results in more ripple than a more finely tuned control scheme can achieve.
When the output is below the regulation threshold, you do a pulse; otherwise you don't. To avoid inductor saturation, you might always skip the slot immediately after a pulse, but that's a detail.
I've done pulse-on-demand switching power supplies with the PIC 10F204 a number of times. The code spins in a loop checking the comparator output as long as it indicates the output is above the regulation threshold. When the output falls below the threshold, the code following the loop executes and produces a pulse. The instruction cycles needed to jump back to the top of the loop and do the next comparator check usually take enough time that it's OK to do the next pulse right away if the comparator indicates the output is still below the threshold.
Sometimes this can go meta-stable, producing two pulses in a row before the feedback catches up to the output having gone higher, but in all cases it remains stable as long as the maximum load isn't exceeded.
This sort of system is fine for battery charging, except that you have two thresholds: one for voltage and one for current. You only do a pulse if the output is below both. Higher-level logic can adjust the limits as the battery progresses through the charging procedure.
Maybe you are getting confused because the PWMs of the two systems serve different functions. The PWM in the Microchip document controls the buck-converter transistor. There are two PWMs in the NEC document: one controls the buck-converter transistor, while the other controls a charge-control transistor. The latter is used to decide when voltage and current are measured.
So in the case of the NEC document, it is as pjc50 mentions: there is no current flow when the PWM (for the charge-control transistor) is off, so you cannot measure the current then. Measuring the battery voltage while no charging (or discharging) current is applied has an advantage: you are closer to the real open-circuit voltage of the battery. Only closer, because of the relaxation effect of batteries, which plays out on a seconds-to-minutes timescale, much slower than your typical PWM signals.
Why exactly it would result in erroneous operation if you measured the voltage and current during the on-time of the PIC's PWM is not really obvious to me. The only hint I could find is that the PWM gets disabled and adjusted after the measurements, which should be done while the PWM output is low (otherwise you will get strange PWM pulses).
As the PIC solution involves a buck converter but no charge-control transistor, there is always current flowing into the battery regardless of the state of the PWM, so you don't get the benefit of measuring closer to the open-circuit voltage.
Generally you want your measurements as close to the open-circuit voltage as possible if you are doing voltage-based state-of-charge estimation. So ideally you would measure the voltage while no current is flowing into or out of the battery, after waiting some minutes to let the battery settle (relaxation effect); in practice the waiting is usually omitted. Current introduces an error because of the internal resistance: the voltage you measure while charging is too high, so you'd estimate the state of charge as too high.
To be on the safe side, you'd still switch from constant-current to constant-voltage mode when the measured voltage reaches 4.2 V, primarily because you aren't monitoring the internal resistance of the battery at the same time to calculate the internal cell voltage. (And I think that approach to fast charging was recently patented, for whatever reason.)
Best Answer
The BQ2002 and similar ICs don't actually carry the charging current themselves. They are just controllers; they don't regulate anything on their own. They have an output (the CC pin) which is used to indirectly control an external element that passes the high current.
You can have a look at a reference design provided by TI. The CC output controls an LM317, which is used as the regulating element (and where the thermal considerations indeed apply).
This way you get much greater flexibility in your design (use whatever regulator you want, linear or switched, with whatever specs your specific case needs).