BrianG
RC-Monster Admin
Posts: 14,609
Join Date: Nov 2005
Location: Des Moines, IA
07.08.2009, 06:09 PM

I don't fully agree with the theory that charger inefficiency is what causes more mAh to go back in than came out. The charger certainly isn't 100% efficient, but mAh put into the battery is mAh put into the battery, period. I would suspect either an LVC value that was set too low (so the pack takes on more charge) or an actual capacity that is higher than rated (I have an MA 2s2p pack rated at 8Ah that is made up of 4.2Ah cells, which would really be 8.4Ah). Any losses during charging would show up as cell heat, and since lipos stay cold when charging, that's not it. Oh, there may be a tiny bit of heat loss, but nowhere near the amount you'd see with NiMHs.
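
If you want to sanity-check that kind of label math yourself, here's a rough Python sketch. The pack_capacity_mah helper and the numbers are just mine for illustration, not anything from a manufacturer:

# Rough sketch: nominal pack capacity from cell capacity and s/p layout.
# Parallel groups add capacity; the series count only raises voltage.

def pack_capacity_mah(cell_capacity_mah: float, parallel: int) -> float:
    """Nominal capacity of a pack built from identical cells."""
    return cell_capacity_mah * parallel

# Example: a 2s2p pack of 4200 mAh cells sold as an "8Ah" pack
print(pack_capacity_mah(4200, parallel=2))  # 8400 mAh, not 8000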

Determining capacity (mAh) depends on what you accept as the terminating voltage. I like to see a depleted pack sitting at ~3.6v/cell after about 10 minutes of being disconnected from everything. I then just charge it back up and see how many mAh went back in. Alternatively, you could use an Eagletree device to measure the mAh removed during a run, but cells heat up when run hard, and that heat is lost energy, so I wouldn't call that an accurate measure of actual pack capacity.
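
If your charger reports the mAh it put back in, that charge-back check is easy to script. A minimal sketch, assuming a pack that was run down, rested to ~3.6v/cell, then fully recharged; the capacity_estimate helper and the example numbers are made up:

# Sketch: estimate usable capacity from the charge-back method above.

def capacity_estimate(mah_returned: float, rated_mah: float) -> str:
    """Compare the mAh the charger put back in against the label rating."""
    pct = 100.0 * mah_returned / rated_mah
    return f"{mah_returned:.0f} mAh returned = {pct:.0f}% of the {rated_mah:.0f} mAh rating"

# Example: an "8Ah" pack that took 7600 mAh to refill
print(capacity_estimate(7600, 8000))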

And measuring true C rating is even more difficult. Just as above, you have to decide what you accept as a suitable loaded-voltage cutoff. An example with a 5Ah battery:

If I accept 3.5v/cell as the minimum loaded voltage, I will increase the current load until it hits that level. I might get 100A, for example, and I would rate this cell at 20C (100A / 5Ah).

But, if I accept 3.0v/cell as the minimum loaded voltage, I can probably push the current to 130A before it hits that voltage. That works out to 26C (130A / 5Ah).

Which is right? It's the same battery both times, just two different ratings.
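
To make the arithmetic explicit (the c_rating function name and the cutoff/current pairs are just the example numbers above, not any test standard), the C number is simply the maximum sustained current divided by the rated capacity, so the cutoff you accept drives the rating:

# Sketch: "C rating" as max sustained current divided by rated capacity.
# The cutoff voltage you accept decides how much current the pack can hold.

def c_rating(max_current_a: float, capacity_ah: float) -> float:
    """Effective C rating for a given load current and rated capacity."""
    return max_current_a / capacity_ah

capacity_ah = 5.0  # the 5Ah pack from the example
for cutoff_v, max_current_a in [(3.5, 100.0), (3.0, 130.0)]:
    print(f"{cutoff_v:.1f} V/cell cutoff -> {max_current_a:.0f} A -> "
          f"{c_rating(max_current_a, capacity_ah):.0f}C")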

See? This is where battery manufacturers play games with us. Say we have Brand A and Brand B batteries, both rated 5Ah @ 20C, but Brand A costs $50 more. During testing, Brand A holds 3.5v/cell at that load while Brand B sags to 3.0v/cell at the same load. Because Brand A is pricier, people buy Brand B thinking they're getting an equal-quality cell for less, when clearly they aren't.

The only way to get a true measure would be to test every battery consistently with the same procedure, and that kind of consistency is lacking in the industry. Standardized testing has been talked about a number of times, but it goes nowhere because the equipment needed to do proper testing is simply too expensive.

Sorry about the long post, but I hope I made sense.

Last edited by BrianG; 07.08.2009 at 06:11 PM.