Is it generally safe to use any USB charger with any USB device? How does the device know what current the charger is able to provide?

battery-charging, usb

The USB-IF standards have defined these current limits, in chronological order:

USB 2.0 (2000):

  • 100 mA for low-power device
  • 500 mA for high-power device

Battery Charging specification (BCS) 1.0 (2007):

  • 1.5 A for charging ports

USB 3.0 (2008):

  • 150 mA for low-power device
  • 900 mA for high-power device

Battery Charging specification 1.2 (2010):

  • 5 A for charging ports

And, since 2012, USB Power Delivery (PD), which uses a digital negotiation protocol (originally signalled over the VBUS power pin, later moved to the USB-C CC line) to find out the source's maximum current and voltage.
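
As a rough illustration only (nothing in the question implies a specific implementation), a PD sink learns the source's limits by decoding the Power Data Objects in its Source_Capabilities message. A minimal C sketch of decoding a Fixed Supply PDO, with made-up names and an example value:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: decode one Fixed Supply PDO from a PD Source_Capabilities
     * message. Per the PD spec, bits 31..30 = 00 mark a fixed supply,
     * bits 19..10 give the voltage in 50 mV units and bits 9..0 the
     * maximum current in 10 mA units. Names are illustrative only. */
    typedef struct {
        unsigned voltage_mv;
        unsigned max_current_ma;
    } fixed_supply_t;

    static int decode_fixed_pdo(uint32_t pdo, fixed_supply_t *out)
    {
        if ((pdo >> 30) != 0x0)              /* not a fixed-supply PDO */
            return -1;
        out->voltage_mv     = ((pdo >> 10) & 0x3FF) * 50;
        out->max_current_ma = (pdo & 0x3FF) * 10;
        return 0;
    }

    int main(void)
    {
        uint32_t pdo = (100u << 10) | 300u;  /* example: 5 V, 3 A */
        fixed_supply_t cap;
        if (decode_fixed_pdo(pdo, &cap) == 0)
            printf("source offers %u mV at up to %u mA\n",
                   cap.voltage_mv, cap.max_current_ma);
        return 0;
    }

The sink then replies with a Request message for one of the advertised profiles; only after the source accepts may it draw the negotiated current.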


The client device should enumerate the host to determine how much current the host can provide. However, it can safely assume at least 100 mA (150 mA for USB 3.0) even without doing so. If the client finds out that it is connected to a charging port (CP), it can safely draw up to 1.5 A. In the case of a standard downstream port (SDP), the client has to ask for high-power mode using USB protocol communication (which the host might or might not allow) before drawing more current.
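
For the enumeration path mentioned above, the amount a device declares is carried in the bMaxPower field of its configuration descriptor (expressed in 2 mA units for USB 2.0). A minimal C sketch with illustrative values (this is just the standard descriptor layout, not code from any particular device):

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of a USB 2.0 configuration descriptor as reported during
     * enumeration. bMaxPower is in 2 mA units, so 250 means the device
     * asks the host for up to 500 mA. Values are illustrative. */
    struct usb_config_descriptor {
        uint8_t  bLength;             /* 9 */
        uint8_t  bDescriptorType;     /* 0x02 = CONFIGURATION */
        uint16_t wTotalLength;
        uint8_t  bNumInterfaces;
        uint8_t  bConfigurationValue;
        uint8_t  iConfiguration;
        uint8_t  bmAttributes;        /* bit 6: self-powered, bit 5: remote wakeup */
        uint8_t  bMaxPower;           /* maximum current in 2 mA units */
    } __attribute__((packed));

    static const struct usb_config_descriptor example_cfg = {
        .bLength             = 9,
        .bDescriptorType     = 0x02,
        .wTotalLength        = 9,     /* descriptor only, for illustration */
        .bNumInterfaces      = 1,
        .bConfigurationValue = 1,
        .iConfiguration      = 0,
        .bmAttributes        = 0x80,  /* bus-powered */
        .bMaxPower           = 250,   /* 250 * 2 mA = 500 mA requested */
    };

    int main(void)
    {
        printf("device requests up to %u mA\n", example_cfg.bMaxPower * 2u);
        return 0;
    }

The host may reject a configuration whose bMaxPower exceeds what the port can supply; until a configuration is set, the device is limited to the 100 mA (150 mA for USB 3.0) default.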

The USB protocol is somewhat complicated (I doubt chargers are that talkative), so the BC approach is more prevalent, isn't it?

And here is the problem that interests me. When I connect a device to a charger (at least one of the two sides surely doesn't implement USB PD) whose maximum output current is below the standard 1.5 A, the device starts charging with more than 100 mA, even more than 500 mA. But how does it know this is possible? The output rating is lower than the BC 1.5 A, so the charger should not identify itself as a charging port, and thus the device should not exceed 500 mA.


I did some tests to try to falsify this.

I used these chargers (max. current, DC voltage):

  • a few-years-old (FYO) charger with 2 ports – 1 A in total
  • 2016 tablet charger – 2 A, 5.2 V
  • a 2010 charger – 150 mA (LOL – but that is probably a labelling mistake, since it is unbranded …), 5.6 V

and devices:

  • 2011 Xperia mini pro
  • 2013 Moto X
  • 2016 Lenovo Tab 2 A10-30

Test results (same 12 cm cable used in all cases, batteries below 50 %, screen off, maximum charging current measured after a while):

  • 2011 phone + 2010 charger = 400 mA
  • 2011 phone + FYO = 750 mA
  • 2011 phone + tablet charger = 650 mA or 750 mA (I repeated the test several times and every run ends up at one of these two values)
  • 2013 phone + 2010 charger = 600 mA
  • 2013 phone + FYO = 1000 mA (charger max.)
  • 2013 phone + 2016 tablet charger (max. 2 A) = 1300 mA (apparently device max.)
  • tablet + 2010 charger = 1000 mA
  • tablet + FYO = 1150 mA (exceeding the charger's maximum)
  • tablet + its charger = 1900 mA

I noticed two strange things:

  1. Every device draws a different current from the first charger. The tablet has only USB 2 (Snapdragon 210), so it cannot have negotiated the 900 mA high-power supply of USB 3.
  2. The tablet exceeding the FYO charger's maximum current seems dangerous compared to the Moto X, which keeps relatively strictly to the maximum specified on its label.

I read an answer to a similar question, but it confuses me. A charger is a voltage source, so how is it possible for the charger's voltage to drop from the nominal ~5 V to below 2 V? Does a charger as a power source always have such a large internal resistance that the device must lower its current, because of the voltage losses, to keep 4.3 V on the battery (in the case of a Li-ion cell)? (Dropping from 5 V to 2 V at, say, 1 A would imply about 3 Ω of effective source resistance.)

So, the final question:
How exactly is the discovery of the maximum current draw done? And is it safe (because drawing more current than the charger was designed for might be a serious problem – see the tests above)? Or should I compare the ratings of the device's original charger and the actual charger before charging?

And "bonus" question. How does BCS 1.2 device recognize between BCS 1.0 and 1.2 charger (up to 1.5 A vs. 5 A)?


EDIT:

From Maxim:

In USB 2.0, it is during enumeration and configuration that the device learns how much current a USB port can source. Enumeration and configuration require a digital conversation between the device and the host. BC1.1 expands the USB spec. In addition to the USB 2.0 options, BC1.1 also allows "dumb" methods of determining port type so that, with some ports, charging can take place without enumeration.

So the enumeration regarding the power supply is done by the connected device (because it is the device that controls the current drawn from the bus).

Enumeration is described by Maxim as:

The initial data exchange between the device and the host to identify device type.

From wiki:

The USB specification required that devices connect in a low-power mode and communicate their current requirements to the host, which then permits the device to switch into high-power mode.

In fact, it either permits or denies it, because an SDP, too, is either low-power or high-power.

Best Answer

The client device should enumerate the host to determine how much current the host can provide. However, it can safely assume at least 100 mA (150 mA for USB 3.0) even without doing so. If the client finds out that it is connected to a charging port (CP), it can safely draw up to 1.5 A. In the case of a standard downstream port (SDP), the client has to ask for high-power mode using USB protocol communication (which the host might or might not allow) before drawing more current.

This premise is completely wrong, in all aspects.

  1. To start, "client devices" do not "enumerate the host"; the host enumerates the plugged-in devices.

  2. "USB client" is not asking for "high-power mode" using USB protocol. All detection of port power capabilities are done completely outside the USB protocol, it is an independent process and is/was devised as such.

  3. "USB clients" are not asking for anything. The host provides a "charging signature" that advertises its capability, and the "client" just takes everything it can or what it needs.

A "USB client" however might have different needs that are not necessary at the top of port power capability. The device might have its battery fully charged, and won't draw any more charge from the port or draw only a little. Or it can have a normally-discharged battery, and then it will draw the maximum, which depends on particular battery, and might not be equal to the charging port capability. Or the device can have a nearly dead or very weak battery, so it will take only a small "pre-charge" current.

More, normal mobile devices have circuitry that monitors the voltage level on VBUS and reduces their charging consumption so that the VBUS level is maintained at least at a 4 to 4.5 V level. The voltage drop can be due to the cable being too thin, or to the current draw exceeding the nominal rating of the charging port. All chargers are supposed to be not just a source of fixed voltage: they should have a "soft" cut-off load curve, so the device can reduce its consumption and maintain a healthy VBUS level. This feature differentiates chargers from constant-voltage power supplies.
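
As an illustration of that behaviour (everything here is a toy model, not any real charger IC's interface), a device's input-current limiter typically steps its draw down until VBUS recovers above a minimum threshold:

    #include <stdio.h>

    #define VBUS_MIN_MV   4400    /* keep VBUS above ~4.4 V           */
    #define ILIM_STEP_MA   100    /* back-off step                    */
    #define ILIM_FLOOR_MA  100    /* never go below the USB default   */

    /* Toy adapter model: 5.0 V open-circuit with ~0.8 ohm effective
     * source resistance (charger droop plus cable), so VBUS sags as
     * the current draw rises. Purely illustrative numbers. */
    static int vbus_mv(int draw_ma) { return 5000 - (draw_ma * 8) / 10; }

    int main(void)
    {
        int ilim_ma = 1500;       /* start at the BC 1.2 charging-port limit */

        /* Reduce the input current limit until VBUS stays healthy. */
        while (ilim_ma > ILIM_FLOOR_MA && vbus_mv(ilim_ma) < VBUS_MIN_MV)
            ilim_ma -= ILIM_STEP_MA;

        printf("settled input current limit: %d mA (VBUS = %d mV)\n",
               ilim_ma, vbus_mv(ilim_ma));
        return 0;
    }

With these made-up numbers the loop settles at 700 mA: the kind of device-dependent plateau the experiments show, set by how much droop each device tolerates rather than by any figure the charger reports.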

Considering the information given above, you should re-evaluate all conclusions from your experiments.