POE, POE+, UPOE – Differences and Uses

cisco, power-over-ethernet

I see that Cisco now has a 60 watt POE option. What was the limitation that caused POE to have to be developed in stages of 15, 30, and then 60 watts? Why could they not just do 60 watts from the inception of POE?

Best Answer

Like many standards, the initial version is rarely sufficient for the technology that develops around it. Once people have the "ability" to do something, they will often find new (and previously unexpected) ways of using that ability. 802.11 wireless is a prime example: no one originally expected it to develop into the "wired replacement" it is heading towards today in many cases.

802.3af was largely implemented to power VoIP phones and early access points. The list of devices using it grew quickly, and as it grew and the technology changed, it turned out that 802.3af didn't deliver enough power for everything people wanted. It could power a fixed video camera, but maybe not a PTZ camera. It could power a single-radio access point, but not a dual-radio 802.11n access point (more complex and power hungry). Basic VoIP phones were fine, but VoIP phones with high-definition screens and video capability needed more power than it could supply.

It was also found to be somewhat inefficient in its power delivery. 802.3af offers only coarse classification into a handful of power classes, and short of vendor-proprietary extensions, a PoE port typically budgeted the full 15.4W for a device even if the device didn't need that much.

So another standard, 802.3at (PoE+), was developed to meet these needs. It provides up to 30W of power per port and allows devices to negotiate their actual power needs: a device that only needs 3W can request exactly that, and the switch doesn't have to set aside 30W for it. Interoperation with 802.3af devices is accomplished by budgeting 15.4W for any device that doesn't negotiate for more or less.
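To make the budgeting difference concrete, here is a minimal Python sketch of a switch-side (PSE) power budget. The class wattages are the published 802.3af/at PSE allocations; the port model and the `budget_used` helper are hypothetical, purely for illustration.

```python
# Hypothetical model of a switch's PoE power budget, showing why
# per-device negotiation matters. Class wattages are the PSE-side
# allocations defined by 802.3af (Classes 0-3) and 802.3at (Class 4).

PSE_ALLOCATION_W = {
    0: 15.4,  # 802.3af default when the device signals no class
    1: 4.0,
    2: 7.0,
    3: 15.4,
    4: 30.0,  # 802.3at (PoE+)
}

def budget_used(ports) -> float:
    """Sum the power the switch must reserve for a list of ports.

    Each port is (class, negotiated_watts). With no negotiation
    (negotiated_watts is None) the switch reserves the full class
    allocation; with LLDP-style negotiation it reserves only what
    the device actually asked for.
    """
    total = 0.0
    for poe_class, negotiated in ports:
        total += negotiated if negotiated is not None else PSE_ALLOCATION_W[poe_class]
    return total

# 24 phones that each really need only 3 W:
phones_af = [(0, None)] * 24   # 802.3af-style, no negotiation
phones_at = [(0, 3.0)] * 24    # 802.3at-style, negotiated down to 3 W

print(f"{budget_used(phones_af):.1f} W reserved")  # 369.6 W
print(f"{budget_used(phones_at):.1f} W reserved")  #  72.0 W
```

Twenty-four negotiating 3W phones tie up 72W of budget instead of 369.6W, which is the difference between a modest power supply and a very large one.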

Cisco came up with the 60W UPOE for exactly the reason I gave at the start (they were also among the first to deliver inline power, and more than 15.4W of it, through proprietary pre-standard protocols). If the "ability" is there, then people will come up with ways to use it. Their thought process is: "Why limit what we can do within the power budget? Let's just provide more power."

This is both good and bad. Good, because we will see new capabilities in PoE devices, or entirely new PoE devices, that were not previously possible. Bad, because there are other concerns to keep in mind.

For instance, most people don't consider heat in their cable plant when thinking about PoE. The more power you run through a cable, the more heat it generates that must be dissipated; depending on the category of your cable, this may reduce how far you can run it. Others have raised concerns about bundling: data cabling is often bundled with up to hundreds of other cables tightly bound together, which can result in higher temperatures at the center of the bundle. A rough sketch of the physics follows.
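The heat is just I²R loss in the copper, so a back-of-the-envelope calculation is easy. The sketch below assumes the TIA worst-case conductor resistance for 24 AWG Category 5e (about 9.38 Ω per 100 m per conductor) and nominal PSE output voltages; real cable and real loads will differ.

```python
# Back-of-the-envelope I^2*R heating in a 100 m PoE run.
# Assumed figure (not from the post): 9.38 ohm per 100 m per conductor,
# the TIA worst case for 24 AWG Cat5e.

CONDUCTOR_OHMS_PER_100M = 9.38

def cable_heat_watts(power_w: float, voltage_v: float, pairs: int) -> float:
    """Watts turned into heat along a 100 m run.

    pairs is the number of powered pairs: 2 for 802.3af/at, 4 for UPOE.
    Two pairs form one supply/return loop, and loops share the current
    equally. A pair is two conductors in parallel, so a loop's total
    resistance equals one conductor's resistance (2 pairs * R/2 each).
    """
    loops = pairs // 2
    current_per_loop = (power_w / voltage_v) / loops
    return loops * current_per_loop ** 2 * CONDUCTOR_OHMS_PER_100M

print(cable_heat_watts(15.4, 44.0, pairs=2))  # 802.3af: ~1.1 W of heat
print(cable_heat_watts(30.0, 50.0, pairs=2))  # 802.3at: ~3.4 W
print(cable_heat_watts(60.0, 50.0, pairs=4))  # UPOE:    ~6.8 W
```

Note that delivering 60W over all four pairs dissipates half the heat that 60W over two pairs would (~6.8W versus ~13.5W in this model), which is presumably the physical basis for Cisco's heat-dissipation claim below.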

Cisco claims that UPOE dissipates heat better than 802.3at, but I have yet to find a non-Cisco (or non-Cisco-sponsored) document that bears this out.

Another concern is that the more power you deliver to end devices through PoE, the more power the switch needs to draw. How big would the power supplies on a Cisco 4500 have to be to provide up to 60W on each of its potential 384 UPOE ports, in addition to the power needs of the switch itself? The UPSes providing reliable power to this equipment would then have to be upsized as well.
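A quick calculation shows the scale. The 384 ports and 60W per port come from above; the base switch draw and supply efficiency are illustrative assumptions, not Cisco figures.

```python
# Rough wall-side power draw for a fully loaded UPOE chassis.
# Assumed figures (not Cisco's): 1500 W base draw for supervisors,
# fabric, and fans; 90% AC-to-DC power supply efficiency.

PORTS = 384
WATTS_PER_PORT = 60.0
BASE_SWITCH_DRAW_W = 1500.0
SUPPLY_EFFICIENCY = 0.90

poe_load_w = PORTS * WATTS_PER_PORT                        # 23,040 W of PoE alone
input_power_w = (poe_load_w + BASE_SWITCH_DRAW_W) / SUPPLY_EFFICIENCY

print(f"PoE load:        {poe_load_w:,.0f} W")
print(f"Wall-side draw: ~{input_power_w:,.0f} W")          # ~27,267 W
```

Well over 27kW for a single chassis in the worst case, which is why the UPS and the electrical feed have to grow along with the PoE budget.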

If it shapes up that the industry finds uses for 60W, then the IEEE will draft another amendment/standard.
