Just a simple physics question: per this link, Cat5e twist rate is lower than that of Cat6, and Cat6 cables are (on average) thinner. A higher twist rate means a longer copper wire per length of cable. Both these facts would make one assume that the resistance per unit distance of a Cat6 cable would be higher than that of a Cat5e. Presumably this is not the case, as modern PoE standards are allowing for higher power while keeping the voltage the same, meaning that more power would be dissipated in the cable if it were true. How is this mitigated? Is the copper of higher grade?
PoE losses: Cat5e vs Cat6
cable · power-over-ethernet
Related Solutions
Like many standards, the initial version is rarely sufficient for the developing technology. Once people have the "ability" to do something, they will often find new (and previously unexpected) ways of using that ability. 802.11 wireless is a prime example: no one originally expected it to develop into the "wired replacement" we are heading towards today in many cases.
802.3af was largely implemented to power VoIP phones and early access points. The list of devices that started using it grew quickly, and as it grew and the technology changed, it was found that 802.3af didn't deliver enough power for everything people wanted. For instance, it could power a video camera, but maybe not a PTZ video camera. Or it could power a single-radio access point, but not a (more complex and power-hungry) dual-radio 802.11n access point. Basic VoIP phones were fine, but VoIP phones with high-definition screens and video capability didn't get the power they needed.
It was also found to be somewhat inefficient in its power delivery. Short of vendor-proprietary extensions, a PoE port always delivered 15.4 W of power to the device, even if the device didn't need that much.
So, another standard was developed to meet these needs: 802.3at. This provides up to 30 W of power and allows devices to negotiate their power needs. If a device only needs 3 W, the switch can deliver just that instead of the full 30 W. Interoperation with 802.3af devices is accomplished by delivering 15.4 W to any device that doesn't negotiate for more or less.
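The worst-case power available at the powered device under each standard falls out of a simple calculation. The sketch below assumes the commonly cited parameters (minimum PSE voltage of 44 V for 802.3af and 50 V for 802.3at, maximum currents of 350 mA and 600 mA, and loop resistances of 20 Ω and 12.5 Ω); it is illustrative, not a normative reading of the standards.

```python
def pd_power(v_pse, i_max, r_loop):
    """Worst-case power at the powered device (PD):
    the PSE voltage minus the I*R drop across the cable,
    times the maximum current."""
    v_pd = v_pse - i_max * r_loop
    return v_pd * i_max

# 802.3af: 44 V min PSE voltage, 350 mA, 20 ohm loop resistance
print(round(pd_power(44.0, 0.350, 20.0), 2))   # -> 12.95 (W)

# 802.3at: 50 V min PSE voltage, 600 mA, 12.5 ohm loop resistance
print(round(pd_power(50.0, 0.600, 12.5), 2))   # -> 25.5 (W)
```

The results match the familiar 12.95 W (802.3af) and 25.5 W (802.3at) PD power figures, which is why the PSE must budget 15.4 W and 30 W respectively: the difference is lost in the cable.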
Cisco came up with the 60 W (UPoE) for exactly the reason I gave at the start (they were also one of the first to deliver inline power, and more than 15.4 W, through proprietary protocols). If the "ability" is there, then people will come up with ways to use it. Their thought process is "why limit what we can do within the power budget? Let's just provide more power."
This is both good and bad. Good because we will see new abilities by PoE devices or entirely new PoE devices that were not previously realized. Bad because there are other concerns to keep in mind.
For instance, most people don't consider heat in their cable plant when thinking about PoE (for instance look here or here for examples). The more power you run through a cable, the more heat that it generates and needs to be dissipated. This may reduce how far you can run cables depending on the category of your cables. Others have raised concerns because data cabling is often "bundled" with up to hundreds of data cables being tightly bound together and this can result in higher temperatures in the center of the bundles.
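The heat generated in the cable is just I²R loss, so thinner cable (higher loop resistance) runs hotter at the same current. The loop-resistance figures below are assumed round numbers for a 100 m run, purely for illustration:

```python
def cable_heat_w(i_amps, r_loop_ohms):
    """Power dissipated as heat in the cable itself: P = I^2 * R."""
    return i_amps ** 2 * r_loop_ohms

# Illustrative loop resistances for a ~100 m run (assumed, not spec values)
for label, r_loop in [("thin patch-grade cable (~12.5 ohm)", 12.5),
                      ("solid 23 AWG horizontal cable (~6 ohm)", 6.0)]:
    print(f"{label}: {cable_heat_w(0.600, r_loop):.2f} W of heat")
```

At 600 mA the thin cable dissipates roughly twice the heat of the thick one, and in a tightly bound bundle that heat has nowhere to go but the neighboring cables.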
Cisco claims that UPoE is better than 802.3at at heat dissipation, but I have yet to find a non-Cisco document (or non-Cisco sponsored) that bears this out.
Another concern is that the more power you need to deliver to end devices through PoE, the more power the switch needs to draw. How big would the power supplies on a Cisco 4500 have to be to provide up to 60W on its potential 384 UPoE ports (in addition to the power needs of the switch itself)? UPSes to provide reliable power to these pieces of network equipment would then have to be upsized as well.
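The scale of that power-supply question is easy to put numbers on. A quick back-of-the-envelope calculation (the 90% PSU efficiency figure is an assumption for illustration):

```python
ports = 384          # potential UPoE ports on the chassis
w_per_port = 60      # UPoE maximum per port

poe_budget_w = ports * w_per_port        # PoE output budget alone
input_w = poe_budget_w / 0.90            # assumed ~90% PSU efficiency

print(f"PoE budget: {poe_budget_w} W")           # 23040 W
print(f"Approx. input power: {input_w:.0f} W")   # ~25600 W
```

Over 23 kW of PoE output before the switch has powered a single ASIC, which is why fully loaded UPoE chassis (and their UPSes) need to be sized so generously.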
If it turns out that the industry finds uses for 60 W, then the IEEE will draft another amendment/standard.
6,000+ feet... so? Cable is CHEAP compared to the labor to install it. If you're constrained by budget, skip the terminations (the patch panels back at the 'core' and the jacks at the 'ends'.)
Figure out what you really need, (the number and type of cable needed to serve the existing users/systems). And then install MORE; I'd suggest install TWICE what you currently need. But if you're pulling into those little columns in the cube corners, maybe just leave the extra cables up in the ceiling (labeled!), or only pull two extras per current four... or something like that.
Best Answer
The cable category defines the high-frequency parameters of the cable (mostly attenuation and crosstalk). For PoE, the series (DC loop) resistance of the cable matters, and that isn't defined by the category.
Good plenum cable grew somewhat thicker by custom from Cat-3 over Cat-5 to Cat-6(A); patch cables vary greatly, down to 30 AWG.
Essentially, the thicker the conductors (lower AWG), the better the PoE performance. The initial IEEE 802.3af-2003 defined a maximum loop resistance per pair (thanks jonathanjo) of 20 Ω; 802.3at-2009 lowered that to 12.5 Ω, and the new 802.3bt-2018 to 6.25 Ω. Accordingly, the maximum current increased from 350 mA (af) to 600 mA (at) to 1860 mA (bt 4-pair), enabling the power increase.

The loop resistance limits the maximum power through the voltage drop it causes: at 600 mA current and 6 Ω cable resistance, the drop is (U = R·I) 3.6 V, so from the perhaps 48 V you start with at the PSE you've got 44.4 V left at the PD. Multiplied by 0.6 A, that's 26.6 W delivered, with 3.6 V × 0.6 A ≈ 2.2 W lost in the cable. (Note that the 25.5 W maximum of 802.3at is the worst case.)
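The arithmetic above can be laid out step by step; the 48 V starting voltage and 6 Ω cable resistance are the same illustrative values used in the answer, not normative limits:

```python
V_PSE = 48.0    # voltage at the power sourcing equipment (V), illustrative
I = 0.600       # 802.3at maximum current (A)
R_LOOP = 6.0    # example cable loop resistance (ohm)

v_drop = I * R_LOOP        # voltage lost across the cable: 3.6 V
v_pd = V_PSE - v_drop      # voltage left at the powered device: 44.4 V
p_pd = v_pd * I            # power delivered to the device: ~26.6 W
p_cable = v_drop * I       # power dissipated as heat in the cable: ~2.2 W

print(f"drop {v_drop:.1f} V, PD gets {p_pd:.1f} W, cable burns {p_cable:.1f} W")
```

Halving the loop resistance (a thicker cable) halves both the voltage drop and the cable loss at the same current, which is exactly why 802.3bt had to tighten the loop-resistance limit to 6.25 Ω before raising the current.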