CPU Power Management – Impact on Server Performance

central-processing-unit, electrical-power

I was doing some simple hand benchmarking on our (live) database server during off-peak hours, and I noticed that the same queries produced somewhat erratic timings from run to run.

I had enabled the "Balanced" power saving plan on all our servers a while ago, because I figured they were nowhere near high utilization and this way we could save some energy.

I had assumed this would have no significant, measurable impact on performance. However, if CPU power saving features are impacting typical performance — particularly on the shared database server — then I am not sure it's worth it!

I was a little surprised that our web tier, even at 35-40% load, is down-clocking from 2.8 GHz @ 1.25 V to 2.0 GHz @ 1.15 V.
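For what it's worth, you can watch this yourself by polling the effective clock next to the utilization. A minimal sketch (my own, not from the original benchmarking) in Python, assuming the third-party psutil package is installed:

    # Poll CPU utilization and the effective clock side by side to watch
    # down-clocking in action. Requires: pip install psutil
    import psutil

    for _ in range(30):                        # sample for ~30 seconds
        load = psutil.cpu_percent(interval=1)  # % utilization over 1 s
        freq = psutil.cpu_freq()               # current/min/max in MHz
        print(f"load={load:5.1f}%  clock={freq.current:7.1f} MHz "
              f"(max {freq.max:.0f} MHz)")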

I fully expect the down-clocking to save power, but that load level seems high enough to me that it should be kicking up to full clock speed.

Our 8-CPU database server handles a ton of traffic but has extremely low CPU utilization, just due to the nature of our SQL workload: lots of queries, but really simple ones. It's usually sitting at 10% or less, so I expect it was down-clocking even further than the web tier above. Anyway, when I switched power management to "High performance", my simple SQL query benchmark improved by about 20% and became very consistent from run to run.
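To make that kind of hand benchmark repeatable, I'd time the same statement many times and look at the spread, not just one number. A minimal harness sketch, where run_query is a hypothetical stand-in for executing your actual SQL statement (e.g. via pyodbc):

    # Repeat-timing harness. The interesting output is the run-to-run
    # spread (stdev, min/max), which is what power management perturbs.
    import statistics
    import time

    def run_query():
        # Hypothetical placeholder: swap in a real call such as
        # cursor.execute("SELECT ...") followed by cursor.fetchall()
        sum(i * i for i in range(100_000))

    def benchmark(fn, runs=50):
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            fn()
            times.append((time.perf_counter() - start) * 1000.0)  # ms
        print(f"mean {statistics.mean(times):.2f} ms  "
              f"stdev {statistics.stdev(times):.2f} ms  "
              f"min {min(times):.2f} ms  max {max(times):.2f} ms")

    benchmark(run_query)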

I guess I was thinking that power management on lightly loaded servers was win-win — no performance loss, and significant power savings because the CPU is commonly the #1 or #2 consumer of power in most servers. That does not appear to be the case; you will give up some performance with CPU power management enabled, unless your server is always under so much load that the power management has effectively turned itself off. This result surprised me.
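If you want to A/B this on Windows without clicking through the control panel, the built-in powercfg tool can switch plans from a script. A small sketch (the SCHEME_* aliases are powercfg's own; note they are named for how much power they save, so SCHEME_MIN is "High performance"):

    # Flip Windows power plans around a benchmark run via powercfg.
    # Needs to run elevated. SCHEME_MIN = High performance,
    # SCHEME_BALANCED = Balanced.
    import subprocess

    def set_plan(alias):
        subprocess.run(["powercfg", "/setactive", alias], check=True)

    set_plan("SCHEME_MIN")         # High performance for the measurement
    # ... run the benchmark here ...
    set_plan("SCHEME_BALANCED")    # restore Balanced afterwards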

Does anyone have other experience or recommendations to share on CPU power management for servers? Is it something you turn on or off on your servers? Have you measured how much power you're saving? Have you benchmarked with it on and off?

Best Answer

I'm not sure about servers, but the current thinking in embedded devices is not to bother with the steps between low-power and flat-out, because the extra time spent at intermediate speeds eats into the power savings. Instead, they run at low power until they see any real amount of CPU load, then flip over to the fastest possible speed so they can finish the job and get back to idling at low power. This strategy is often called "race to idle".
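In other words, a two-state governor rather than a stepped one. A toy illustration of the difference, with made-up frequencies and threshold (my own sketch, not taken from any real governor):

    # "Race to idle": jump straight to full speed under any real load,
    # drop back to the floor when idle; contrast with stepped scaling.
    F_MIN, F_MAX = 2000, 2800      # MHz, hypothetical
    THRESHOLD = 5.0                # % load considered "real" work

    def race_to_idle(load_pct):
        return F_MAX if load_pct > THRESHOLD else F_MIN

    def stepped(load_pct):
        return int(F_MIN + (F_MAX - F_MIN) * min(load_pct, 100.0) / 100.0)

    for load in (0, 3, 10, 40, 90):
        print(f"load {load:3d}%: race-to-idle {race_to_idle(load)} MHz, "
              f"stepped {stepped(load)} MHz")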