Web-server – Servers – Buying New vs Buying Second-hand

Tags: architecture, web-server, windows-server-2008

We're currently in the process of adding additional servers to our website. We have a pretty simple topology planned: a Firewall/Router Server in front of a Web Application Server and Database Server.

Here's a simple (and technically incorrect) diagram that I used in a previous question to illustrate what I mean:

Diagram

We're now wondering about the specs of our two new machines (the Web App and Firewall servers) and whether we can get away with buying a couple of old servers. (Note: Both machines will be running Windows Server 2008 R2.)

We're not too concerned about our Firewall/Router server as we're pretty sure it won't be taxed too heavily, but we are interested in our Web App server. I realise that answering this type of question is really difficult without a ton of specifics on users, bandwidth, concurrent sessions, etc., so I just want to focus on the general wisdom of buying old versus new.

I had originally specced a new Dell PowerEdge R300 (1U rack) for our company. In short, because we're going to be caching as much data as possible, I focussed on processor speed and memory:

  • Quad-Core Intel Xeon X3323 2.5GHz (2×3MB Cache), 1333MHz FSB
  • 16GB DDR2 667MHz

But when I was looking for a cheap second-hand machine for our Firewall/Router, I came across several machines that made our engineer ask a very reasonable question: if we stuck a boatload of RAM in one of these, wouldn't it do for the Web App Server and save us a ton of money in the process?

For example, what about a second-hand machine with the following specs:

  • 2× Dual-Core AMD Opteron 2218 2.6GHz (2MB Cache), 1000MHz HT
  • 16GB DDR2 667MHz

Would it really be comparable with the more expensive (new) server above?

Our engineer postulated that the reason companies upgrade their servers to newer processors is often just to reduce their power costs, and that a 2.6GHz processor is still a 2.6GHz processor, no matter when it was made.
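To be fair, I can see why the theory is tempting on paper. Here's the back-of-the-envelope aggregate-clock arithmetic (cores × GHz) for the two machines; it deliberately ignores how much work each core does per cycle, which I suspect is exactly the catch:

```python
# Naive aggregate clock: cores x GHz. This ignores per-cycle efficiency
# entirely -- it only measures how many cycles per second the box ticks.
xeon_aggregate = 4 * 2.5        # new box: quad-core Xeon X3323 @ 2.5GHz
opteron_aggregate = 2 * 2 * 2.6 # used box: 2x dual-core Opteron 2218 @ 2.6GHz

print(f"Xeon aggregate GHz:    {xeon_aggregate:.1f}")     # 10.0
print(f"Opteron aggregate GHz: {opteron_aggregate:.1f}")  # 10.4
```

By this (admittedly crude) measure the second-hand box actually comes out slightly ahead, which is presumably why the raw spec sheets look so comparable.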

Benchmarks on various sites don't really support this theory, but I was wondering what server admins here thought.

Thanks for any advice.

Best Answer

First off, a 2.6GHz processor is not a 2.6GHz processor if the two chips are from different generations. You're correct to think twice about that. This has been true for a long time now (at least since the 486/Pentium days), so it's important to point out to your engineer just how wrong the Megahertz Myth is, especially given the massive performance improvements i7-based chips offer over Core/Core2-based ones at the same clock speed.
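A rough way to see why clock speed alone misleads: throughput is closer to cores × clock × work-per-cycle (IPC), and IPC is where newer microarchitectures win. The IPC figures below are illustrative assumptions for the sake of the arithmetic, not measured values for these parts:

```python
# Sketch of why "2.6GHz == 2.6GHz" breaks down across generations.
# The IPC values are ASSUMED for illustration, not benchmarked numbers.
def relative_throughput(cores: int, ghz: float, ipc: float) -> float:
    """Naive whole-box throughput estimate: cores x clock x IPC."""
    return cores * ghz * ipc

# Assume the newer Xeon core retires ~50% more work per cycle than the
# older Opteron core at the same clock (a made-up but plausible ratio).
xeon_x3323 = relative_throughput(cores=4, ghz=2.5, ipc=1.5)    # new box
opteron_2218 = relative_throughput(cores=4, ghz=2.6, ipc=1.0)  # used box

print(f"Xeon estimate:    {xeon_x3323:.1f}")    # 15.0
print(f"Opteron estimate: {opteron_2218:.1f}")  # 10.4
```

On raw aggregate clock the two boxes look nearly identical; fold in any per-cycle advantage for the newer core and the gap opens up fast. That's why published benchmarks, not GHz figures, are the thing to compare.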

That being said, that's not my first concern with this plan. My first concern is that used servers will have a significantly shorter operational life than a new server, since you don't know how they were previously used, under what conditions, or what'll happen to them in transit on the way to you. Generally speaking, for production systems, reliability should always take precedence over performance, since it'll cost you way more to fix a dead production server than to upgrade a server that's too slow.

My feeling is that the price difference would have to be very, very substantial to even want to look at doing this, and that if you're buying used, you'll want to redundantly cluster them just to be safe.