Dell R910 with Integrated PERC H700 Adapter


I am in the process of designing an architecture based around a single Dell R910 server running Windows Server 2008 Enterprise.

I would like the server to have 8 RAID1 pairs of spinning disks, so I intend to implement:

Dell R910 Server
Integrated PERC H700 Adapter with 1 SAS expander on each SAS connector (so 8 expanders in total)
7 RAID1 pairs of 143GB 15K HDD, each paired on one connector using an expander
1 RAID1 pair of 600GB 10K HDD, paired on the remaining connector using an expander

My main concern is not to introduce bottlenecks in this architecture, and I have the following questions.

  1. Will the PERC H700 Adapter act as a bottleneck for disk access?
  2. Will using SAS expanders for each RAID1 pair cause a bottleneck or would this be as fast as pairing disks directly attached to the SAS connectors?
  3. Can I mix the disks, as long as the disks in each RAID1 pair are the same? I assume so.
  4. Can anyone recommend any single-to-double SAS Expanders that are known to function well with the H700?

Cheers

Alex

Best Answer

  1. No, the PERC H700 Adapter will not be a bottleneck for 16 spinning rust disks.
  2. You want the members of each RAID-1 pair on different channels and expanders, in order to increase reliability. That way a bad cable, channel, or expander doesn't take the whole RAID-1 set offline. A 15K spinning rust disk can only manage about 2 Gbps at best on sequential reads, so you can put roughly three such disks per 6 Gbps channel (see the rough calculation after this list). Often you can fit many more than three, because only backups really do streaming sequential reads or writes. Real workloads have a lot of random IO, which brings even a 15K disk down to just a few MB per second of throughput.
  3. Yes, you can mix disks, but why? Also, why aren't you using RAID-10 instead of a bunch of separate RAID-1 arrays?
  4. Unfortunately no, but any standards-compliant SAS Expander will work.
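
As a rough sanity check on point 2, here is a back-of-envelope sketch. The per-lane and per-disk throughput figures are assumptions on my part (typical ballpark numbers, not vendor specs or benchmarks):

```python
# Rough estimate: how many 15K disks a single 6 Gbps SAS 2.0 lane can feed
# before the lane, rather than the disks, becomes the bottleneck.
# All per-device figures below are assumed ballpark values.

SAS_LANE_GBPS = 6.0          # raw line rate of one SAS 2.0 lane
ENCODING_OVERHEAD = 8 / 10   # 8b/10b encoding leaves ~80% of the raw rate usable
DISK_SEQ_MBPS = 200          # assumed sequential throughput of one 15K SAS disk
DISK_RANDOM_MBPS = 2         # assumed throughput of the same disk under heavy random IO

usable_mbps = SAS_LANE_GBPS * ENCODING_OVERHEAD * 1000 / 8  # ~600 MB/s per lane

print(f"Usable lane bandwidth:            ~{usable_mbps:.0f} MB/s")
print(f"Disks per lane (pure sequential): ~{usable_mbps / DISK_SEQ_MBPS:.0f}")
print(f"Disks per lane (random IO):       ~{usable_mbps / DISK_RANDOM_MBPS:.0f}")
```

Under pure sequential load you saturate a lane with about three disks; under random IO you would need hundreds of spindles before the lane itself mattered, which is why the controller and expanders are not the bottleneck here.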

The real suggestion I would make is this: unless you are running the Large Hadron Collider, aggregate disk bandwidth generally doesn't matter; IOPS are what matter for the overwhelming majority of workloads. Stop trying to make spinning rust disks fast - they aren't. Disk is the new tape.

If you need performance, most workloads need IOPS more than bandwidth or capacity. Buy your Dell server with the cheapest SATA drives (to get the carriers), then replace those cheap SATA drives with the smallest number of Intel 500-series SSDs that meets your capacity needs. Dell's SSD offerings are terribly overpriced compared with Intel SSDs from, say, NewEgg, even though the Intels perform better and are more reliable than whatever Dell is shipping for SSDs (Samsung?).

Make one big RAID-5 array of SSDs. Even just 3 modern MLC SSDs in RAID-5 will absolutely destroy 16 15k spinning rust disks in terms of IOPS, by a factor of 10x or more. Sequential throughput is a non-issue for most applications, but the SSDs will also be 2x faster than spinning disks in that regard. Use large capacity 7.2k SATA disks for backup media or for archiving cold data. You'll spend less money and use less power with SSDs.
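
To put some rough numbers behind that "10x or more" claim, here is a crude comparison. The per-device IOPS figures are my own assumptions for drives of that era, and the aggregation ignores RAID write penalties and controller limits, so treat the output as order-of-magnitude only:

```python
# Illustrative random-IOPS comparison: a small MLC SSD RAID-5 set versus
# sixteen 15K spinning disks. Per-device figures are assumed, not measured.

HDD_15K_IOPS = 180     # assumed random IOPS for one 15K SAS disk
SSD_MLC_IOPS = 20_000  # assumed random IOPS for one consumer/enterprise MLC SSD

hdd_count, ssd_count = 16, 3

# Crude aggregate: sum of member IOPS (ignores RAID-5 write penalty,
# cache effects, and controller ceilings).
hdd_array_iops = hdd_count * HDD_15K_IOPS
ssd_array_iops = ssd_count * SSD_MLC_IOPS

print(f"16 x 15K HDD array: ~{hdd_array_iops:,} IOPS")
print(f" 3 x MLC SSD RAID-5: ~{ssd_array_iops:,} IOPS")
print(f"SSD advantage:       ~{ssd_array_iops / hdd_array_iops:.0f}x")
```

Even with generous numbers for the spindles, three SSDs come out an order of magnitude or two ahead on random IO.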

Resistance to SSDs on reliability grounds is largely FUD from conservative storage admins and SAN vendors who love their wasteful million-dollar EMC arrays. Recent "enterprise MLC" SSDs are at least as reliable as mechanical disks, and probably much more reliable (time will tell). Wear leveling makes write lifetime a non-issue, even in the server space. Your biggest worry is firmware bugs rather than hardware failure, which is why I suggest going with Intel SSDs.
