RAID10 (16 x 256GB SSDs) in PowerEdge R720 for Build Server

Tags: raid, ssd

I'm trying to put together specs for a top-of-the-line Dell R720 (running Windows Server 2008 R2) build server (C#), and I'm unsure what the best hard drive / RAID setup would be.

Our space requirements are:

  • C (OS) – 100GB
  • E (SOURCE) – 600GB
  • F (OUTPUT) – 850GB

Utilizing the Dell R720, I was going to go for:

  • 16 x 2.5" drive bays
  • 16 x 256GB Crucial m4 SSDs
  • H710P hardware RAID controller
  • 2 x SSDs in RAID1 (C: drive)
  • 6 x SSDs in RAID10 (E: drive)
  • 8 x SSDs in RAID10 (F: drive; rough capacity check below)
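
As a rough capacity check on the layout above (assuming RAID1/RAID10 usable space is about half the raw capacity, before formatting overhead):

  • C: 2 x 256GB in RAID1 ≈ 256GB usable (need 100GB)
  • E: 6 x 256GB in RAID10 ≈ 768GB usable (need 600GB, roughly 78% full)
  • F: 8 x 256GB in RAID10 ≈ 1,024GB usable (need 850GB, roughly 83% full)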

Without investing in ioDrives, is the setup above close to ideal for the fastest write speeds possible? I understand the risks we're taking with non-OEM, non-enterprise SSDs, but my goal here isn't stability; it's to attain the best write performance possible for the F: drive.

—EDIT—

Some points @ewwwhite raised:

Do you know what your realistic I/O needs are?

Unfortunately I do not, other than:

  1. Copy data from the network to the E: drive as fast as possible (10GbE network, 6Gbps SATA III)
  2. Read data from the E: drive into the Visual Studio .NET (2008) compiler
  3. Write data sequentially (as 4GB+ binaries) to the F: drive (a minimal write-throughput sketch follows this list)
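
The big unknown for step 3 is raw sequential write speed to the OUTPUT volume, so a crude streaming-write test would at least give a baseline number. Below is a minimal C# sketch under assumed parameters (the F:\io-probe.bin path, 4GB test size, and 1MB chunk size are made up for illustration); a dedicated benchmark tool such as IOMeter would give a more complete picture.

```csharp
using System;
using System.Diagnostics;
using System.IO;

class SequentialWriteProbe
{
    static void Main()
    {
        // Hypothetical scratch file on the OUTPUT volume; adjust to your environment.
        const string target = @"F:\io-probe.bin";
        const long totalBytes = 4L * 1024 * 1024 * 1024;  // 4GB, matching the binary size above
        const int chunkSize = 1024 * 1024;                 // 1MB sequential writes

        byte[] chunk = new byte[chunkSize];
        new Random().NextBytes(chunk);

        Stopwatch sw = Stopwatch.StartNew();
        // WriteThrough pushes data past the OS cache so the result reflects the array, not RAM.
        using (FileStream fs = new FileStream(target, FileMode.Create, FileAccess.Write,
                                              FileShare.None, chunkSize, FileOptions.WriteThrough))
        {
            for (long written = 0; written < totalBytes; written += chunkSize)
                fs.Write(chunk, 0, chunkSize);
        }
        sw.Stop();

        double mbPerSec = (totalBytes / (1024.0 * 1024.0)) / sw.Elapsed.TotalSeconds;
        Console.WriteLine("Sequential write: {0:F0} MB/s", mbPerSec);
        File.Delete(target);
    }
}
```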

Mostly random read/writes? Sequential reads/writes?

I'm not familiar enough with how linearly our C# build process reads and writes, but I believe it is more sequential than random.
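
One low-effort way to check this would be to watch the Windows LogicalDisk counters for F: while a build is actually running; large average write sizes point at streaming/sequential I/O, small ones at random I/O. A minimal C# sketch (the F: instance and the 30-second sampling window are just assumptions for illustration):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class IoPatternProbe
{
    static void Main()
    {
        // LogicalDisk counters for the OUTPUT volume; run this while a build is writing to F:.
        PerformanceCounter avgWriteSize = new PerformanceCounter("LogicalDisk", "Avg. Disk Bytes/Write", "F:");
        PerformanceCounter writeRate    = new PerformanceCounter("LogicalDisk", "Disk Write Bytes/sec", "F:");

        // Rate/average counters need a priming read before they return meaningful values.
        avgWriteSize.NextValue();
        writeRate.NextValue();

        for (int i = 0; i < 30; i++)  // sample once a second for ~30 seconds
        {
            Thread.Sleep(1000);
            Console.WriteLine("avg write size: {0,9:F0} bytes   write rate: {1,7:F1} MB/s",
                              avgWriteSize.NextValue(),
                              writeRate.NextValue() / (1024.0 * 1024.0));
        }
    }
}
```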

How much room and storage space do you need?

100GB for the OS, 600GB available for SOURCE, 850GB available for OUTPUT. About 85% utilization for the SOURCE and OUTPUT volumes.

Best Answer

Most RAID controllers have a cap on how many solid-state drives they can usefully drive at once; there is a point of diminishing returns. For HP Smart Array P410 controllers, it's about 6 x SSDs in RAID0 before the controller itself becomes the bottleneck.

Do you know what your realistic I/O needs are? What is the working set of data and the expected data output size? What would the I/O pattern be? Mostly random read/writes? Sequential reads/writes? How much room and storage space do you need? This matters a bit in the design.

I wouldn't rule out enterprise drives just yet. There are real differences between SSD offerings, and some devices are capable of far greater throughput than others. For example, a single ZeusRAM SSD has only 8GB of capacity, but it can sustain 100,000 IOPS at 800+MB/sec with microsecond latencies and won't wear out. Or reconsider a FusionIO drive: it's on par with the price of the solution you're proposing and would be a far more efficient approach.

Can you help us understand what you realistically need?