RAID configuration for large NAS

files, network-attached-storage, raid

I'm thinking of building a NAS box with 24 1 TB disks, but I'm not sure what the best drive configuration is. I'm looking at using an Areca ARC-1280ML-2G controller and hanging all 24 drives off of it.

I'd like it all to be mounted as one volume, due to the type of data we're storing on it. One crazy idea we had was to configure six 4-disk RAID 5 volumes, then run software RAID 5 over those six volumes. That would mean any one volume could die and we'd still not lose data.
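For reference, a back-of-the-envelope sketch of the usable capacity this nested layout would give, compared with a single flat RAID 5 and with mirrored pairs (RAID 10). This assumes decimal terabytes and ignores filesystem and controller overhead:

```python
# Usable-capacity math for a 24 x 1 TB array (sketch; ignores overhead).

DISKS = 24
DISK_TB = 1

def raid5_usable(n_members, member_tb):
    """RAID 5 keeps n-1 members' worth of data; one member's worth is parity."""
    return (n_members - 1) * member_tb

# Six 4-disk hardware RAID 5 volumes, each (4-1) x 1 TB = 3 TB usable...
inner = raid5_usable(4, DISK_TB)
# ...then software RAID 5 across those six volumes: (6-1) x 3 TB.
nested = raid5_usable(6, inner)

flat_raid5 = raid5_usable(DISKS, DISK_TB)   # one big, fragile RAID 5
raid10 = DISKS // 2 * DISK_TB               # 12 mirrored pairs, striped

print(nested, flat_raid5, raid10)           # 15 23 12
```

So the nested scheme trades 9 TB of the flat RAID 5's capacity for the ability to lose a whole inner volume, while RAID 10 trades capacity for much simpler rebuilds.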

I should note that this is an R&D project; we've got an upcoming application that will need tens of terabytes of fast, highly available storage. But for the initial R&D phase we can accept some risk.

What is the best solution for this type of configuration? With 24 1 TB disks, it's likely that more than one will fail at the same time (or within the time it takes to rebuild the volume after the first failure), so I'm having trouble finding a good solution.

Best Answer

There is already a RAID level for what you want; it's called RAID 10.

While the MTBF for professional and consumer level drives has increased by an order of magnitude in recent years, the uncorrectable read error rate has stayed relatively constant. For consumer SATA drives this rate is typically specified as one error per 10^14 bits read, which works out to roughly one error per 12 TB read (source).
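A quick sanity check on that "one error per ~12 TB" figure, using the quoted consumer-SATA spec of one unrecoverable error per 10^14 bits read:

```python
# Sketch: convert the quoted URE spec into TB-between-errors, and the
# expected error count over one full pass of a 24 TB array.
URE_RATE = 1e-14                 # unrecoverable errors per bit read
TB_BITS = 10**12 * 8             # bits in one decimal terabyte

tb_per_error = 1 / (URE_RATE * TB_BITS)        # TB read between errors
expected_errors = 24 * TB_BITS * URE_RATE      # over one 24 TB scan

print(round(tb_per_error, 1), round(expected_errors, 2))   # 12.5 1.92
```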

So, for every full scan of your 24 TB array, statistically you will encounter about two single-bit errors. Each of those errors can trigger a RAID 5 rebuild, and worse, during the rebuild a second error will cause a double fault and lose the array.
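The rebuild risk can be made concrete. On a degraded 24-disk RAID 5, all 23 surviving disks must be read in full; a sketch of the chance that completes without a single URE, assuming independent errors at the rate cited above:

```python
import math

# Estimated probability that a degraded 24 x 1 TB RAID 5 rebuild reads all
# 23 surviving disks without one unrecoverable error (independence assumed).
URE_RATE = 1e-14                     # errors per bit read
BITS_READ = 23 * 10**12 * 8          # 23 surviving 1 TB disks, read in full

# (1 - p)^n is well approximated by exp(-n * p) for tiny p
p_clean = math.exp(-BITS_READ * URE_RATE)
print(round(p_clean, 3))             # ≈ 0.159: most rebuilds hit an error
```

In other words, on these numbers a flat RAID 5 of this size is more likely than not to fail its own rebuild, which is exactly why mirrored layouts like RAID 10 are attractive at this scale.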
