RAID – Addressing High Failure Rate of Large Drives

hard-drive, hardware, raid

I recently deployed a server with 5x 1TB drives (I won't mention the brand, but it was one of the big two). I was initially warned against getting large-capacity drives: a friend advised me that they have a very low MTBF and that I would be better off getting more, smaller-capacity drives, as they are not 'being pushed to the limit' in terms of what the technology can handle.

Since then, three of the five disks have failed. Thankfully I was able to replace them and rebuild the array before the next disk failed, but it has got me very, very worried.

What are your thoughts? Did I just get a bad batch? Or are newer, higher-capacity disks more likely to fail than tried-and-tested ones?

Best Answer

You probably got a bad batch. I am nervous about deploying arrays built from disks from the same batch for exactly that reason -- they are likely to have similar lifespans, which makes the window between one failure and a completed rebuild potentially very exciting (see the rough sketch below).
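
As a rough illustration, here is a minimal Python sketch of the rebuild-window risk, assuming independent failures at a constant annualized failure rate (AFR). The 3% AFR and 24-hour rebuild time are made-up numbers, and a bad batch breaks the independence assumption, which is precisely why same-batch arrays are worrying.

    # Back-of-envelope estimate: probability that at least one of the
    # remaining drives fails while the array is rebuilding, assuming
    # independent failures at a constant annualized failure rate (AFR).
    # The AFR and rebuild time below are illustrative assumptions only;
    # with a bad batch, failures are correlated and the real risk is higher.

    def rebuild_risk(remaining_drives: int, afr: float, rebuild_hours: float) -> float:
        """Probability that >= 1 of the remaining drives fails during the rebuild."""
        hours_per_year = 24 * 365
        p_single = 1 - (1 - afr) ** (rebuild_hours / hours_per_year)
        return 1 - (1 - p_single) ** remaining_drives

    # Example: 4 surviving drives, 3% AFR, 24-hour rebuild.
    print(f"{rebuild_risk(4, 0.03, 24):.4%}")  # roughly 0.03% under these assumptions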

It isn't impossible that there is a design defect in the drives; that has definitely happened before. However, if something is genuinely wrong with a model, the Internet is usually full of complaints about it, as opposed to the usual background noise you'll find about anything.