SSDs in servers: best-practice redundancy

Tags: raid, redundancy, ssd

Some might say I'm an experienced infrastructure architect; I have been putting together server farms, storage subsystems and networks for a number of years in datacenter environments (specialising in virtualisation).

I have been building large SAS and SATA arrays (SAN, DAS, NAS and local storage) for ages, but I have not yet used SSDs for any of my datacenter storage. I have just one question that is making me lose sleep…

If using SSDs for performance, how do you provide redundancy, as in a standard RAID 1/10/5/50 setup, when an SSD's main failure cause is its limited write endurance?

I would assume that SSDs in a RAID 10, for example, would give me great performance; however, wouldn't all the drives in the array wear out at roughly the same time, since they experience roughly the same write load? That would mean I have the same ticking countdown to failure not just on one SSD, but on every drive in the array.

Am I missing something, or is there another best-practice method for providing redundancy to SSDs? Or is this a non-issue, since the endurance limit of an SSD is simply the end of its life and I cannot protect against that anyway?

Thanks

Tom

Best Answer

Take a look at: Consumer (or prosumer) SSD's vs. fast HDD in a server environment

In short, treat them like normal disks: RAID them. Don't worry about why an SSD fails, only that it may fail, and follow your RAID controller vendor's recommendations.
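To put numbers on the "ticking countdown" worry, here is a rough back-of-the-envelope sketch (the TBW rating and daily write figures are hypothetical, not from the question): divide the drive's rated endurance in terabytes written (TBW) by your observed daily write volume.

```python
def days_until_tbw(rated_tbw_tb: float, gb_written_per_day: float) -> float:
    """Rough days until a drive's rated write endurance is exhausted.

    rated_tbw_tb: vendor TBW (terabytes written) endurance rating.
    gb_written_per_day: observed host writes per day, in gigabytes.
    """
    # Convert the TB rating to GB, then divide by the daily write rate.
    return rated_tbw_tb * 1000 / gb_written_per_day

# Hypothetical example: a drive rated for 400 TBW that sees 50 GB of
# writes per day would take about 8000 days (~22 years) to hit its
# rated endurance.
print(round(days_until_tbw(400, 50)))  # 8000
```

For many server workloads the rated endurance horizon is far longer than the drive's useful service life, so controller, firmware or power failures are at least as likely to kill a drive first, which is exactly why the usual advice is to RAID SSDs like any other disk and monitor their SMART wear indicators.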
