SSD hardware RAID as local VMware datastore

Tags: raid, ssd, storage, vmware-esx

I plan to buy a new ESXi vSphere server with an LSI MegaRAID controller and 8 x Intel S3520 SSDs.

  • The hypervisor should boot from a USB flash drive.
  • Hardware RAID10 containing the 8 x Intel S3520 SSDs (see the sketch after this list).
  • The RAID10 set should be formatted as the datastore containing the VMs.
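For reference, on an LSI MegaRAID controller such a RAID10 set would typically be created with the storcli utility along these lines (a rough sketch; the controller number /c0, the enclosure:slot range 252:0-7, and the strip size are placeholders to adjust to the actual hardware):

    # Create a RAID10 virtual drive from 8 physical drives (4 mirrored pairs)
    storcli64 /c0 add vd type=raid10 drives=252:0-7 pdperarray=2 strip=64
    # Verify the new virtual drive
    storcli64 /c0/vall show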

There are VMware KB articles that mention how to deal with SSDs: Enabling the SSD option on SSD based disks/LUNs that are not detected as SSD by default (2013188)
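That KB boils down to an esxcli claim rule; a minimal sketch, assuming the device identifier is naa.xxxxxxxx (replace with the actual LUN ID from the device list):

    # Check whether ESXi already detects the device as SSD ("Is SSD: true")
    esxcli storage core device list -d naa.xxxxxxxx | grep "Is SSD"
    # Tag the device as SSD via a SATP claim rule (per KB 2013188)
    esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.xxxxxxxx --option "enable_ssd"
    # Reclaim the device so the rule takes effect (or reboot the host)
    esxcli storage core claiming reclaim -d naa.xxxxxxxx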

and also that 4K native and 512e drives (SSDs) are not supported: FAQ: Support statement for 512e and 4K Native drives for VMware vSphere and VSAN (2091600)

Should I use SSDs in a hardware RAID?

EDIT: The main question: if I create a hardware RAID with these SSDs, will the problems mentioned in KB 2091600…:

First, the sector size of virtual disks exposed to the Guest OS of virtual machines is still 512n. For some guest applications, such as MS Exchange, the guest will create an I/O workload that depends on what drive type is exposed to the guest. Because the guest continues to see a traditional 512 sector drive (512n), the guest OS does not make any attempt to generate 4KB-aligned I/O. This may result in non-optimal performance of Exchange workloads on top of 512e drives.

Second, internal I/O generated by ESXi is not 4KB-aligned and thus not optimized for 512e drives. For example, the VMFS snapshot file format is not optimized for 512e drives and can, in some cases, cause a severe negative performance impact. The same applies for VMFS locking and ATS (atomic-test-and-set) operations.

…still occur? Is this RAID set an "external storage array" as implied below:

This article applies to both HDD and SSD Direct attached drives. This does not apply to external storage arrays as long as LUNs presented to ESXi initiators use 512 logical sector size (READ_CAPACITY should report 512 logical block).
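For what it's worth, the first point (the guest still seeing 512n) can at least be verified from inside a VM; on a Linux guest, for example, the kernel exposes the sector sizes it sees via sysfs (assuming the virtual disk is sda):

    # Logical and physical sector size of the virtual disk as seen by the guest
    cat /sys/block/sda/queue/logical_block_size     # typically reports 512 for a VMDK
    cat /sys/block/sda/queue/physical_block_size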

Best Answer

Yes, you should definitely use SSDs in a hardware RAID: this is your production storage and it needs to be redundant, because if you lose storage, you lose everything.

Depending on your workload and IOPS requirements, I would strongly recommend considering SSD RAID5 instead of RAID10. Flash RAID10 is a great waste of usable capacity: with eight drives, RAID10 leaves you the capacity of four, while RAID5 leaves you seven (for example, eight 960 GB S3520s yield roughly 3.8 TB usable in RAID10 versus roughly 6.7 TB in RAID5). RAID5 will still give you the same read speed as RAID10 (and reads are typically 70-90% of I/O in a virtualized infrastructure), plus roughly the write performance of a single SSD, which at around 20k-30k IOPS is still quite good: enough for around 40-60 generic virtual machines, more or less depending on your environment. The main reason people dislike RAID5 is the long rebuild time on spindles, but since you have an all-flash setup with rather small drives, the rebuild time is insignificant.

Unfortunately, a hardware RAID controller will most probably present a virtual disk with the same block size as the drives behind it, so you may need 512-byte-sector disks to be OK.
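If you want to sanity-check this before formatting the datastore, the MegaRAID storcli utility shows the sector size of the physical drives, and on newer ESXi builds (6.5 and later) esxcli can show what block size the exported LUN actually reports to the host; a rough sketch, assuming controller /c0:

    # Sector size of each physical drive behind controller 0 (SeSz column)
    storcli64 /c0/eall/sall show
    # What the exported virtual drive reports to ESXi (logical/physical block size)
    esxcli storage core device capacity list

If the reported logical block size is 512, you are on the safe side of the support statement quoted above; if it shows 4096, the 4K native restriction applies.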