Optimal configuration with Synology as HA iSCSI backend for vSphere

high-availability, iscsi, raid, synology, vmware-vsphere

I'm looking into purchasing a new storage backend for our VMware vSphere Essentials package. We have 3 ESXi hosts, currently mainly using DAS. The goal is to upgrade to Essentials Plus this year, to take advantage of its High Availability and other integrated availability features.

As a long-time Synology user, I was thrilled by today's announcement of the new RS815+ models, with 400 MB/s throughput and support for server-side HA and SSD caching. I intend to deploy one as the iSCSI backend for our vSphere datastore. However, I'm not sure what is required to achieve the highest possible level of redundancy.

Assuming a single-server setup, the obvious options are:

  1. 3x HDD in RAID-5 with 1x SSD as cache
  2. 3x SSD in RAID-5 with 1x SSD as cache
  3. 4x SSD in RAID-10
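
For a rough sense of how these three options compare on usable capacity, here is a minimal sketch. The drive sizes are assumptions purely for illustration (4 TB HDDs, 1 TB SSDs); substitute your own:

```python
# Rough usable-capacity comparison of the three candidate layouts.
# Drive sizes below are illustrative assumptions, not recommendations.
HDD_TB = 4.0   # assumed HDD size
SSD_TB = 1.0   # assumed SSD size

# RAID-5 over n drives yields (n - 1) drives of usable space;
# RAID-10 over n drives yields n / 2. Cache SSDs add no capacity.
option1 = (3 - 1) * HDD_TB    # 3x HDD RAID-5 + SSD cache
option2 = (3 - 1) * SSD_TB    # 3x SSD RAID-5 + SSD cache
option3 = (4 / 2) * SSD_TB    # 4x SSD RAID-10

print(f"Option 1: {option1:.1f} TB usable, tolerates 1 drive failure")
print(f"Option 2: {option2:.1f} TB usable, tolerates 1 drive failure")
print(f"Option 3: {option3:.1f} TB usable, tolerates 1 failure "
      "(2 if they hit different mirror pairs)")
```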

The data throughput is capped at roughly 400 MB/s by the four trunked Gbit LAN ports (a quick sanity check of that figure follows the questions below). This brings me to the main questions, which are all directly related:

  • Am I right in assuming that RAID-5 is still inadvisable for present-day SSDs, making option 3 the best choice for durability? The current Samsung 850 Pro series carries a 10-year warranty, so it would appear the old wear concerns are no longer relevant.
  • Am I right in assuming that option 1 will perform worst of these setups? Being the storage backend for 10+ servers means continuous, unpredictable random access, which would cause frequent cache misses and make the HDD RAID array the bottleneck instead of the network.
  • Is there any advantage of option 2 over 3, or vice versa? I'd think both will perform identically in real-world scenarios.
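
As a quick sanity check on the 400 MB/s figure mentioned above: four trunked Gbit ports give 4 Gbit/s of raw bandwidth, and protocol overhead eats the rest. A minimal sketch, with the overhead factor as an assumption:

```python
# Back-of-the-envelope check of the ~400 MB/s figure for 4x trunked 1 GbE.
ports = 4
gbit_per_port = 1.0                            # line rate per port, in Gbit/s
raw_mb_s = ports * gbit_per_port * 1000 / 8    # 4 Gbit/s = 500 MB/s raw
overhead = 0.20                                # assumed ~20% Ethernet/IP/iSCSI overhead
print(f"Raw:    {raw_mb_s:.0f} MB/s")          # 500 MB/s
print(f"Usable: {raw_mb_s * (1 - overhead):.0f} MB/s")  # ~400 MB/s
```

Worth noting: link aggregation balances traffic per flow, so a single iSCSI session typically rides one physical link; reaching the aggregate figure in practice generally requires iSCSI multipathing (MPIO) across the ports rather than plain trunking.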

To complicate matters, I'm considering shelling out for two RackStations, to take advantage of the HA features of DSM. This means extra network and disk load for synchronization. Would I need to set these servers up simultaneously, or could I add the second one months later (budget constraints…)? And would it need the exact same storage config, or could I, for example, fill it with much cheaper HDDs, since it will only run as a passive mirror 99.99% of the time anyway?

Or is redundancy at the Synology level not even required, considering that vSphere Essentials Plus provides vSphere Replication, which should allow me to mirror to an NFS datastore on one of our other, much slower storage servers, since it's purely a fallback anyway?

Best Answer

To answer your questions:

  • RAID-5 is not advisable, period. A single unrecoverable read error (URE) during a rebuild kills your entire array; the sketch after this list puts a rough number on that risk. This is more of an issue with HDDs than SSDs, but it's still a consideration. RAID-10 is generally the best option, but RAID-6 (dual parity) can also do quite well.
  • You're comparing Jaguars to Hyundais there. SSDs have better random access but are vastly more expensive per GiB. You need a good idea of your current I/O usage to get any useful indication of what you should go with. Enterprise-class HDDs aren't exactly slouches, and reading data from one isn't going to bring your server to its knees.
  • The performance difference between options 2 and 3 would depend on your current I/O usage. If your reads are completely random, option 3 would be better.
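
To put a rough number on the rebuild risk from the first bullet, here is a minimal sketch. The URE rate and drive size are assumptions for illustration; consumer HDDs are commonly specified at one unrecoverable read error per 10^14 bits read:

```python
# Rough probability of hitting at least one unrecoverable read error (URE)
# while rebuilding a degraded RAID-5 array. All numbers are illustrative.
ure_per_bit = 1e-14   # assumed spec: 1 URE per 1e14 bits read (consumer HDD)
drive_tb = 4.0        # assumed drive size
drives_read = 2       # surviving drives read in full during a 3-drive RAID-5 rebuild

bits_read = drives_read * drive_tb * 1e12 * 8
p_ure = 1 - (1 - ure_per_bit) ** bits_read
print(f"P(URE during rebuild) ~= {p_ure:.0%}")   # ~47% with these numbers
```

RAID-6 fares much better here, because a URE hit during a single-drive rebuild can still be corrected from the second parity.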

If I were to suggest something to start with, not knowing your I/O usage, it would be to get the bay expansion and populate it with 5-6 reasonably sized HDDs in RAID-6 plus an SSD for caching; a rough capacity sketch follows.
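
For scale, a minimal sketch of what that suggestion yields, with the drive size again an assumption:

```python
# Usable capacity of the suggested RAID-6 layout (drive size is an assumption).
drives = 6
drive_tb = 4.0
usable_tb = (drives - 2) * drive_tb   # RAID-6 spends two drives' worth on parity
print(f"{drives}x {drive_tb:.0f} TB in RAID-6 -> {usable_tb:.0f} TB usable, "
      "survives any 2 simultaneous drive failures")
```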