iSCSI SAN for vCenter Cluster

iscsi, storage-area-network, vmware-esxi, vmware-vcenter

I'm looking to set up a SAN for a VMware vCenter cluster of ESXi hosts. I'd like it to be the primary storage for VMs. There will eventually be up to 10 hosts in the cluster.

I'm building a test environment with FreeNAS installed on a PowerEdge R610. If it goes well, we may buy an R720 or something similar with more drive bays. I have a RAID-Z pool on the R610's drives with a 120 GB SSD cache. I plan to connect this server to a dedicated, isolated gigabit switch with jumbo frames enabled, and all the hosts will connect to that switch for iSCSI.

That's what I have so far.

  1. Is FreeNAS a good option for this?
  2. How do I set this up on the VMware side? Do I attach each host individually to the SAN via iSCSI, or do I add it directly in vCenter? I'd like to be able to load-balance VMs across hosts easily without having to copy VMDKs around, which I assume comes more or less for free since they will all live on the SAN anyway.

Any help, tips, things you've learned from experience would be greatly appreciated!

I should note that I have never set up a SAN or iSCSI before, so I'm treading into new waters.

EDIT
This post has helped me realize that I'm going to need to invest in some more high-end networking gear to get the performance I want. Time to re-evaluate!

Best Answer

The answer depends heavily on your VMware vSphere licensing tier, the applications and VMs you intend to run, and the amount of storage space and the performance profile you need.

Answer that first, as an organization that can afford to properly license ten ESXi hosts in a vSphere cluster should be prepared for a higher-end storage solution and networking backbone than what you're planning. (Ballpark price for the right licensing level for that many hosts is ~$100k US.)

I think the scope of this question is a bit too broad, since there are established design principles for VMware installations of all sizes. I'll try to summarize a few thoughts...

ZFS-specific notes:

  • RAIDZ is a poor choice for virtual machine use in just about every use case, and it won't give you the expansion flexibility you may need over time. Mirrors or triple mirrors are preferred for handling the random read/write I/O patterns of virtual machines (see the pool-layout sketch after this list). For ZFS best practices, check out Things Nobody Told You About ZFS.
  • SSD choice is important in ZFS-based solutions. There are two SSD roles: L2ARC (a read cache) and a dedicated ZIL/SLOG device (a synchronous-write log). The characteristics of the SSDs you'd use for each are different, and quality SAS SSDs suitable for ZFS caching run $1,500+ US.
  • FreeNAS is not a robust VMware target. It has a reputation for poor performance. If you're set on ZFS, consider something like NexentaStor, which can run on the same type of hardware, but is compatible with VMware's storage hardware acceleration (VAAI) and has some level of commercial support available.
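
As a rough illustration of the layout the first two bullets describe, here's a minimal sketch. The pool name and FreeBSD-style device names (da0 through da5) are placeholders you'd replace with your own, and on FreeNAS you would normally build this through the GUI rather than scripting it, so treat this as a picture of the vdev layout, not a recipe:

    import subprocess

    def zpool(*args):
        """Run a zpool command, failing loudly on error."""
        subprocess.run(["zpool", *args], check=True)

    # Striped mirrors (RAID 10 style) instead of RAIDZ, for random VM I/O
    zpool("create", "vmpool",
          "mirror", "da0", "da1",
          "mirror", "da2", "da3")

    # Write-optimized SSD as a dedicated log device (ZIL/SLOG) for sync writes
    zpool("add", "vmpool", "log", "da4")

    # Read-optimized SSD as an L2ARC read cache
    zpool("add", "vmpool", "cache", "da5")

Growing the pool later is just a matter of adding another mirror pair, which is the expansion flexibility the first bullet is referring to.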

VMware vSphere notes:

  • Identify which features you'll need in VMware: vMotion, Storage vMotion, High Availability (HA), Distributed Resource Scheduler (DRS), etc. are all helpful cluster-management features.
  • A quick licensing guide: the lowest VMware tier that provides vMotion is the Essentials Plus package at ~$5,000 US, and it only accommodates three 2-CPU servers. Pricing jumps considerably from there: $10k or more, scaling with the number of host servers.
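
To the original question of whether each host is attached individually: yes, the iSCSI target is configured per host, on each ESXi host's software iSCSI adapter, though you can script it across the cluster from vCenter. Once every host sees the same VMFS datastore, vMotion/DRS can move VMs between hosts without touching the VMDKs. Here's a hedged pyVmomi sketch of that per-host step; the vCenter address, credentials, and target IP are hypothetical, and it assumes the software iSCSI adapter is already enabled on each host:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Lab only: skip certificate validation; use proper certs in production
    context = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local",       # hypothetical vCenter
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=context)
    content = si.RetrieveContent()

    # Every ESXi host in the inventory
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)

    for host in view.view:
        storage = host.configManager.storageSystem
        # Find the software iSCSI HBA (assumes it is already enabled on the host)
        hba = next(a for a in storage.storageDeviceInfo.hostBusAdapter
                   if isinstance(a, vim.host.InternetScsiHba))
        # Point it at the array's iSCSI portal (dynamic discovery / send target)
        target = vim.host.InternetScsiHba.SendTarget(address="10.0.100.10", port=3260)
        storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[target])
        # Rescan so the new LUN and any VMFS volume on it show up
        storage.RescanAllHba()
        storage.RescanVmfs()

    Disconnect(si)

The same thing can be done by hand in the vSphere client on each host; the point is that the attachment is per host, but the datastore is shared.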

Networking notes:

I run large VMware clusters in my day job. Once I'm dealing with more than three hosts, I start to rely on 10GbE connections between the storage array and the switch(es). The links to individual hosts may remain 1GbE, but are often 10GbE as well.

Isolate your storage network from your regular data network. Jumbo frames aren't always the key to performance, but if you DO enable them, make sure they're configured end-to-end on every device in the path.
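
For the ESXi half of "end-to-end", here's a minimal sketch of the configuration and verification, assuming a dedicated storage vSwitch and vmkernel port named vSwitch1/vmk1 and a SAN portal at 10.0.100.10 (all placeholders), run from the ESXi shell; the esxcli/vmkping command strings are the real content here:

    import subprocess

    VSWITCH = "vSwitch1"    # dedicated storage vSwitch (assumed name)
    VMK = "vmk1"            # iSCSI vmkernel interface (assumed name)
    SAN_IP = "10.0.100.10"  # FreeNAS iSCSI portal (assumed address)

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Raise MTU on the vSwitch, then on the vmkernel port that rides on it
    run(["esxcli", "network", "vswitch", "standard", "set", "-v", VSWITCH, "-m", "9000"])
    run(["esxcli", "network", "ip", "interface", "set", "-i", VMK, "-m", "9000"])

    # Verify the whole path: 8972 bytes = 9000 minus IP/ICMP headers, -d = don't fragment
    run(["vmkping", "-d", "-s", "8972", SAN_IP])

If that vmkping fails, something in the path (the physical switch ports or the FreeNAS interface) isn't actually at 9000, which is the most common way jumbo frames quietly hurt instead of help.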