I'm looking to set up a SAN for a VMware vCenter cluster of ESXi hosts. I'd like it to be the primary storage device for VMs. There will eventually be up to 10 hosts in the cluster.
I'm creating a test environment with FreeNAS installed on a PowerEdge R610. If it goes well, we may buy an R720 or something similar with a lot of drive bays. I have a RAID-Z pool on the R610's drives with a 120 GB SSD cache. I plan to connect this server to a dedicated/isolated gigabit switch with jumbo frames enabled, and all the hosts will connect to this switch for iSCSI.
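For reference, the pool layout described above would normally be built through the FreeNAS web UI, but the underlying ZFS commands look roughly like this. The pool name and device names (da0-da5 for the data disks, ada0 for the SSD) are placeholders; substitute whatever your hardware actually enumerates as:

```shell
# Create a RAID-Z pool across the data disks (hypothetical device names)
zpool create tank raidz da0 da1 da2 da3 da4 da5

# Add the 120 GB SSD as an L2ARC read cache device
zpool add tank cache ada0

# Confirm the resulting layout
zpool status tank
```

Note that `cache` here is a read cache (L2ARC); a separate log device (SLOG) is what helps synchronous write latency, which matters for iSCSI.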
That's what I have so far.
- Is FreeNAS a good option for this?
- How do I set this up on the VMware side? Do I attach each host individually to the SAN via iSCSI, or do I add it directly through vCenter? I'd like the ability to load-balance VMs across hosts easily without having to transfer VMDKs, which I assume comes more or less for free since they will all live on the SAN anyway.
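On the attachment question: each ESXi host connects to the iSCSI target individually (via the vSphere client or `esxcli`); once every host sees the same LUN, it is formatted once as a shared VMFS datastore, and vMotion then moves only CPU/memory state, not VMDKs. A hedged sketch of the per-host CLI steps; the adapter name `vmhba64` and target address `192.168.100.10` are placeholders (check `esxcli iscsi adapter list` for the real adapter name):

```shell
# Enable the software iSCSI initiator on this host
esxcli iscsi software set --enabled=true

# Point the initiator at the FreeNAS portal (send-targets discovery)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.100.10:3260

# Rescan so the host sees the LUN
esxcli storage core adapter rescan --all
```

After the rescan, create the VMFS datastore from one host; the others pick it up on their own rescan.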
Any help, tips, things you've learned from experience would be greatly appreciated!
I should note that I have never set up a SAN/iSCSI before, so I'm treading into new waters.
EDIT
This post has helped me realize that I am going to need to invest a lot in some more high-end networking gear to make this perform in the way that I am wanting. Time to re-evaluate!
Best Answer
This depends heavily on your VMware vSphere licensing tier, the applications and VMs you intend to run, and the amount of storage space and performance you need.
Answer that first: an organization that can afford to properly license ten ESXi hosts in a vSphere cluster should be prepared for a higher-end storage solution and networking backbone than what you're planning. (Ballpark price for the right licensing level for that many hosts is ~$100k US.)
I think the scope of this question is a bit too broad, since there are established design principles for VMware installations of all sizes. I'll try to summarize a few thoughts...
Networking notes:
I run large VMware clusters in my day job. Once I'm dealing with more than three hosts, I rely on 10GbE connections between the storage array and the switch(es). Connections to individual hosts may remain 1GbE, but are often 10GbE as well.
Isolate your storage network from your data network. Jumbo frames aren't always the key to performance. If you DO enable them, make sure they're configured end-to-end on every device in the path: host NICs, vSwitches, vmkernel ports, the physical switch, and the storage array.
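The end-to-end check can be run from each ESXi host with `vmkping`: a 9000-byte MTU leaves 8972 bytes of ICMP payload (9000 minus the 20-byte IP header and 8-byte ICMP header), and the don't-fragment flag means the ping succeeds only if every hop passes the full frame. The vSwitch name, vmkernel interface, and storage IP below are placeholders for your environment:

```shell
# Raise the MTU on the vSwitch and the iSCSI vmkernel port (example names)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify end-to-end: -d = don't fragment, 8972 = 9000 - 20 (IP) - 8 (ICMP)
vmkping -d -s 8972 192.168.100.10
```

If the large ping fails while a default-size ping works, some device in the path is still at MTU 1500.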