Using a high-end NAS as vSphere/Hyper-V storage

storage, virtualization, vmware-vsphere

I have a project to replace the storage that our virtual farms are using. We replaced the hosts last year, but due to budget constraints we couldn't afford to replace the storage arrays until now.

A high-end NAS from a well-respected company has come to my attention. This NAS has 24 drive bays, 16 GB of RAM, an 8-core Xeon CPU and two 10GBASE-T network interfaces.

I can outfit two of these things entirely with 960GB Samsung enterprise SSDs and still pay less than I would for a single SAN from the likes of Dell outfitted with less spinning-rust storage. I feel I can't ignore this.

So I guess I have two questions:

1) Could the NAS cope with the workload? The farm has three virtual hosts. It holds pretty much all of the business's servers including user file storage, DCs and a few SQL databases.

2) This thing talks iSCSI and NFS. It strikes me as a fairly bad idea to present this thing as block storage to the virtual hosts when in fact it isn't. Layering two file systems (VMFS and Ext3) on this thing seems wasteful whereas at least if I use NFS the VMDKs would be stored directly on the main file system. Would I be better using NFS or iSCSI?

Best Answer

So I guess I have two questions:

1) Could the NAS cope with the workload? The farm has three virtual hosts. It holds pretty much all of the business's servers including user file storage, DCs and a few SQL databases.

A: Yes, a NAS can absolutely handle that workload. Since Windows Server 2012, Microsoft has actually preferred SMB3, which is a file protocol, over block protocols like iSCSI/FC. The problem is that very few SAN/NAS vendors have implemented the SMB3 stack properly: most have issues with SMB Multichannel and SMB Direct (RDMA), and those two features are the major driving force behind adopting SMB3 in production. NetApp, for example...

https://library.netapp.com/ecmdocs/ECMP1196891/html/GUID-3E1361E4-4170-4992-85B2-FEA71C06645F.html

Data ONTAP does not support the following SMB 3.0 functionality: SMB Multichannel, SMB Direct, SMB Directory Leasing, SMB Encryption.

2) This thing talks iSCSI and NFS. It strikes me as a fairly bad idea to present this thing as block storage to the virtual hosts when in fact it isn't. Layering two file systems (VMFS and Ext3) on this thing seems wasteful whereas at least if I use NFS the VMDKs would be stored directly on the main file system. Would I be better using NFS or iSCSI?

A: iSCSI is absolutely fine with VMware, and OK-ish with Hyper-V, but NFS is VMware-only: you can't run Hyper-V VMs from an NFS share (the same goes for SQL Server if you care; it isn't as restrictive, but it has its own limitations).

https://www.starwindsoftware.com/blog/hyper-v-vms-on-nfs-share-why-hasnt-anyone-thought-of-that-earlier-they-did-in-fact-2

http://windowsitpro.com/hyper-v/hyper-v-vms-nfs

https://www.brentozar.com/archive/2012/01/sql-server-databases-on-network-shares-nas/
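For the VMware side, wiring up iSCSI from the ESXi shell is only a handful of commands. A minimal sketch, assuming the software iSCSI adapter shows up as vmhba64 and the NAS portal answers at 192.168.10.20 (both are placeholder values for your environment):

```shell
# Enable the software iSCSI initiator on the ESXi host
esxcli iscsi software set --enabled=true

# Point dynamic (SendTargets) discovery at the NAS portal
# (adapter name and portal IP are placeholders -- check yours
#  with `esxcli iscsi adapter list`)
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba64 --address=192.168.10.20:3260

# Rescan so the discovered LUNs show up as devices
esxcli storage core adapter rescan --adapter=vmhba64

# Verify that sessions to the target were established
esxcli iscsi session list
```

After that you still have to create a VMFS datastore on each discovered LUN, which is exactly the extra management layer the question is worried about.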

Back to iSCSI vs NFS. Performance-wise the two are roughly identical (unless you go for iSER, which doesn't work well with ESXi 6.5), but NFS is much easier to manage!

http://www.unadulteratednerdery.com/2014/01/15/storage-for-vmware-setting-up-iscsi-vs-nfs-part-1/

http://community.netapp.com/t5/Network-Storage-Protocols-Discussions/NFS-or-iSCSI-for-ESXi-5-5-and-or-6/td-p/114345

My bet is on NFS here!
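To illustrate the "easier to manage" point: mounting an NFS export as a datastore is a single command, with no initiator, discovery, or VMFS formatting step. A sketch, assuming an NFSv3 export at /volume1/vmstore on 192.168.10.20 (placeholder values):

```shell
# Mount the NAS export directly as an NFS datastore
esxcli storage nfs add \
    --host=192.168.10.20 \
    --share=/volume1/vmstore \
    --volume-name=nas-vmstore01

# Confirm the datastore is mounted and accessible
esxcli storage nfs list
```

The VMDKs then live directly on the NAS's own file system, so growing the export on the NAS grows the datastore with no resignaturing or VMFS extent juggling.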
