Your plan is not nuts. As usual, there are more than a few ways to attack this, depending on what you're trying to achieve and how you want to protect your data.
First up, you can present a raw LUN to a VM using a "Raw Device Mapping" (RDM). To do this:
- Present the LUN to the ESXi host (or host group, if you are going to use clustering/HA)
- Add a disk to your VM, select Raw Device Mapping, point at the LUN
- Rescan the SCSI bus inside the VM
- fdisk, mount, and add to fstab, just like a normal disk (a sketch of the in-guest steps follows this list)
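If it helps, here's a rough sketch of those in-guest steps, assuming the new disk shows up as /dev/sdb (check dmesg or lsblk for the real name on your system):

```bash
# Rescan all SCSI hosts so the guest notices the new disk
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done

# Partition and format (device name is an assumption - verify first!)
fdisk /dev/sdb              # create a single partition, e.g. /dev/sdb1
mkfs.ext4 /dev/sdb1         # or whatever filesystem you prefer

# Mount it and make it survive a reboot
mkdir -p /srv/data
mount /dev/sdb1 /srv/data
echo '/dev/sdb1  /srv/data  ext4  defaults  0 2' >> /etc/fstab
```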
Upside: fast to set up, fast to use, easy, and you can present the disk to a physical host if you find yourself needing to V2P down the track
Downside: you may lose some VMware-based snapshot/rollback options, depending on whether you use physical or virtual compatibility mode
The alternative is to format the LUN with VMFS to create a datastore, then add a VMDK disk that lives on that datastore to the VM (sketched below).
- Upside: it's Storage vMotion-friendly if you ever buy a license to use it, which allows for hot migration of VMDK disks between LUNs and even SANs.
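If you'd rather do that from the ESXi shell than the vSphere client, a rough sketch (the naa device ID, datastore name, and sizes here are all placeholders, and the LUN needs a partition on it first):

```bash
# Create a VMFS5 datastore on the LUN's first partition
vmkfstools -C vmfs5 -S datastore1 /vmfs/devices/disks/naa.60a98000486e2f65:1

# Carve a thin-provisioned data disk for the fileserver out of it
vmkfstools -c 500G -d thin /vmfs/volumes/datastore1/fileserver/data.vmdk
```

Then attach the new VMDK to the VM as an additional disk in the usual way.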
In both cases, you're in a similar risk position should VMware or your VM eat the filesystem during a failure; one is not drastically better than the other, although the recovery options available will be quite different.
I don't deploy RDMs unless I have to; I've found they don't buy me much flexibility over a VMDK, and I've been bitten by bugs (since fixed; see the RDM section in that link) that made them impractical when performing other storage operations.
As for your VM, your best bet for flexibility is to store your fileserver's boot disk as a VMDK on the SAN so that other hosts can boot it in the case of a host failure. With VMware's HA functionality, booting your VM on another host is automatic (the VM will boot on the second host as if the power had been pulled; expect to perform the usual fscks and magic to bring it up, as with a normal server). Note that HA is a licensed feature.
To mitigate against a VM failure, you can build a light clone of your fileserver, containing the bare minimum required to boot and have Samba start in a configured state, and store this on each host's local disk, waiting for you to add the data drive from the failed VM and power it on (see the sketch below).
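Recovery on the spare clone then boils down to a few commands; a sketch, assuming the data disk appears as /dev/sdb1 and your smb.conf shares out /srv/data:

```bash
# After attaching the failed VM's data disk to the spare clone:
fsck -y /dev/sdb1            # repair whatever the crash left behind
mkdir -p /srv/data
mount /dev/sdb1 /srv/data    # the path your smb.conf expects
systemctl start smbd nmbd    # or 'service samba start' on older distros
```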
This may or may not buy you extra options in the case of a SAN failure; best case, your data storage will require an fsck or other repair, but at least you don't have to fix, rebuild, or reconfigure the VM on top. Worst case, you've lost the data and need to go back to tape... but you were already in that state anyway.
Given you're coming from Windows, you'll find that NFS is... different.
The issue you're running into is fairly common. NFS passes the UID and GID of files/directories back and forth between the machines on the assumption that user and group IDs are mapped identically on both. This means you can get a situation where a UID/GID on the server is passed back to an NFS client but can't be matched in the client's /etc/passwd or /etc/group, which means no access.
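You can see the mismatch directly on the client; a quick illustration, with UID/GID 1001 standing in for an ID that exists on the server but not on the client:

```bash
# On the NFS client: numeric IDs come across the wire as-is
ls -ln /mnt/nfs
# -rw-r----- 1 1001 1001 4096 Jan  1 12:00 report.txt

# ...but there's no matching local entry to resolve them against
getent passwd 1001    # prints nothing
getent group 1001     # prints nothing

# so the owner/group permission bits apply to nobody you can log in as
```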
In the (distant) past this was coordinated with NIS and NIS+, although other schemes have since been wedged into this framework (Samba's Winbind being one of them). However, this requires a central ID server, followed by a lot of hand-fixing of permissions.
There are different ways to fix this, but the cheapest/quickest is to create a group with the same group ID number on both machines - say, group ID 50000 - then set the group bits on the file server, add the appropriate user to the group on the client, and use the group permissions on the files to control access. Not a great solution, but it will work. Note that you could have problems with services that explicitly change their group at runtime (a.k.a. privilege drop); you might need to change the setting that controls which group is assumed at runtime to ensure it is the one you created.
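A sketch of that workaround, using a made-up group name (sharedata) with the example GID of 50000:

```bash
# On BOTH server and client: same group name, same GID
groupadd -g 50000 sharedata

# On the server: hand the exported tree to the group, open the group
# bits, and set setgid on directories so new files inherit the group
chgrp -R sharedata /export/data
chmod -R g+rw /export/data
find /export/data -type d -exec chmod g+s {} +

# On the client: add the relevant user to the group
usermod -aG sharedata someuser   # takes effect at next login
```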
For those files that come inbound via a Windows share (i.e. Samba), simply force the group to be the same as the one you created. That way all files automatically get the "right" GID.
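In smb.conf terms, that's a "force group" line on the share; a sketch, with the share name and path made up:

```bash
# Append an example share to smb.conf, then reload Samba
cat >> /etc/samba/smb.conf <<'EOF'
[data]
   path = /export/data
   writable = yes
   force group = sharedata
   create mask = 0660
   directory mask = 2770
EOF
smbcontrol all reload-config
```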
1) SANs are all about blocks, so that means iSCSI, Fibre Channel, and FCoE. But you can hardly find a "true" or "pure" SAN these days; maybe only Nimble and EqualLogic still can't also do NAS protocols like NFS and SMB alongside the block ones. Appliances that do both are called "multiprotocol" storage.
http://www.netapp.com/us/media/tr-3490.pdf (Multiprotocol NetApp guide)
2) You'd better go SMB here. NetApp can talk a somewhat lower dialect of SMB3 (no SMB Multichannel and no SMB Direct yet), and it's the preferred way to feed files to Windows, because NFS on Windows just sucks. Linux/*BSD folks can use a Samba/CIFS client.
https://library.netapp.com/ecmdocs/ECMP1196891/html/GUID-3E1361E4-4170-4992-85B2-FEA71C06645F.html (SMB3 dialect NetApp "talks")
https://library.netapp.com/ecmdocs/ECMP1366834/html/GUID-07F8E056-12EF-4591-8BEA-7C28F7B54854.html (SMB share permissions on NetApp howto)
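On the Linux/*BSD side, mounting the filer's share with the kernel CIFS client looks roughly like this (filer name, share, and account are placeholders; you'll need the cifs-utils package):

```bash
mkdir -p /mnt/filer
mount -t cifs //netapp01/data /mnt/filer \
    -o username=svc_files,vers=3.0,uid=1000,gid=1000

# fstab equivalent; a credentials file keeps the password out of fstab
echo '//netapp01/data /mnt/filer cifs credentials=/etc/cifs-creds,vers=3.0 0 0' >> /etc/fstab
```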