Your plan is not nuts. As usual, there are more than a few ways to attack this, depending on what you're trying to achieve and how you want to protect your data.
First up, you can present a raw LUN to a VM using a "Raw Device Mapping". To do this:
- Present the LUN to the ESXi host (or host group, if you are going to use clustering/HA)
- Add a disk to your VM, select Raw Device Mapping, point at the LUN
- Rescan the SCSI bus inside the VM
- fdisk, mount and add to fstab, just like a normal disk.
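On a typical Linux guest, the in-VM steps above look roughly like this (the `host0` SCSI host, the `/dev/sdb` device name, and the `/data` mount point are assumptions for illustration; check `dmesg` for the actual names on your system):

```shell
# Rescan the SCSI bus so the guest notices the new disk
echo "- - -" > /sys/class/scsi_host/host0/scan

# Partition and format it (assuming it appeared as /dev/sdb)
fdisk /dev/sdb          # create a partition, e.g. /dev/sdb1
mkfs.ext3 /dev/sdb1

# Mount it and make the mount persistent across reboots
mkdir -p /data
mount /dev/sdb1 /data
echo "/dev/sdb1  /data  ext3  defaults  0 2" >> /etc/fstab
```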
Upside: fast to set up, fast to use, easy, and you can present the disk to a physical host if you find yourself needing to V2P down the track.
Downside: you may lose some VMware-based snapshot/rollback options, depending on whether you use physical or virtual compatibility mode.
An alternate option is to format the LUN with VMFS to create a datastore, then add a VMDK disk to the VM, living on that datastore.
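For the VMFS route, a rough sketch of the command-line equivalent (the `vmhba1:0:10:1` partition reference, datastore label, disk size and paths are placeholders for your own values):

```shell
# Format the first partition of the LUN as a VMFS3 datastore
vmkfstools -C vmfs3 -S san_datastore vmhba1:0:10:1

# Create a 100 GB virtual disk on it for the VM's data drive
vmkfstools -c 100G /vmfs/volumes/san_datastore/fileserver/data.vmdk
```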
Upside: it's Storage vMotion-friendly if you ever buy a license to use it. This allows for hot migration of VMDK disks between LUNs and even SANs.
In both cases, you're in a similar risk position should VMware or your VM eat the filesystem during a failure; one is not drastically better than the other, although the recovery options available will be quite different.
I don't deploy RDMs unless I have to; I've found they don't buy me much flexibility over a VMDK, and I've been bitten by bugs that made them impractical when performing other storage operations (since fixed - see the RDM section in that link).
As for your VM, your best bet for flexibility is to store your fileserver's boot disk as a VMDK on the SAN, so that other hosts can boot it in the case of a host failure. With VMware's HA functionality, booting your VM on another host is automatic (the VM boots on the second host as if the power had been pulled, so expect to perform the usual fscks and magic to bring it up, just as you would for a physical server). Note that HA is a licensed feature.
To mitigate against a VM failure, you can build a light clone of your fileserver, containing the bare minimum required to boot and bring Samba up in a configured state, and store it on each host's local disk, waiting for you to attach the data drive from the failed VM and power it on.
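Attaching the surviving data disk to the standby clone then comes down to adding an entry like this to the clone's .vmx file (the SCSI slot and datastore path here are examples, not your actual values):

```
scsi0:1.present = "TRUE"
scsi0:1.fileName = "/vmfs/volumes/san_datastore/fileserver/data.vmdk"
scsi0:1.deviceType = "scsi-hardDisk"
```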
This may or may not buy you extra options in the case of a SAN failure; best case, your data storage will require an fsck or other repair, but at least you don't have to fix, rebuild or reconfigure the VM on top of that. Worst case, you've lost the data and need to go back to tape... but you were already in that state anyway.
Best Answer
I can't see why you shouldn't be able to do this with any standard ESX/ESXi setup.
You should be able to do it with Raw Device Mappings. Assuming the zoning and LUN presentation procedures on the SAN end remain unchanged, you can tell ESX to rescan and detect the new LUNs at the host level. Once they are found, you can either create VMDKs and add them to the VM, or present the entire volumes as Raw Device Mappings (RDMs).
With ESX you could script this from the Service Console command line, but for ESXi you will need either PowerCLI (PowerShell) or the Perl CLI tools. The vSphere Management Appliance is a Linux appliance with all of those tools pre-packaged, if you want to take that route. The documentation for all three CLI approaches can be found here.
The general outline of what you will want to do is:
1. Rescan for the new LUNs on the host.
You will probably just want to scan the relevant HBAs that the LUN is presented to, so substitute the names of the HBAs connected to your SAN for vmhbaX.
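For example (the `vmhba1` adapter name and `esxi01` host name are placeholders):

```shell
# On the ESX Service Console: rescan one HBA for new LUNs
esxcfg-rescan vmhba1

# From the remote CLI or the vMA, against an ESXi host
vicfg-rescan --server esxi01 --username root vmhba1
```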
2. Create an RDM stub that maps to the new LUN
You will need to figure out the LUN reference for your LUNs, and set the VMDK to a location and name that makes sense in your environment. There are a couple of syntax variants here; I haven't used this on ESXi 4, but this format used to work fine for me on 3.5. There are two RDM modes: if you need more SCSI functionality, Raw Device Mapping Passthrough mode may be more appropriate for you, in which case replace the -r with a -z.
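A rough sketch of the 3.5-era syntax (the `vmhba1:0:10:0` LUN reference and the datastore path are placeholders; on newer versions the device is usually referenced via its `/vmfs/devices/disks/` path instead):

```shell
# Virtual compatibility mode RDM stub (keeps VMware-layer snapshot options)
vmkfstools -r vmhba1:0:10:0 /vmfs/volumes/san_datastore/fileserver/data-rdm.vmdk

# Or physical compatibility (passthrough) mode, for full raw SCSI access
vmkfstools -z vmhba1:0:10:0 /vmfs/volumes/san_datastore/fileserver/data-rdm.vmdk
```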
3. Present the new disk to your VM(s).
Once you have the disk prepared in this way, there are a couple of ways to present it to the OS within the VM. You can edit the VM config and add an entry for the device, or you could have this specific target VMDK already configured in a VM and run through the discovery steps above while the VM was powered off. If you want a more dynamic mechanism, the best option is the VMware Disk Mount utility - this allows you to mount the RDM (or any other VMDK) directly from within the guest OS without having to mess with the VM config.
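As a sketch of the Disk Mount route (the VMDK path, partition number and mount point below are examples; check `vmware-mount` without arguments for the exact options in your version):

```shell
# List the partitions inside the mapped disk
vmware-mount -p /vmfs/volumes/san_datastore/fileserver/data-rdm.vmdk

# Mount partition 1 of the disk at /mnt/data
vmware-mount /vmfs/volumes/san_datastore/fileserver/data-rdm.vmdk 1 /mnt/data

# Cleanly unmount when finished
vmware-mount -d /mnt/data
```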
If you are using the remote CLI for the rescan and vmkfstools steps, you may have to specify the target host and authentication credentials as part of each command.
The same approach could be used with standard VMDK files, but you would need to format the LUN with VMFS first and then create a suitable VMDK on it. As far as I can tell from your description, there is no benefit to be gained from doing it that way.