Note: I've never done this before.
The shared storage RDM should be possible, although having five nodes accessing it could introduce hilarity.
The technique you want is similar to how a two-node Microsoft Cluster Service (MSCS) cluster is implemented (with a shared quorum drive); VMware provides a documented method for achieving it.
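For reference, that documented cluster-across-boxes setup essentially boils down to attaching the RDM (in physical compatibility mode) to a dedicated SCSI controller with bus sharing enabled on every node. A rough sketch of the relevant .vmx entries (controller numbers and file names here are illustrative, not from VMware's guide; check the guide for your ESX version):

```
# Dedicated SCSI controller for the shared disk, with physical bus sharing
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "physical"

# The shared quorum/data disk: an RDM mapping file created in
# physical compatibility mode (e.g. with vmkfstools -z)
scsi1:0.present = "TRUE"
scsi1:0.fileName = "quorum-rdm.vmdk"
scsi1:0.deviceType = "scsi-hardDisk"
```

The same mapping file is added to each node's configuration, which is exactly where things can get interesting with five nodes instead of two.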
The solution looks well documented, if a little hairy. I'd recommend building and testing it in a lab before considering letting it anywhere near your production cluster.
Good luck.
I do a lot of VMware consulting work and I'd say that the percentages are closer to 80% of the installed base using highly available shared storage (FC, iSCSI or high-end NAS), and a lot of my clients are SMEs. The key factor I've found is whether the business treats its server uptime as critical or not; for most businesses today it is.
You certainly can run very high-performance VMs from direct-attached storage (an HP DL380 G6 with 16 internal drives in a RAID 10 array would have pretty fast disk I/O), but if you are building a VMware or any other virtualized environment to replace tens, hundreds, or thousands of servers, then you are insane if you aren't putting a lot of effort (and probably money) into a robust storage architecture.
You don't have to buy a high-end SAN for the clustering functions - you can implement these with a fairly cheap NAS (or a virtualized SAN like HP/LeftHand's VSA) and still be using certified storage. However, if you are using shared storage and it doesn't have redundancy at all points in the SAN/NAS infrastructure, then you shouldn't really be using it for much more than testing. Redundancy means (at a minimum) dual independent HBAs/storage NICs in your servers, dual independent fabrics, redundant controllers in the SAN, battery-backed cache with cache destaging, redundant hot-swappable fans and power supplies, RAID 5/6/10/50, and appropriate numbers of hot spares.
The real-life difference between your systems is that if one of your standalone systems catastrophically fails, you have a lot of work to do to recover it, and you will incur downtime just keeping it patched. With clustered SAN-attached systems, patching the hypervisors, or even upgrading hypervisor hardware, should result in zero downtime. A catastrophic server failure simply brings the service down for the length of time it takes to reboot the VM on a separate node (at worst), or if you have Fault Tolerance covering those VMs, you have no downtime at all.
Best Answer
While you surely can use OS-specific tools to measure the IOPS rate within a virtual machine, you may fall victim to the various timing problems present in VMs and get inaccurate results. Instead, I would suggest using the "disk VM" view of esxtop/resxtop on your hypervisor to get real-time figures, or esxplot / vscsiStats to collect the same data in more detail, including histograms.
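Both tools can also capture data non-interactively for later analysis. A rough sketch of how that might look (the hostname and world group ID below are placeholders, and exact flag support can vary by ESX/ESXi version):

```shell
# Capture ~5 minutes of batch-mode stats (5 s interval, 60 samples)
# directly on the host; the resulting CSV can be loaded into esxplot.
esxtop -b -d 5 -n 60 > /tmp/esxtop-batch.csv

# Or remotely with resxtop (ships with the vSphere CLI):
# resxtop --server esx01.example.com -b -d 5 -n 60 > esxtop-batch.csv

# vscsiStats: find the VM's world group ID, start collection,
# print the I/O-length histogram, then stop collecting.
vscsiStats -l                    # list VMs and their worldGroupIDs
vscsiStats -s -w 12345           # start collection for world group 12345 (placeholder)
vscsiStats -p ioLength -w 12345  # print the ioLength histogram
vscsiStats -x -w 12345           # stop collection
```

The vscsiStats histograms (ioLength, seekDistance, latency, etc.) are particularly useful for characterising a workload's I/O pattern, not just its raw IOPS number.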