I do a lot of VMware consulting work, and I'd say the percentage is closer to 80% of the installed base using highly available shared storage (FC, iSCSI, or high-end NAS), and a lot of my clients are SMEs. The key factor I've found is whether the business treats its server uptime as critical, and for most businesses today it is.
You certainly can run very high-performance VMs from direct-attached storage (an HP DL380 G6 with 16 internal drives in a RAID 10 array would have pretty fast disk I/O), but if you are building a VMware (or any other virtualized) environment to replace tens, hundreds, or thousands of servers, then you are insane if you aren't putting a lot of effort (and probably money) into a robust storage architecture.
You don't have to buy a high-end SAN for the clustering functions - you can implement these with a fairly cheap NAS (or a virtualized SAN like HP/LeftHand's VSA) and still be using certified storage. However, if you are using shared storage and it doesn't have redundancy at all points in the SAN/NAS infrastructure, then you shouldn't really be using it for much more than testing. At a minimum, redundancy means:

- dual (independent) HBAs/storage NICs in your servers
- dual independent fabrics
- redundant controllers in the SAN
- battery-backed cache/cache destaging
- redundant hot-swappable fans and power supplies
- RAID 5/6/10/50 with an appropriate number of hot spares
The real-life difference between your systems is that if one of your standalone systems fails catastrophically, you have a lot of work to do to recover it, and you will incur downtime just keeping it patched. With clustered SAN-attached systems, patching the hypervisors, or even upgrading hypervisor hardware, should result in zero downtime. A catastrophic server failure simply brings the service down for the length of time it takes to reboot the VM on a separate node (at worst), or, if you have Fault Tolerance covering those VMs, no downtime at all.
I know it seems like a good idea, but any time I've tried P2V'ing MSCS clusters I've ended up regretting it. For file & print sharing there are few direct negative side effects, but for SQL clusters it is very messy. You can't P2V the virtual server name (FS1) directly; you have to break the cluster and migrate the individual nodes. But as I said, it's a bad idea in any case.
If these are just file server clusters, then the simplest thing to do is export the shares (they are just a registry key), build a new clean VM, attach the existing data disk as an RDM, and import the registry keys. Once you're happy that it's working, rename the machine to the old virtual node name.
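As a sketch of the share export/import step: on current Windows versions the share definitions live under the LanmanServer `Shares` key (the exact path can vary by Windows version, so verify it on your build before relying on this):

```shell
# On the old node: export the share definitions from an elevated prompt.
reg export "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Shares" shares.reg

# On the new VM, after attaching the data disk as an RDM with the
# same drive letter(s) as before, import the definitions:
reg import shares.reg

# Restart the Server service so it picks up the imported shares.
net stop server
net start server
```

Note that the imported share paths must match the drive letters on the new VM exactly, which is why the RDM should be mounted with the original letter before you restart the service.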
Best Answer
Check the block size on the VMFS datastore that you're virtualizing to. A block size of 1 MB (the default) has a maximum VMDK file size of 256 GB; this would prevent a 500 GB disk from being created on that datastore.
If this is indeed the case, you'll need to switch to a larger block size. Unfortunately, reformatting the datastore is the only way to accomplish this.
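For a quick sanity check: on VMFS-3 the file-size ceiling scales linearly with block size (1 MB gives 256 GB, so each doubling of the block size doubles the limit). A rough rule-of-thumb helper, assuming that linear scaling:

```shell
# VMFS-3 rule of thumb: max VMDK size (GB) = block size (MB) * 256.
# 1 MB -> 256 GB, 2 MB -> 512 GB, 4 MB -> 1 TB, 8 MB -> 2 TB.
max_vmdk_gb() {
  echo $(( $1 * 256 ))
}

max_vmdk_gb 1   # 256  - too small for a 500 GB disk
max_vmdk_gb 2   # 512  - smallest block size that fits 500 GB
```

To see what you currently have, I believe `vmkfstools -P /vmfs/volumes/<datastore>` on the ESX host reports the datastore's block size (the datastore name here is a placeholder).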
For more info, VMware's KB covering block sizes is here.