I ran VMware Server (1.0.x, then 2) for 18 months before ESXi came out free. I went through the whole gamut of performance tuning tricks but was never truly happy with VM performance in Server. Systems that do a lot of context switching or handle many network connections are not well suited to Server when ESXi is out there for free.
As long as you have a backup solution for ESXi, it is a far better solution. My VMs sit out on NFS running on LVM on top of a 15K SAS RAID 10 array. I do in-VM file-level backups, and snapshot backups at the LVM level for the images on the NFS server.
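For what it's worth, here is a minimal sketch of what those LVM-level snapshot backups could look like on the NFS server. The volume group, logical volume, snapshot headroom, and rsync target below are all hypothetical placeholders, and a plain LVM snapshot only gives you a crash-consistent copy of the VM disks unless you quiesce the guests first.

```python
#!/usr/bin/env python3
"""Sketch of an LVM-level snapshot backup for VM images served over NFS.

Assumes a volume group 'vg_nfs' with a logical volume 'vm_images' behind the
NFS export; all names, sizes, and the rsync target are hypothetical.
"""
import subprocess

VG = "vg_nfs"              # hypothetical volume group backing the NFS export
LV = "vm_images"           # hypothetical logical volume holding the VM disks
SNAP = f"{LV}_snap"
MOUNTPOINT = "/mnt/vm_snap"
BACKUP_TARGET = "backuphost:/backups/vm_images/"   # hypothetical rsync target

def run(cmd):
    """Run a command, raising on failure so a broken step aborts the backup."""
    subprocess.run(cmd, check=True)

# 1. Create a copy-on-write snapshot; 10G of change headroom is an assumption.
run(["lvcreate", "--snapshot", "--size", "10G",
     "--name", SNAP, f"/dev/{VG}/{LV}"])
try:
    # 2. Mount the snapshot read-only and copy the crash-consistent images off-box.
    run(["mkdir", "-p", MOUNTPOINT])
    run(["mount", "-o", "ro", f"/dev/{VG}/{SNAP}", MOUNTPOINT])
    run(["rsync", "-a", "--delete", f"{MOUNTPOINT}/", BACKUP_TARGET])
finally:
    # 3. Best-effort cleanup: unmount and drop the snapshot so it can't fill up.
    subprocess.run(["umount", MOUNTPOINT])
    subprocess.run(["lvremove", "-f", f"/dev/{VG}/{SNAP}"])
```

In practice you would schedule something like this from cron and size the snapshot to cover however much the VM disks change during the rsync window.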
I was not able to run MS Exchange, MS SQL Server, or even a MS File & print server under VMWare Server with acceptable results. Under ESXi they hum right along. Linux VMs enjoy the same level of improvement as well.
The other comments here are great as well - no need to retype.
Jeff
I do a lot of VMware consulting work, and I'd say the percentages are closer to 80% of the installed base using highly available shared storage (FC, iSCSI, or high-end NAS), and a lot of my clients are SMEs. The key factor I've found is whether the business treats its server uptime as critical or not; for most businesses today, it is.
You certainly can run very high-performance VMs from direct-attached storage (an HP DL380 G6 with 16 internal drives in a RAID 10 array would have pretty fast disk I/O), but if you are building a VMware or any other virtualized environment to replace tens, hundreds, or thousands of servers, then you are insane if you aren't putting a lot of effort (and probably money) into a robust storage architecture.
You don't have to buy a high-end SAN for the clustering functions - you can implement these with a fairly cheap NAS (or a virtualized SAN like HP/LeftHand's VSA) and still be using certified storage. However, if you are using shared storage and it doesn't have redundancy at all points in the SAN/NAS infrastructure, then you shouldn't really be using it for much more than testing. And redundancy means (at a minimum) dual independent HBAs/storage NICs in your servers, dual independent fabrics, redundant controllers in the SAN, battery-backed cache/cache destaging, redundant hot-swappable fans and power supplies, RAID 5/6/10/50, and appropriate numbers of hot spares.
The real-life difference between your systems is that if one of your standalone systems catastrophically fails, you have a lot of work to do to recover it, and you will incur downtime just keeping it patched. With clustered SAN-attached systems, patching the hypervisors, or even upgrading hypervisor hardware, should result in zero downtime. A catastrophic server failure simply brings the service down for the length of time it takes to reboot the VM on a separate node (at worst), or, if you have Fault Tolerance covering those VMs, you have no downtime at all.
Best Answer
Where I work, we have been presenting the ESX servers with fibre-attached LUNs that are 500 GB in size. Due to SCSI reservation issues, 500 GB seems to be the optimal size for FC-attached storage presented to ESX.
A big trend coming up is using NFS-mounted storage for your ESX datastores, especially now that 10 Gbps Ethernet is becoming mainstream. NFS also offers many advantages over traditional fibre-attached storage in an ESX environment.
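If you haven't attached an NFS datastore before, the sketch below shows the general shape of it using the classic esxcfg-nas tool. The NFS host, export path, and datastore label are hypothetical, on a real ESX host you would normally just run the command directly from the console (or do the same thing through the VI client), and the exact tooling varies by ESX version.

```python
"""Illustrative wrapper around the esxcfg-nas CLI for adding an NFS datastore.

The server name, export path, and label are hypothetical; adjust to taste.
"""
import subprocess

NFS_HOST = "nfs01.example.com"       # hypothetical NFS server
NFS_EXPORT = "/vols/esx_datastore1"  # hypothetical exported path
LABEL = "nfs-datastore1"             # name the datastore will appear under

# esxcfg-nas -a adds a NAS datastore: -o sets the NFS host, -s the export path.
subprocess.run(["esxcfg-nas", "-a", "-o", NFS_HOST, "-s", NFS_EXPORT, LABEL],
               check=True)

# List the configured NAS datastores to confirm the mount took.
subprocess.run(["esxcfg-nas", "-l"], check=True)
```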
It would help to know what type of storage you are using, as different storage platforms have different features and options that can be leveraged in a VMware environment.