I've recently implemented an EqualLogic PS4000XV, connected via 2 x PowerConnect 6224 switches to a pair of R710 ESXi hosts running off SD cards.
It performs very well for HA, vMotion, etc. You can get MPIO working between the SAN and the hosts fairly easily (just a bit of ESXi remote CLI futzing - see the sketch below).
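For reference, a minimal sketch of the 4.x-era binding steps. The vmk1/vmk2 VMkernel ports, the vmhba33 software iSCSI adapter, and the truncated naa device ID are all placeholders - substitute your own:
# esxcli swiscsi nic add -n vmk1 -d vmhba33 (bind the first iSCSI VMkernel port to the software initiator)
# esxcli swiscsi nic add -n vmk2 -d vmhba33 (bind the second)
# esxcli swiscsi nic list -d vmhba33 (verify both bindings took)
# esxcli nmp device setpolicy --device naa.6090a0... --psp VMW_PSP_RR (set Round Robin pathing on the EqualLogic volume)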
The only drawback of the PS4000XV that I've seen so far is the storage processor setup - you have a pair of SPs in the unit, but only one is active at a time. The other sits there with its ports offline, so you lose some potential performance: you're only ever talking over a maximum of 2 or 3 interfaces. If you fail over to the other SP there's a time lag while it negotiates its ports and spins up. I'm assuming the higher-spec PS units don't have this limitation and can provide an active/active SP configuration.
Dell's recommendation for the SAN is to configure it as a single RAID 50 array, so you end up with 6.2 TB usable.
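(If you're wondering where that figure comes from: assuming the 16 x 600 GB drive configuration with two hot spares, RAID 50 built as two 7-drive RAID 5 sets leaves 12 data drives, i.e. 12 x 600 GB = 7.2 TB raw, and binary/decimal conversion plus array metadata bring that down to roughly the 6.2 TB usable quoted.)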
Your expansion option with this model is basically 'buy another unit'. However, you just plug it into the same storage fabric (switches), and it can all be managed through a single 'group' IP and console... so there's not much management overhead.
Regarding HA in VMware - when you enable it, it applies to a single VM; you don't tie two VMs together. (Strictly speaking, the lock-step behaviour - the VM running across 2 hosts at once, with memory synced between the hosts so that if one host fails the other instantly picks up the slack - is VMware Fault Tolerance; plain HA just restarts the VM on a surviving host.) Either way, that method doesn't fit with 'use the dev VM as DR with HA'.
If you're planning to literally use your dev VM as a DR fail-over for the production VM, please think very carefully about the practicalities of that. Your DR configuration needs to be able to spin up as a production service with as little time lag as possible (that's the whole point of DR), so how close to production will the dev box actually be at any given point in time?
EDIT
To enable vMotion on ESX, the CPUs in your host servers must be compatible - in practice, the same vendor and a near-identical feature set (Enhanced vMotion Compatibility/EVC can mask some generational differences within a vendor). If you run incompatible CPUs, you won't be able to vMotion/cluster/HA between the hosts. VMware and the hardware vendors publish compatibility matrices you can use to verify this.
I had this same issue, but only for our replicated LUNs in the cluster. We were migrating from 4.0u1 to 4.1u1. The solution was simply to log into each host and run these commands:
# esxcfg-volume -l (lists the VMFS volumes/snapshots the host can see)
# esxcfg-volume -m "vmfs_label_name" (mounts the volume by its label)
Then go back to the VI client and refresh Storage - the datastore should be in the inventory.
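One caveat, if memory serves: esxcfg-volume -m only mounts the volume until the next reboot. If you want the mount to persist across reboots, there's a capital-M variant:
# esxcfg-volume -M "vmfs_label_name" (persistent mount)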
I'm similarly not entirely comfortable with the solution - it's kinda weird - but I thought I'd share.
Best Answer
I reckon you hit a bug.
See this article http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2013128
To work around this issue:

1. In the ESXi command line, run this command (per the KB, it loads the missing migrate VMkernel module):

# vmkload_mod migrate

2. Connect to vCenter Server through the vSphere Client.
3. Select the ESXi host and click the Configuration tab.
4. Click Software > Advanced Settings.
5. You'll see that Migrate.Enabled is set to 0 because the module was not loaded earlier. Set Migrate.Enabled to 1 and click OK.

You should now be able to vMotion or add a network card to the virtual machine.
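If you'd rather do the whole thing from the command line, the same check/change can be made with the standard ESXi tools (a sketch, assuming 4.x syntax):
# vmkload_mod -l | grep migrate (confirm the migrate module is now loaded)
# esxcfg-advcfg -g /Migrate/Enabled (show the current value)
# esxcfg-advcfg -s 1 /Migrate/Enabled (set it to 1 without the vSphere Client)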