I've recently implemented an EqualLogic PS4000XV, connected via 2 x PowerConnect 6224 switches to a pair of R710 ESXi hosts running off SD cards.
It performs very well for HA, vMotion etc. You can get MPIO working between the SAN and hosts fairly easily (just a bit of ESXi remote CLI futzing).
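For reference, on vSphere 4-era hosts that MPIO step boils down to binding your iSCSI VMkernel ports to the software iSCSI adapter. A minimal sketch, assuming two VMkernel ports vmk1/vmk2 and the software iSCSI adapter at vmhba33 (your names will differ):

```shell
# Bind each iSCSI VMkernel port to the software iSCSI adapter (ESX/ESXi 4.x syntax)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Confirm both NICs are bound - each gives you an independent path to the SAN
esxcli swiscsi nic list -d vmhba33
```

After a storage rescan, set the path selection policy on the EqualLogic volumes to Round Robin so I/O actually gets spread across the bound ports.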
The only drawback of the PS4000XV that I've seen so far is the storage processor setup - you have a pair of SPs in the unit, but only one is active at a time. The other sits there with its ports offline, so you lose some potential performance: you're only ever talking over a maximum of 2 or 3 interfaces. If you fail over to the other SP there's a time lag while it negotiates its ports and spins up. I'm assuming the higher-spec PS units don't have this limitation and can run an active/active SP configuration.
Dell's recommendation for the SAN is to configure it as a single RAID 50 array, so you end up with 6.2 TB usable.
Your expansion option with this model is basically 'buy another unit'. However, you just plug it into the same storage fabric (switches), and it can all be managed through a single 'group' IP and console... so not much management overhead.
Regarding HA in VMware - HA works per-VM, and you don't tie two VMs together; if a host dies, HA simply restarts the affected VMs on a surviving host. The 'lock-step' behaviour is actually Fault Tolerance (FT): the VM runs on two hosts at once with execution and memory kept in sync, so if one host fails the other instantly picks up the slack. Either way, neither feature fits 'use the dev VM as DR with HA'.
If you're planning to literally use your dev VM as a DR fail-over for the production VM, please think very carefully about the practicalities of that. Your DR configuration needs to be able to spin up as a production service with as little time lag as possible (that's the whole point of DR), so how close to production is the dev box going to be at any given point in time?
EDIT
To enable vMotion on ESXi, you must ensure the CPUs in your host servers are near-identical (or masked down to a common baseline with EVC - Enhanced vMotion Compatibility). If you run incompatible CPUs, you won't be able to vMotion/cluster/HA between the hosts. VMware and the hardware vendors publish compatibility matrices to verify this.
Best Answer
The SAN itself doesn't need an Ethernet connection in this case; all it needs is FC to the ESXi hosts. The ESXi hosts, however, will need gigabit Ethernet. During vMotion, the source ESXi host starts sending machine state to the target host over the VMkernel connection on the virtual switch - that's effectively a connection between the two ESXi hosts across your physical Ethernet switch. Once the state is fully transferred, control of the VMDK files is handed over to the target server and the VM goes live there. So vMotion requires both FC and Ethernet.
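That VMkernel connection has to exist on both hosts before vMotion will work. A rough sketch of setting one up from the ESXi CLI - the vSwitch name, port-group name, and IP here are hypothetical:

```shell
# Create a vSwitch and a port group dedicated to vMotion traffic
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A VMotion vSwitch1

# Add a VMkernel NIC on that port group (repeat on the other host with its own IP)
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 VMotion
```

You still need to mark the port group for vMotion (the 'Use this port group for vMotion' tickbox in the vSphere Client, or `vim-cmd hostsvc/vmotion/vnic_set`), and the two VMkernel IPs must be able to reach each other.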
HA (what I presume you meant by HV) has the same prerequisites as vMotion - shared storage visible to both hosts and a working network between them - so if vMotion is working, HA should be available too. (Strictly speaking, HA restarts VMs rather than live-migrating them, so it doesn't depend on vMotion itself.)
Unless you meant Hardware Virtualization, or direct LUN presentation. That can also work, but is trickier: the same volume needs to be presented to both ESXi hosts using exactly the same LUN number. If the LUN numbers don't match, the volume won't be visible when the VM moves from one ESXi host to the other.
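An easy sanity check is to compare the LUN numbering on each host. Run something like the following on both hosts and make sure the `L` numbers in the runtime names (e.g. vmhba1:C0:T0:L3) line up for the shared volume:

```shell
# Compact list of storage paths - the runtime name ends in the LUN number
esxcfg-mpath -b

# Map devices to their device/VMFS names for cross-checking between hosts
esxcfg-scsidevs -c
```

If the same volume shows up as, say, L3 on one host and L5 on the other, fix the zoning/presentation on the array side before trying to share it.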