Let's try something a bit more involved: test with another iSCSI solution to determine whether the problem is ESX or the iSCSI setup itself.
I'd recommend StarWind. You can download a trial there.
I've recently implemented an EqualLogic PS4000XV, connected via 2 x PowerConnect 6224 switches to a pair of R710 ESXi hosts running off SD cards.
Performs very well for HA, vMotion etc. You can get MPIO working between the SAN and the hosts fairly easily (just a bit of ESXi remote CLI futzing).
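For reference, the MPIO setup on ESX/i 4.x boils down to binding your iSCSI vmkernel ports to the software iSCSI adapter. A rough sketch of the commands involved — the interface and adapter names (vmk1, vmk2, vmhba33) are examples, so check yours with `esxcfg-vmknic -l` and `esxcfg-scsidevs -a` first:

```shell
# Bind each iSCSI vmkernel port to the software iSCSI adapter
# (names below are placeholders; substitute your own).
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Confirm the bindings took:
esxcli swiscsi nic list -d vmhba33
```

After that, set the path selection policy to round robin on the EqualLogic LUNs and you should see traffic across both NICs.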
The only drawback of the PS4000XV that I've seen so far is the storage processor setup - you have a pair of SPs in the unit but only one is active at a time. The other sits there with its ports offline, so you're losing some potential performance, as you're talking via a maximum of 2 or 3 interfaces. If you fail over to the other SP there's a time lag while it negotiates its ports and spins up. I'm assuming the higher-spec PS units don't have this limitation and can provide an active/active SP configuration.
Dell's recommendation for the SAN is to configure it as a single RAID 50 array, which gives you 6.2 TB usable.
Your expansion option with this model is basically 'buy another unit'. However, you just plug the new unit into the same storage fabric (switches), and everything can be managed through a single 'group' IP and console... so not much management overhead.
Regarding HA in VMware - it's enabled per VM; you don't tie two VMs together. (The lock-step behaviour you may be thinking of - the VM running on two hosts at once with memory synced, so the second host instantly picks up the slack if the first fails - is actually Fault Tolerance, not HA; plain HA simply restarts the VM on a surviving host.) Either way, neither feature fits 'use the dev VM as DR with HA'.
If you're planning to literally use your dev VM as a DR fail-over for the production VM, please think very carefully about the practicalities. Your DR configuration needs to be able to spin up as a production service with as little time lag as possible (that's the whole point of DR), so how close to production will the dev box actually be at any given point in time?
EDIT
To enable vMotion on ESX, you must ensure the CPUs in your host servers are near-identical. If you run incompatible CPUs, you won't be able to vMotion/cluster/HA between the hosts. VMware and the hardware vendors publish compatibility matrices to verify this.
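As a quick sanity check alongside the compatibility matrices, you can compare the CPU feature flags each host exposes (e.g. from /proc/cpuinfo in a Linux guest on each host, or the host's own CPU info). A minimal sketch - the flag lists here are made-up examples, not real host output:

```shell
# Illustrative flag strings; in practice pull these from each host,
# e.g. grep ^flags /proc/cpuinfo | head -1
host1_flags="fpu vme sse sse2 ssse3 sse4_1 vmx"
host2_flags="fpu vme sse sse2 ssse3 vmx"

# Any flag present on host1 but missing on host2 can block vMotion.
missing=$(for f in $host1_flags; do
  echo " $host2_flags " | grep -q " $f " || echo "$f"
done)

if [ -z "$missing" ]; then
  echo "CPU feature sets match"
else
  echo "Mismatch, host2 lacks: $missing"
fi
```

This is only a rough screen; EVC baselines and the vendor matrices remain the authoritative check.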
Best Answer
My google-fu was apparently lacking today. I've found this in the meantime: http://viops.vmware.com/home/docs/DOC-1407
Per that document, issuing fdisk -lu on your ESX host will give you the information: partitions with a start sector of 128 are correctly aligned, while those that start at 63 are misaligned.
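If you have a lot of LUNs to check, something like the following can flag the misaligned ones. The sample output lines here are illustrative, not from a real host, and the column layout varies between fdisk versions, so adjust the field parsing as needed:

```shell
# Two made-up partition lines in "device start end blocks type" order:
fdisk_output='/dev/sda1 128 204927 102400 fb
/dev/sda2 63 204863 102400 fb'

echo "$fdisk_output" | while read dev start rest; do
  # A start sector divisible by 128 (64 KB in from the disk start at
  # 512-byte sectors) is aligned; the classic DOS-style 63 is not.
  if [ $((start % 128)) -eq 0 ]; then
    echo "$dev aligned (start $start)"
  else
    echo "$dev MISALIGNED (start $start)"
  fi
done
```

On a real host you'd feed it `fdisk -lu /dev/sdX` output instead of the canned variable.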
Thanks!