Nicely structured approach and you're asking all the right questions. Your suggested redesign is excellent.
ESX 3.5 doesn't really do iSCSI Software Initiator multipathing, but it will happily fail over to another active or standby uplink on the vSwitch if a link fails for any reason. The VI3.5 iSCSI SAN Configuration Guide has some information on this; not as much as I'd like, but it is clear enough. You shouldn't have to do anything on the ESX side when you change over, but you will no longer get any link aggregation effects (because your uplinks are going to two separate, non-stacked switches), only failover. Given the weakness of multipathing in the ESX 3.5 iSCSI stack this probably won't have any material effect, but it might because you have multiple iSCSI targets, so bear it in mind. I'm sure you know this already, but Jumbo frames are not supported with the Software Initiator on ESX 3.5, so that's not going to do anything for you until you move to ESX 4.
In setting up the ESX vSwitch and VMkernel ports for iSCSI with ESX 4, the recommendation is to create multiple VMkernel ports with a 1:1 mapping to uplink physical NICs. You can create multiple vSwitches for this, or you can use the NIC teaming options at the port level so that each VMkernel port has a single NIC designated as active with one or more as standby. Once you have the ports\vSwitch configured you then need to bind the ports to the iSCSI multipath stack, which will then handle both multipathing and failover more efficiently. Given the way this works there is no need to worry about teaming across the switches; the multipath driver is doing the work at the IP layer. This is just a quick idea of how it works; it is described in very good detail in the VI 4 iSCSI SAN Configuration Guide. That will explain everything you need to do, including how to set up Jumbo frame support properly.
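As a rough sketch of what that setup looks like from the ESX 4 service console (all the names here - vSwitch1, vmnic1/vmnic2, the IP addresses, and the vmhba33 adapter - are examples, not anything from your environment):

```shell
# Create a vSwitch with jumbo frames enabled and two uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# One VMkernel port group per physical NIC (1:1 mapping)
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vswitch -A iSCSI2 vSwitch1
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 -m 9000 iSCSI1
esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 -m 9000 iSCSI2

# After overriding NIC teaming per port group (one active uplink each,
# done in the VI client), bind each VMkernel port to the software iSCSI
# adapter. Check your adapter name with: esxcfg-scsidevs -a
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33
```

The SAN Configuration Guide walks through the same steps via the VI client; the CLI version is just more convenient to script.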
As far as the stacking is concerned, I don't think you need or want it for this config; in fact Dell's recommended design for MD3000i iSCSI environments is not to stack the switches, as far as I can recall, for precisely the reason you mention. For other iSCSI solutions (Equallogic) high-bandwidth links between arrays are required, so stacking is recommended by Dell, but I've never had a satisfactory explanation of what happens when the master fails. I'm pretty sure the outage during the new master election will be shorter than the iSCSI timeouts, so VMs shouldn't fail, but it's not something I'm comfortable with, and things will definitely stall for an uncomfortable period of time.
There has been some debate in the comments to Chopper3's answer that is not well informed, because some aspects of Equallogic's networking requirements and multipathing behaviour are poorly understood.
First the VMware side:
For starters, on the ESXi side the current recommendation from VMware (for ESX\ESXi 4.1) and Dell, when using the iSCSI Software Initiator, is that you should have a single physical NIC mapped to each VMkernel port that will be used for iSCSI. The binding process that is now recommended enforces this: it requires that you have only one active physical NIC and no standby NICs for each VMkernel port. No bonding allowed. Now you can cheat this and go back afterwards and add a failover NIC, but the intention is that MPIO will handle the failover, so this serves no useful purpose (at least when everything is working as intended by VMware).
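You can see that enforcement from the command line; the bind step only succeeds against a compliant port (vmk1 and vmhba33 below are example names, check yours with esxcfg-scsidevs -a):

```shell
# Binding only succeeds for a "compliant" VMkernel port: exactly one
# active uplink and no standby uplinks on its port group
esxcli swiscsi nic add -n vmk1 -d vmhba33

# Verify which VMkernel ports are currently bound to the software initiator
esxcli swiscsi nic list -d vmhba33
```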
The default multipathing policy will allow active\active connections to an Equallogic array using round robin.
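If round robin isn't already set on a volume, you can switch the path selection policy per device (the device id below is a placeholder; list your real ones first):

```shell
# Find the naa identifiers for your Equallogic volumes
esxcli nmp device list

# Set Round Robin as the path selection policy for one device
# (naa.xxxxxxxxxxxxxxxx is a placeholder, substitute your own id)
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```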
Second the Equallogic side:
Equallogic arrays have dual controllers that act in an active\standby mode. For the PS4000 there are two Gigabit NICs on each controller. For the active controller both of these NICs are active and can receive IO from the same source. The network configuration recommendation is that the array's NICs should be connected to separate switches. From the server side you have multiple links that should also be distributed to separate switches. Now for the odd part: Equallogic arrays expect that all initiator ports can see all active ports on the arrays. This is one of the reasons you need a trunk between the two switches. That means that with a host with two VMkernel iSCSI ports and a single PS4000 there are 4 active paths between the initiator and the target - two are "direct" and two traverse the ISL.
For the standby controller's connections the same rules apply, but its NICs will only become active after a controller failover. After a failover in this environment there will still be four active paths.
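The path arithmetic above is just initiator ports times reachable target ports; a quick sanity check with the PS4000 numbers:

```shell
# Every initiator port can reach every active target port
initiator_ports=2   # VMkernel iSCSI ports on the host
target_ports=2      # active NICs on the PS4000 controller
paths=$((initiator_ports * target_ports))
echo "$paths active paths"   # two direct, two across the ISL
```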
Third for more advanced multipathing:
Equallogic now has a Multipathing Extension Module that plugs into the VMware Pluggable Storage Architecture and provides intelligent load balancing (using least queue depth, round robin or MRU) across VMkernel ports. This will not work if all VMkernel uplink NICs are not able to connect to all active Equallogic ports. It also ensures that the number of paths actually used remains reasonable - in large Equallogic environments the number of valid paths between a host and an Equallogic group can be very high, because all target NICs are active and all source NICs can see all target NICs.
Fourth for larger Equallogic Environments:
As you scale up an Equallogic environment you add additional arrays into a shared group. All active ports on all member arrays in a group must be able to see all other active ports on all other arrays in the same group. This is a further reason why you need fat pipes providing inter-switch connections between all switches in your Equallogic iSCSI fabric. This scaling also dramatically increases the number of valid active paths between initiators and targets. With an Equallogic group consisting of 3 PS6000 arrays (four NICs per controller vs 2 for the PS4000) and an ESX host with two VMkernel ports, there will be 24 possible active paths for the MPIO stack to choose from.
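The 24-path figure comes straight from the same multiplication as before, just with the group-level numbers:

```shell
# 3 PS6000 members, 4 active NICs per controller, 2 VMkernel ports on the host
arrays=3
nics_per_controller=4
vmk_ports=2
target_ports=$((arrays * nics_per_controller))
paths=$((vmk_ports * target_ports))
echo "$paths possible active paths"
```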
Fifth Bonding\link aggregation and Inter Switch links in an Equallogic Environment:
All of the inter-array and initiator<->array connections are single point-to-point Gigabit connections (or 10Gig if you have a 10Gig array). There is no need for, and no benefit to be gained from, bonding on the ESX server side, and you cannot bond the ports on the Equallogic arrays. The only area where link aggregation\bonding\whatever-you-want-to-call-it is relevant in an Equallogic switched Ethernet fabric is on the inter-switch links. Those links need to be able to carry concurrent streams that can equal the total number of active Equallogic ports in your environment - you may need a lot of aggregate bandwidth there even if each point-to-point link between array ports and initiator ports is limited to 1Gbps.
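A rough worst-case sizing rule that follows from this: budget one full stream per active array port across the ISL (the numbers below reuse the hypothetical 3x PS6000 group from earlier):

```shell
# Worst case, every active array port pushes a full stream over the ISL
active_array_ports=12   # e.g. 3 PS6000 members x 4 active NICs each
link_speed_gbps=1
isl_gbps=$((active_array_ports * link_speed_gbps))
echo "size the inter-switch links for up to ${isl_gbps} Gbps aggregate"
```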
Finally:
In an Equallogic environment traffic from a host (initiator) to an array can and will traverse the interswitch link. Whether a particular path does so depends on the source and destination ip-address for that particular path but each source port can connect to each target port and at least one of those paths will require traversing the ISL. In smaller environments (like this one) all of those paths will be used and active. In larger environments only a subset of possible paths are used but the same distribution will happen. The aggregate iSCSI bandwidth available to a host (if properly configured) is the sum of all of its iSCSI vmkernel port bandwidth, even if you are connecting to a single array and a single volume. How efficient that may be is another issue and this answer is already far too long.
Best Answer
Here are two possible ways to accomplish this:
separate NICs on the VM Host for the iSCSI traffic and other IP traffic
combined NICs on the VM Host for both iSCSI and other IP traffic
If you're concerned about mistakes or losing access to the service, just move one NIC at a time. You do have multipathing set up, right?