Dell MD3220i Multipathing

dell iscsi storage-area-network

Our company recently purchased a Dell MD3220i with 24x 300GB SAS 10K disks for a project. To my great horror, the management tool does not seem to allow any Ethernet bonding of the 2×4 ports, even though it was a requirement. How did you overcome this? Multipathing? Use the 6Gbit SAS port? Storage virtualization? I'd really like to just use bonding and have redundancy without the OS handling this… 🙁

Any hints?

Best Answer

This is simpler than you think.

That box has two controllers, each with 4 x 1Gbps ports. You're right that it doesn't appear to support any form of LACP or EtherChannel bonding/trunking, but there's still plenty of scope for not only multipathing but load-balancing as well.

Essentially all you need to do is run two links from each controller to one switch and the other two links to a second switch. This way each controller has 2Gbps going to one switch and 2Gbps going to the other.
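
Roughly like this, in ASCII terms (the port numbering is just illustrative, check your own unit's labelling):

    MD3220i controller 0, ports 0+1 ---> Switch A
    MD3220i controller 0, ports 2+3 ---> Switch B
    MD3220i controller 1, ports 0+1 ---> Switch A
    MD3220i controller 1, ports 2+3 ---> Switch B

    Server NIC 1 ---> Switch A
    Server NIC 2 ---> Switch B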

Now you don't mention what OSes you're dealing with here, but with both W2K8 (with the MPIO/DSM feature installed) and VMware's ESX/ESXi you simply need to define the iSCSI connections from the server's NICs (ideally two or more, each going to a different switch) to at least two of the SAN ports, with at least one per controller. I'm sure it's much the same with Linux, but I've less experience with iSCSI there, sorry.
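
That said, for anyone on Linux, a minimal sketch with open-iscsi would look something like the Python below: it just drives iscsiadm to discover and log in to one portal per controller, so dm-multipath ends up with independent paths. The portal IPs and the IQN format in the comment are hypothetical; substitute whatever addresses you assigned to the array's iSCSI ports in Modular Disk Storage Manager, and treat this as a sketch rather than gospel given my limited Linux iSCSI mileage.

    import subprocess

    # Hypothetical portal addresses: one iSCSI port on each controller.
    # Replace with the addresses you configured on the array.
    PORTALS = ["192.168.130.101", "192.168.131.101"]

    def run(cmd):
        # Echo each command so the session setup is auditable.
        print("+", " ".join(cmd))
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return result.stdout

    for portal in PORTALS:
        # SendTargets discovery: ask the portal which targets it offers.
        out = run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal])
        for line in out.splitlines():
            # Lines look like "192.168.130.101:3260,1 iqn.1984-05.com.dell:...".
            addr, iqn = line.split()
            addr = addr.split(",")[0]  # drop the portal group tag
            # Log in over this specific portal; doing this once per controller
            # gives dm-multipath independent paths to each LUN.
            run(["iscsiadm", "-m", "node", "-T", iqn, "-p", addr, "--login"])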

This way you get resilience against SAS-path, controller, port, cable, switch and server-NIC failure; the servers will still have a valid path in any of these single-failure scenarios. Plus, if you look at the path-management policies available, you should have a number of them to choose from, including round-robin, which may be able to give you more than 1Gbps of overall throughput.
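
And to check you actually have that resilience once everything is logged in, a rough sketch like this (again assuming a Linux box where dm-multipath is managing the LUNs) counts the live paths per multipath device by parsing multipath -ll. With the cabling above, each LUN should report four paths (two server NICs x two controllers), and pulling a cable or downing a switch should only ever cost you some of them:

    import re
    import subprocess

    out = subprocess.run(["multipath", "-ll"],
                         check=True, capture_output=True, text=True).stdout

    paths = {}
    current = None
    for line in out.splitlines():
        # Top-level map lines look like: "mpatha (36...) dm-2 DELL,MD32xxi"
        if not line.startswith((" ", "|", "`")) and " dm-" in line:
            current = line.split()[0]
            paths[current] = 0
        # Path lines carry a SCSI H:C:T:L tuple,
        # e.g. "| |- 3:0:0:0 sdb 8:16 active ready running"
        elif current is not None and re.search(r"\b\d+:\d+:\d+:\d+\b", line):
            paths[current] += 1

    for lun, count in paths.items():
        print(f"{lun}: {count} path(s)")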

Hope this makes sense and is of help. Oh, and the SAS port is really there for expansion shelves; it's a bit of a red herring in this situation. I'll try to add a Visio diagram later if I can.
