VMware vSphere 5: 4 pNICs for iSCSI vs. 2 pNICs

Tags: iscsi, nic, vmware-vsphere

The SAN is new to me: an IBM DS3512 with dual controllers and a quad-port 1GbE NIC per controller, which a client bought and needs help setting up.

The hosts (x2) have 8 pNICs each. I usually reserve 2 pNICs per host for iSCSI (and 2 for VM traffic, 2 for management, 2 for vMotion, staggered across adapters), but the extra ports on the SAN have me wondering whether storage I/O would improve significantly with 2 additional iSCSI NICs per host, or whether vmkernel/initiator limitations would keep the additional paths from ever being used.

I'm not seeing a lot of 4-pNIC iSCSI implementations per host; 2 seems to be the de facto standard from what I've read and seen online. I could, and probably will, do some I/O testing, but I'm wondering whether there's a "wall" that someone else discovered long ago (i.e. before 10GbE) that makes a 4-NIC iSCSI setup per host somewhat pointless.

Just to clarify: I'm not looking for a how-to, but an explanation (a link to a paper, a VMware recommendation, a benchmark, etc.) of why 2-NIC configurations are the norm versus 4-NIC iSCSI configurations: storage vendor limitations, vmkernel/initiator limitations, etc.

Best Answer

If I were in your position, I'd assess whether your I/O needs exceed the bandwidth and path selection offered by two physical NICs on 1GbE iSCSI. Honestly, I use 10GbE more than anything nowadays, but with a proper MPIO configuration there's no harm in adding additional iSCSI ports.
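For the extra ports to show up as extra paths at all, each iSCSI vmkernel port needs to be bound to the software iSCSI adapter (with a 1:1 vmk-to-pNIC uplink mapping); each bound port then logs into every reachable target portal, so paths multiply quickly. Here's a minimal sketch from the ESXi shell, assuming a software iSCSI adapter at vmhba33 and vmkernel ports vmk2 through vmk5 (both are placeholders for whatever your hosts actually show):

    # Bind each iSCSI vmkernel port to the software iSCSI adapter.
    # vmhba33 and vmk2-vmk5 are placeholders; check your own names with
    # "esxcli iscsi adapter list" and "esxcli network ip interface list".
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk4
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk5

    # Verify the bindings took
    esxcli iscsi networkportal list --adapter vmhba33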

What's your VMware license level? If you're not using DRS, for instance, some of this may be moot.

As for making your multi-path iSCSI more effective, you'll want to change the path selection policy to Round Robin and lower the number of I/O operations before switching paths from the default of 1000 to 1. That's what I use for HP and other SAN solutions.
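A minimal sketch of both changes from the ESXi shell; the naa device ID is a placeholder, and you'd repeat this for each LUN (list them with "esxcli storage nmp device list"):

    # Set the path selection policy to Round Robin for one device
    # (naa.xxxxxxxxxxxxxxxx is a placeholder for your LUN's identifier)
    esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

    # Switch paths after every single I/O instead of the default 1000
    esxcli storage nmp psp roundrobin deviceconfig set \
        --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1

    # Confirm both settings
    esxcli storage nmp device list --device naa.xxxxxxxxxxxxxxxx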

IBM also publishes a DS3512-specific implementation guide worth following.
