ESXi: Storage Adapter Paths – Active/Standby to Active/Active

storage, vmware-esxi

We have a couple of servers acting as VM hosts (ESXi v5.1) and they are connected to a Dell MD3220 SAS Array (2 cables each, split across 2 controllers).

We've just noticed that multipath wasn't working, and digging deeper it turns out we'd not mapped the second Port Identifier in each server's HBA card to a Host in the array management software (MDSM).

Mapping these Port Identifiers to a host has enabled a second path in ESXi. However, this current configuration differs slightly from a near-identical system that we have already configured. In the reference system, the two paths are marked as Active (I/O) and Active, whereas in the current system the paths are described as Active (I/O) and Standby.

We've tested this current configuration and it works fine – the standby path is switched to Active (I/O) as soon as the original path is broken.

However, I'm curious to know the difference in behaviour and how to make the systems match.

FYI: The original system only differs from the current system in that the server model is slightly different; both are connecting to identical arrays and are running identical versions of ESXi v5.1. The only difference (I assume) is the configuration.

Best Answer

The difference is that with Active/Active both paths carry live traffic, so you effectively have twice the bandwidth to your array. With Active/Standby (active/passive), only one link carries live traffic, with the other ready to take over as required.

This may well be a simple path-policy issue. Look at a single datastore's configuration; in its path details you'll see the path selection policy. I tend to use round robin (VMW_PSP_RR), but the right choice may be slightly different for your storage (I'm no Dell expert, sorry). If you wish to set this as a new default, take a look at THIS link.
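As a rough sketch, you can inspect and change the policy from the ESXi shell with esxcli. The `naa.xxxx` device ID is a placeholder for your LUN's identifier, and I'm assuming the MD3220 is claimed by the `VMW_SATP_LSI` plugin — confirm the actual SATP in the `device list` output before changing any defaults:

```shell
# Show each device's current SATP (Storage Array Type Plugin)
# and PSP (Path Selection Policy)
esxcli storage nmp device list

# List the individual paths for one device (naa ID is a placeholder)
esxcli storage nmp path list --device naa.xxxx

# Switch a single device to round robin
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR

# Make round robin the default PSP for all devices claimed by this SATP
# (VMW_SATP_LSI is an assumption -- use whatever the device list reported)
esxcli storage nmp satp set --satp VMW_SATP_LSI --default-psp VMW_PSP_RR
```

Note that round robin only spreads I/O across paths the array reports as active; if the array itself presents the second controller's path as standby for that LUN, the policy change alone won't turn it into Active (I/O).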
