Linux Multipath – Can a Host connect to two different SANs

iscsi · multipath · storage-area-network

I may need to power-cycle a SAN and I'm looking to avoid downtime. I have some options to essentially copy live LUNs from one SAN to another. Currently I have a single iSCSI SAN, and I connect to it over iSCSI (of course) with multipathd.

I'm considering buying a duplicate SAN, and I would like to connect to it in the same way. The way I understand the multipath.conf directives, my devices {} section acts as a filter of sorts; in this case it would expose both SANs to the host.

I figure that the WWID presented to the host is enough to globally distinguish LUNs/paths, but I've never had to do this before.

My basic question is:

  1. Is it even possible to connect multiple SANs to a host without it having a fit (I suspect it is fine at the iSCSI level, before multipath gets involved)?

Best Answer

Yes, you can connect multiple storage arrays to the same iSCSI host. If you are using stock iSCSI targets, you might get by without touching multipath.conf at all. You do need to edit it if an array requires a specific path checker or prioritizer.
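If an array does need its own settings, they go in a devices {} section keyed on the vendor/product strings the array reports. A minimal sketch of multipath.conf, with VENDORA/MODELA as placeholder strings (substitute what `multipath -ll` shows for your array):

```
devices {
    device {
        vendor       "VENDORA"    # placeholder; match your array's SCSI vendor string
        product      "MODELA"     # placeholder; match the reported model
        path_checker tur          # example: Test Unit Ready checker
        prio         alua         # example: ALUA-based path prioritizer
    }
}
```

A second array with different requirements simply gets a second device {} entry; entries only apply to paths whose vendor/product match, so the two arrays do not interfere with each other.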

If you have multipath running, the target is mounted through its device-mapper (DM) device name (/dev/mapper/[WWID]_p1 or similar), and you have enough RAM and a window of low filesystem load, you could theoretically survive power cycling the array without any downtime. This should be tested in advance, though.
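Surviving the power cycle depends on multipath queueing I/O while all paths are down rather than failing it back to the filesystem. A sketch of the relevant multipath.conf setting, assuming you want indefinite queueing (note this also means I/O hangs forever if the array never comes back):

```
defaults {
    # Queue I/O when no path is available instead of returning errors.
    # "queue" queues indefinitely; a number would mean that many retries.
    no_path_retry queue
}
```

The first example output below shows the equivalent per-device state as features=1 queue_if_no_path; the second array in that output (features=0) would return I/O errors as soon as its last path failed.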

If you replicate your data to a separate array, you will need to tell your applications when to switch from one array to the other. Multipath does not do that for you; instead, it manages paths within networks with exactly one source (LUN) and one sink (host). It can manage multiple such networks within one host, but there is no balancing between them.

Here is a case with two arrays connected to one host, with one path each:

# multipath -ll
[wwid1] dm-2 [VENDOR],[MODEL]
[size=14T][features=1 queue_if_no_path][hwhandler=1 emc][rw]
\_ round-robin 0 [prio=4][active]
 \_ 3:0:0:1 sdc        8:32  [active][ready] 
[wwid2] dm-0 [VENDOR],[MODEL]
[size=11T][features=0][hwhandler=0][rw]
\_ round-robin 0 [prio=1][active]
 \_ 2:0:0:0 sda        8:0   [active][ready] 

You should access the volumes through /dev/mapper/wwid1 and /dev/mapper/wwid2 to get MPIO involved; the underlying /dev/sdX nodes bypass the multipath layer entirely.
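The same rule applies to persistent mounts. A hypothetical /etc/fstab sketch using the placeholder names from the output above (wwid1/wwid2 and the mount points are illustrative; _netdev is there because iSCSI devices only appear after the network is up):

```
# /etc/fstab -- mount the multipath devices, never the raw /dev/sdX paths
/dev/mapper/wwid1   /mnt/array1   xfs   defaults,_netdev   0 0
/dev/mapper/wwid2   /mnt/array2   xfs   defaults,_netdev   0 0
```

Mounting /dev/sda or /dev/sdc directly would work until that one path failed, at which point the failover multipath provides would never happen.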

Here is another example, where the array has two controllers, so the host can be connected to the array over two paths:

# multipath -ll
mpathb ([WWID]) dm-0 [VENDOR],[MODEL]
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=30 status=active
  |- 20:0:0:0 sdc 8:32 active ready running
  `- 19:0:0:0 sdb 8:16 active ready running

The volume can be accessed via /dev/mapper/mpathb, since user_friendly_names is set to yes in multipath.conf.
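For reference, a sketch of how that naming is typically configured; the WWID below is a placeholder, and the multipaths {} section is optional (without it, multipath assigns mpatha, mpathb, … automatically):

```
defaults {
    user_friendly_names yes
}

multipaths {
    multipath {
        wwid  [WWID]      # placeholder; the real WWID from `multipath -ll`
        alias mpathb      # optional: pin a stable, human-chosen name
    }
}
```

Pinning aliases this way keeps names stable across reboots and hosts, which matters once two arrays are involved and mpatha/mpathb could otherwise swap depending on discovery order.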