Your configuration looks odd; normally you'd have 4 paths to the same device (that is, 4 /dev/sdX devices per multipath device). The array controller is typically able to tell the host the priority of each path, so you end up with 2 higher-priority paths and 2 lower-priority ones, and dm-multipath then multiplexes IO over the 2 high-priority paths (the "selector" option, with the default rr_min_io=100). In your case you have 2 path groups with the same priority, so dm-multipath may be spreading IO over both of them, which might not be what your SAN admin wants. Another oddity is that the devices are marked "undef" rather than "ready". Yet another strange thing is the path numbering: everything seems to go along the same path. Are you really sure everything is properly cabled, properly zoned, etc.?
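To get a single preferred path group like the one described above, dm-multipath can be told to group paths by the array-reported priority. Here is a minimal, illustrative /etc/multipath.conf sketch; the option values are examples rather than tuned recommendations, and your array vendor's documented settings should take precedence:

```
# /etc/multipath.conf -- illustrative sketch only, not array-specific advice
defaults {
    path_grouping_policy  group_by_prio   # one path group per priority level
    prio                  alua            # ask the array which controller owns the LUN
    failback              immediate       # return to the preferred group when it recovers
    rr_min_io             100             # IOs per path before round-robining to the next
}
```

With group_by_prio, the two paths to the owning controller form the active group and the other two form a standby group, instead of IO being spread across all four.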
A typical output from "multipath -ll" should look like
sanarch3 (3600508b4000683de0000c00000a20000) dm-6 HP,HSV200
[size=2.0T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=100][active]
 \_ 0:0:0:5 sdc 8:32  [active][ready]
 \_ 1:0:0:5 sdk 8:160 [active][ready]
\_ round-robin 0 [prio=20][enabled]
 \_ 0:0:1:5 sdg 8:96  [active][ready]
 \_ 1:0:1:5 sdo 8:224 [active][ready]
There you see 4 paths grouped into 2 priority groups: IO goes over devices sdc and sdk, while sdg and sdo sit idle and are used only if the active group fails.
EDIT The reason you should see 4 paths is that you have 2 HBA ports and the array has 2 redundant controllers, plus 2 redundant networks with a final switch layer providing cross-network connections. Thus both HBAs see both controllers, hence 4 paths per LUN. You can see this in the SCSI ID numbering in my example above, which goes [host controller ID]:[channel ID]:[target controller ID]:[LUN ID]. Notice that both active paths go to controller #0, because in this case controller #0 happens to "own" the LUN; IO is possible via the other controller, but at a performance penalty, since that controller would (depending on the controller implementation) need to forward the IO to the owning controller. Hence the array reports higher priority for the paths that go to controller #0.
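The H:C:T:L grouping described above can be sketched in a few lines of Python. The device names and SCSI IDs below are the hypothetical ones from the example output, not anything read from a live system:

```python
# Sketch: group multipath paths by array controller using the SCSI ID,
# which is formatted host:channel:target:lun. In this topology the third
# field (target) identifies which array controller the path reaches.
paths = {
    "0:0:0:5": "sdc",  # HBA 0 -> controller 0
    "1:0:0:5": "sdk",  # HBA 1 -> controller 0
    "0:0:1:5": "sdg",  # HBA 0 -> controller 1
    "1:0:1:5": "sdo",  # HBA 1 -> controller 1
}

def group_by_controller(paths):
    """Return {target_controller_id: [devices]} from H:C:T:L keys."""
    groups = {}
    for scsi_id, dev in sorted(paths.items()):
        target = scsi_id.split(":")[2]  # third field = target (array controller)
        groups.setdefault(target, []).append(dev)
    return groups

print(group_by_controller(paths))
# {'0': ['sdc', 'sdk'], '1': ['sdg', 'sdo']}
```

Each controller is reached once per HBA, which is exactly why a fully cabled setup shows 4 paths per LUN.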
So your output shows that there is no path to the other controller at all. And if you don't have redundant controllers and networks, why bother with multipath in the first place?
Best Answer
This term is used most commonly in reference to how SAN storage volumes are connected to the servers they're assigned to. For instance, in a multipath Fibre Channel setup there are redundant fibre paths between the SAN and the server, with each path going through a different FC switch, connecting to a different FC card, and so on. That way, if any single piece of hardware fails (an FC switch, an FC card, a fibre patch cable, etc.), IO can still continue. The same principles apply to iSCSI.