Why is there a shift between the WWN reported from the controller and the Linux system

hard-drive zfs

I am in the process of setting up a ZFS storage system with a large number of disks. I want to use WWNs to identify the devices (aliased in vdev_id.conf), but while collecting them I noticed a shift between the WWN reported by the controller (an LSI MegaRAID SAS 9380-8e; all disks are in JBOD mode) and the WWN shown in /dev/disk/by-id:

$ storcli64 /c0/e72/s0 show all | grep 5008472696
WWN = 5000C5008472696C
0 Active 6.0Gb/s   0x5000c5008472696d 
1 Active 6.0Gb/s   0x5000c5008472696e 

$ ll /dev/disk/by-id/wwn-0x* | grep 5008472696
/dev/disk/by-id/wwn-0x5000c5008472696f -> ../../sdn

On this set of hardware (disks, JBOD enclosure, controller), the pattern seems to be consistent: the by-id WWN is always shifted by 3 from the controller-reported WWN.
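The offsets can be confirmed with a bit of hex arithmetic (all values are taken from the storcli64 and /dev/disk/by-id output above):

```python
# Compare the controller-reported WWN against the two port SAS
# addresses and the name Linux shows under /dev/disk/by-id.
base = 0x5000C5008472696C    # WWN reported by storcli64
port0 = 0x5000C5008472696D   # SAS address of port 0
port1 = 0x5000C5008472696E   # SAS address of port 1
by_id = 0x5000C5008472696F   # /dev/disk/by-id/wwn-0x... name

for label, addr in [("port 0", port0), ("port 1", port1), ("by-id", by_id)]:
    print(f"{label}: base + {addr - base}")
# port 0: base + 1
# port 1: base + 2
# by-id:  base + 3
```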

My hunch is that this is caused by the dual link, but I could not find a suitable explanation for the behavior. Any suggestions would be greatly appreciated.

EDIT:

Another sample:

WWN = 5000C50084726B78
0 Active 6.0Gb/s   0x5000c50084726b79 
1 Active 6.0Gb/s   0x5000c50084726b7a 

Linux reports 0x5000c50084726b7b in /dev/disk/by-id, which is consistent with the explanation given by Matthew.

Best Answer

You are seeing the device's WWN alongside the individual port identifiers of the dual-port SAS drives you're using (which is what enables multipath). One WWN, multiple port identifiers.
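These are NAA 5 format addresses: the 64-bit value splits into a 4-bit NAA field, a 24-bit IEEE OUI, and a 36-bit vendor-specific identifier, and the drive vendor assigns adjacent values in the vendor-specific field to the logical unit and to each port, which is why the addresses differ only in the last bits. A minimal sketch of the field split (the helper name is mine; the layout is the standard NAA 5 format):

```python
def split_naa5(wwn: int):
    """Split a 64-bit NAA 5 address into (naa, oui, vendor_specific)."""
    naa = wwn >> 60                  # top 4 bits: format identifier (5)
    oui = (wwn >> 36) & 0xFFFFFF     # next 24 bits: IEEE OUI of the vendor
    vendor = wwn & 0xFFFFFFFFF       # low 36 bits: vendor-assigned ID
    return naa, oui, vendor

naa, oui, vendor = split_naa5(0x5000C5008472696C)
print(f"NAA={naa}, OUI={oui:06x}, vendor={vendor:09x}")
# The logical unit and the two ports differ only in the low bits of
# the vendor-specific field (...696c, ...696d, ...696e here).
```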

A better illustration is to compare the output of multipath -ll, lsscsi --transport, and lsscsi --wwn:

# lsscsi --transport
[0:0:0:0]    disk    sas:0x5000c50023601236          /dev/sdb
[0:0:1:0]    disk    sas:0x5000c50023614aee          /dev/sdc
[0:0:2:0]    disk    sas:0x5000c5007772e5fe          /dev/sdd
[0:0:4:0]    disk    sas:0x5000c5002362f346          /dev/sdf

# lsscsi --wwn
[0:0:0:0]    disk                                    /dev/sdb
[0:0:1:0]    disk    0x5000c50023614aef              /dev/sdc
[0:0:2:0]    disk                                    /dev/sdd
[0:0:4:0]    disk    0x5000c5002362f347              /dev/sdf

# multipath -ll | grep 3500
35000c50023614aef dm-6 HP      ,EF0450FARMV
35000c5002362f347 dm-3 HP      ,EF0450FARMV

For your purposes, go with the WWN. If this is ZFS, please elaborate on your controller and JBOD solution. If it's Linux, you should be using DM multipath and building the pool on top of the DM devices. Also see: https://github.com/ewwhite/zfs-ha/wiki
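For the vdev_id.conf aliasing mentioned in the question, the by-id WWN links can be mapped to friendly names with alias entries (a sketch; the bay name is made up, and only the WWN comes from the output above):

```
# /etc/zfs/vdev_id.conf
# alias <name> <devlink>
alias bay0  /dev/disk/by-id/wwn-0x5000c5008472696f
```

After editing the file, running udevadm trigger regenerates the /dev/disk/by-vdev links, and the pool can then be built using the alias names.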