Red Hat – QLogic FC HBA + IBM DS5100 on RHEL 5.4: LUNs not detected as SCSI disks

fibre-channel, ibm, qlogic, redhat, storage-area-network

I've got a brand new DS5100 SAN connected to multiple hosts (HS22 blades in a BladeCenter H) via two independent fabrics. The switch (Brocade 20-port module for BladeCenter) is zoned properly, i.e. every host in the BladeCenter sees the LUNs via both fabrics. RHEL loads the qla2xxx driver for the built-in QLogic QMI2572 4Gb FC expansion card (CIOv) for BladeCenter, and I can "see" the LUNs being presented in the dmesg output:

qla2xxx 0000:24:00.0: Found an ISP2532, irq 209, iobase 0xffffc20000022000
qla2xxx 0000:24:00.0: Configuring PCI space...
PCI: Setting latency timer of device 0000:24:00.0 to 64
qla2xxx 0000:24:00.0: Configure NVRAM parameters...
qla2xxx 0000:24:00.0: Verifying loaded RISC code...
qla2xxx 0000:24:00.0: Allocated (64 KB) for EFT...
qla2xxx 0000:24:00.0: Allocated (1414 KB) for firmware dump...
scsi4 : qla2xxx
qla2xxx 0000:24:00.0: 
QLogic Fibre Channel HBA Driver: 8.03.00.10.05.04-k
QLogic QMI2572 - QLogic 4Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter
ISP2532: PCIe (5.0Gb/s x4) @ 0000:24:00.0 hdma+, host#=4, fw=4.04.09 (85)
PCI: Enabling device 0000:24:00.1 (0140 -> 0143)
ACPI: PCI Interrupt 0000:24:00.1[B] -> GSI 42 (level, low) -> IRQ 138
qla2xxx 0000:24:00.1: Found an ISP2532, irq 138, iobase 0xffffc20000024000
qla2xxx 0000:24:00.1: Configuring PCI space...
PCI: Setting latency timer of device 0000:24:00.1 to 64
qla2xxx 0000:24:00.1: Configure NVRAM parameters...
qla2xxx 0000:24:00.1: Verifying loaded RISC code...
qla2xxx 0000:24:00.1: Allocated (64 KB) for EFT...
qla2xxx 0000:24:00.1: Allocated (1414 KB) for firmware dump...
scsi5 : qla2xxx
qla2xxx 0000:24:00.1: 
QLogic Fibre Channel HBA Driver: 8.03.00.10.05.04-k
QLogic QMI2572 - QLogic 4Gb Fibre Channel Expansion Card (CIOv) for IBM BladeCenter
ISP2532: PCIe (5.0Gb/s x4) @ 0000:24:00.1 hdma+, host#=5, fw=4.04.09 (85)
qla2xxx 0000:24:00.0: LOOP UP detected (4 Gbps).
qla2xxx 0000:24:00.1: LOOP UP detected (4 Gbps).
Vendor: IBM       Model: 1818      FAStT   Rev: 0730
Type:   Direct-Access                      ANSI SCSI revision: 05
scsi 4:0:0:0: Attached scsi generic sg1 type 0
Vendor: IBM       Model: 1818      FAStT   Rev: 0730
Type:   Direct-Access                      ANSI SCSI revision: 05
scsi 4:0:1:0: Attached scsi generic sg2 type 0
Vendor: IBM       Model: 1818      FAStT   Rev: 0730
Type:   Direct-Access                      ANSI SCSI revision: 05
scsi 5:0:0:0: Attached scsi generic sg3 type 0
Vendor: IBM       Model: 1818      FAStT   Rev: 0730
Type:   Direct-Access                      ANSI SCSI revision: 05
scsi 5:0:1:0: Attached scsi generic sg4 type 0 

The problem is that they aren't recognized as SCSI disks, only as generic SCSI devices (/dev/sg{1-4}). The output of "sg_map -i -sd -x" shows:

/dev/sg1  4 0 0 0  0  IBM       1818      FAStT   0730
/dev/sg2  4 0 1 0  0  IBM       1818      FAStT   0730
/dev/sg3  5 0 0 0  0  IBM       1818      FAStT   0730
/dev/sg4  5 0 1 0  0  IBM       1818      FAStT   0730
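A quick way to confirm that the sd driver never claimed these devices is to look for sg_map lines that end without a /dev/sd* column. A minimal sketch, assuming sg3_utils is installed and the output format shown above:

```shell
# Lines in "sg_map -sd -x" output that end without a /dev/sd* column are
# devices the sg driver sees but the sd driver never attached a block
# device for.
sg_map -sd -x | awk '$NF !~ /^\/dev\/sd/ {print $1 " has no sd node"}'
```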

My basic understanding is that even though this is a multipathed setup, I shouldn't have to enable or actually use MPIO just to see the disks. I tried a quick workaround via device-mapper multipathing but got no output from multipathd. "sg_map" shows that these devices are disks (the -sd flag), yet the LUNs are not being attached as /dev/sd* block devices. Do I have to create the proper device nodes manually? Do I have to use IBM's RDAC or SDD driver to see them?
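For reference, multipathd can only build maps on top of existing sd block devices, so with no /dev/sd* nodes there is nothing for it to report. Once the sd nodes do appear, a DS-series array under dm-multipath usually needs the rdac path checker. A sketch of the /etc/multipath.conf device stanza, using RHEL 5-era option names; verify the product string against what your array actually reports (here "1818", per the dmesg output above):

device {
        vendor                  "IBM"
        product                 "1818"
        hardware_handler        "1 rdac"
        path_grouping_policy    group_by_prio
        prio_callout            "/sbin/mpath_prio_rdac /dev/%n"
        path_checker            rdac
        failback                immediate
}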

Best Answer

That was an easy one: I had forgotten to set up the mapping between the LUNs and the host nodes on the storage subsystem. Once that mapping was in place, the LUNs were attached as /dev/sd* devices as expected.
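After fixing the host-to-LUN mapping, the HBAs can be told to rescan without a reboot. A sketch, assuming the host numbers 4 and 5 from the scsi4/scsi5 instances in the dmesg output above (requires root):

```shell
# Ask each QLogic HBA to rescan all channels, targets and LUNs so the
# sd driver can attach block devices for the newly mapped LUNs.
for h in 4 5; do
    echo "- - -" > /sys/class/scsi_host/host${h}/scan
done
grep sd /proc/partitions   # the LUNs should now appear as sd* entries
```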
