Connecting an HP Gen 5 SAS RAID cage to a third-party (3ware, LSI) controller

hardware-raid, hotswap, hp, raid, sas

I have an ML350 G5 that I'm thinking of repurposing to save money. I'm looking to install FreeNAS, but from what I've read, it (ZFS) doesn't play nice with the HP E200i card that's embedded on the motherboard. I'd like to buy a good used PCIe x4/x8 RAID card for cheap and connect it directly to the backplane, allowing me to continue using the LFF cage for my drives.

The backplane appears to use two 4-lane SAS cables with SFF-8484 connectors on both ends. Can I disconnect one and, using a breakout cable, reroute it to my add-in RAID card? In my mind, that would electrically split the cage in half: three drives on the E200i, three drives on the new card.

I have no idea how much logic is built into a RAID backplane in general, or an HP backplane in particular. I don't know whether it's a "dumb" component that only provides an electrical path from the drives to the RAID controller, or a "smart" one that performs logic functions and is effectively proprietary.

Thoughts? Thanks!

Best Answer

If I were dealing with that model/vintage of server (circa 2005-2008), I would probably make use of the existing setup... A few points:

  • The 6-disk 3.5" backplane in a G5 ML350 is a dumb component. There's no RAID logic or SAS expansion built in.
  • You can connect this backplane and cage to any RAID controller or SAS HBA, provided you use the right cabling: SFF-8484 on the backplane side, and likely SFF-8087 on the controller side if you use a newer controller.
  • This is old hardware, so understand the limits of your PCIe slots and of the SAS bandwidth (3.0 Gbps per lane).
  • If you use SATA drives with a period-correct HP Smart Array controller (E200i, P400, P800), link speeds will be capped at 1.5 Gbps per disk.
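
You can verify the negotiated link speed of a direct-attached SATA disk with smartctl (device name assumed; disks sitting behind a Smart Array controller need smartctl's -d cciss,N syntax instead, and may not report this field at all):

    # Check the negotiated SATA link speed (device name assumed)
    smartctl -a /dev/sda | grep -i 'SATA Version'
    # e.g. "SATA Version is: SATA 2.6, 3.0 Gb/s (current: 1.5 Gb/s)"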

What would I do?

  • I'd drop FreeNAS. It's not that great a solution, and you'll lose some of the HP ProLiant platform monitoring features. The on-disk ZFS format under FreeNAS is a bit quirky, too... FreeNAS has been the fodder for a few WTF ServerFault questions.
  • Instead, ZFS-on-Linux or an appliance package that leverages it would be a better option. Check out the free Community Edition of QuantaStor or ZetaVault.
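
If you do go the plain ZFS-on-Linux route, getting a basic pool running is simple. A minimal sketch, assuming a Debian/Ubuntu-style system and two spare disks at /dev/sdb and /dev/sdc (the package name and device names are assumptions; adjust for your distribution):

    # Install the ZFS userland tools and kernel module (package name assumed)
    apt-get install zfsutils-linux

    # Create a mirrored pool from two whole disks (device names assumed)
    zpool create vol1 mirror /dev/sdb /dev/sdc

    # Confirm the pool is healthy
    zpool status vol1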

Finally, for this scale of hardware, it makes sense to just use your existing HP Smart Array E200i controller.

  • If you take the approach of a ZFS-focused OS and a JBOD-capable controller or HBA, you'll have to allocate disks for the OS, as well as the data. That's a potential waste of disk space. If you approach this with partitions or slices of the disks, your ZFS configuration will become extremely complex and fraught.
  • The E200i is a capable controller and you'll have the benefit of a write cache (if the RAID battery is present and healthy).
  • If you really want to use ZFS, you can do so on top of a hardware RAID controller. I do this all the time in order to provide some ZFS features (snapshots, compression, etc.) while still having the ease and flexibility of hardware array monitoring. There's a short sketch of those ZFS commands after the zpool output below.
  • HP Smart Array controllers can be configured to provide multiple logical drives (block devices) from a group of disks (an "Array"). In the example below, I configured the E200i in an ML350 G5 server with four 500GB SATA disks to provide a 72GB OS drive, plus 240GB and 200GB drives to be used as separate ZFS zpools.

    Smart Array E200i in Slot 0 (Embedded)    (sn: QT8CMP3716     )
    
    Internal Drive Cage at Port 1I, Box 1, OK
    
    Internal Drive Cage at Port 2I, Box 1, OK
    array A (SATA, Unused Space: 0  MB)
    
    
      logicaldrive 1 (72.0 GB, RAID 1+0, OK)
      logicaldrive 2 (240.0 GB, RAID 1+0, OK)
      logicaldrive 3 (200.0 GB, RAID 1+0, OK)
    
      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 500 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 500 GB, OK)
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 500 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 500 GB, OK)
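
For reference, here's a rough sketch of the hpacucli commands that would carve out a layout like the one above. The slot number, drive list, and sizes (hpacucli takes sizes in MB) are assumptions for this particular configuration:

    # Create array A and a 72GB RAID 1+0 logical drive for the OS
    hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2,1I:1:3,1I:1:4 raid=1+0 size=73728

    # Add two more logical drives on the same array, to be used as ZFS zpools
    hpacucli ctrl slot=0 array A create type=ld raid=1+0 size=245760
    hpacucli ctrl slot=0 array A create type=ld raid=1+0 size=204800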
    

zpool status output

  pool: vol1
 state: ONLINE
  scan: scrub repaired 0 in 1h33m with 0 errors on Thu Jan  1 09:19:21 2015
config:

    NAME                                       STATE     READ WRITE CKSUM
    vol1                                       ONLINE       0     0     0
      cciss-3600508b1001037313620202020200007  ONLINE       0     0     0

errors: No known data errors

  pool: vol2
 state: ONLINE
  scan: scrub repaired 0 in 2h3m with 0 errors on Thu Jan  1 09:49:35 2015
config:

    NAME                                       STATE     READ WRITE CKSUM
    vol2                                       ONLINE       0     0     0
      cciss-4542300b6103731362020202067830007  ONLINE       0     0     0

errors: No known data errors
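
To illustrate the ZFS features mentioned earlier, here's a minimal sketch of enabling compression and taking a snapshot on one of these pools; the snapshot name is made up for the example:

    # Enable lz4 compression (applies to data written from this point on)
    zfs set compression=lz4 vol1

    # Take a point-in-time snapshot of the pool's root dataset (name assumed)
    zfs snapshot vol1@before-migration

    # List snapshots
    zfs list -t snapshot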