Well, I use a D2700 for ZFS storage and spent some time getting its LED and sesctl features working. I also have SAS MPxIO multipathing running well.
I've done quite a bit of SSD testing on ZFS and with this enclosure.
Here's the lowdown.
- The D2700 is a perfectly fine JBOD for ZFS.
- You will want to have an HP Smart Array controller handy to update the enclosure firmware to the latest revision.
- LSI controllers are recommended here. I use a pair of LSI 9205-8e for this.
- I have a pile of HP drive caddies and have tested Intel, OCZ, OWC (SandForce), HP (SanDisk/Pliant), Pliant, STEC and Seagate SAS and SATA SSDs for ZFS use.
- I would reserve the D2700 for dual-ported 6G disks, assuming you will use multipathing (see the sketch after this list). If not, you're possibly taking a bandwidth hit due to oversubscription of the SAS link to the host.
- I tend to leave the SSDs meant for ZIL and L2ARC inside the storage head. Coupled with an LSI 9211-8i, it seems safer (see the zpool sketch after this list).
- The Intel and SandForce-based SATA SSDs were fine in the chassis. No temperature probe issues or anything.
- The HP SAS SSDs (Sandisk/Pliant) require a deep queue that ZFS really can't take advantage of. They are not good pool or cache disks.
- STEC is great with LSI controllers and ZFS... except for the price... They are also incompatible with Smart Array P410 controllers. Weird. I have an open ticket with STEC for that.
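For the multipathing piece, here's a quick sketch, assuming an illumos/Solaris-type storage head (which is where MPxIO lives); stmsboot and mpathadm are the stock tools, and everything past the flags is just my setup:

stmsboot -D mpt_sas -e      (enable MPxIO on the LSI SAS HBAs; needs a reboot)
mpathadm list lu            (each dual-ported D2700 disk should report 2 paths)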
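And the ZIL/L2ARC layout mentioned above looks roughly like this; the pool name tank and the cXtYd0 device names are placeholders for your own:

zpool add tank log mirror c1t0d0 c1t1d0    (mirrored SLOG on the internal SSDs)
zpool add tank cache c1t2d0                (a single L2ARC device)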
Which controllers are you using? I probably have detailed data for the combination you have.
If you're running Windows Server 2008 on this system, you will want to use the HP Array Configuration Utility in either its graphical or command-line form. For the purposes of this question, I'd recommend the command-line utility (hpacucli).
Once installed, you will want to navigate to your Programs Menu and open the Array Configuration Utility (it may specify "command line").
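(Assuming a default install, you can also just run hpacucli from a regular command prompt; it accepts commands interactively at its => prompt or as one-shot arguments, e.g. hpacucli ctrl all show config.)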
Once inside the utility, run the following command:

ctrl all show config
This will output something like:
Smart Array E200i in Slot 0 (Embedded) (sn: QT8CMP3716)

   array A (SATA, Unused Space: 1145787 MB)

      logicaldrive 1 (72.0 GB, RAID 1+0, OK)
      logicaldrive 2 (180.0 GB, RAID 1+0, OK)
      logicaldrive 3 (120.0 GB, RAID 1+0, OK)

      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SATA, 500 GB, OK)
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SATA, 500 GB, FAILED)
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SATA, 500 GB, OK)
      physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SATA, 500 GB, OK)
If any of the output shows a Failed or Predictive Failure status, that's enough evidence for HP to send a replacement disk.
Once you've verified that there's a bad disk, you can identify it by its blue LED indicator. In this case, I would use the following command to identify the drive:
ctrl slot=0 physicaldrive 1I:1:2 modify led=on
This will blink the drive in question for one hour.
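If you want to shut the LED off sooner, the same command with led=off should do it (same controller slot and drive address assumed):

ctrl slot=0 physicaldrive 1I:1:2 modify led=off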
As for the mechanics of drive replacement, you need to identify the disk, press its release lever, open the drive handle, then remove the disk. Installation is done in the reverse order.
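Once the new disk is seated, you can watch it resync from the same utility; this assumes the same slot as above:

ctrl slot=0 physicaldrive all show status

The replacement should report Rebuilding, then OK once the array has finished resyncing.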
If you're interested in adding more disk space to the array, see these questions on Server Fault...
HP RAID array - hpacucli
What are the good ways to migrate a RAID array to bigger disks?
Repurpose spare drive in HP ProLiant RAID 5 array
Best Answer
Non-brand disks usually work perfectly in brand-name servers. On some server models, the management tools can complain about "non-original disks" (I've seen this happen on IBM servers), but apart from that, there are no technical issues with using them.
The server's vendor will probably not support such a configuration, though.
BTW, "brand" disks are usually nothing more than standard disks with a different label on them.