The H200 can present at most two logical drives (virtual disks), so your plan for three RAID groups won't work with the H200 controller alone. You could configure the SSDs as one RAID 10 group and the 8 SAS HDDs as the other, but that won't deliver any I/O isolation between the two data volumes.
Which document are you referring to that states the SAS drives must be in slots 0 and 1 for mixed SAS/SSD setups? The H200 user guide doesn't mention that restriction; the only restrictions it does mention are that all drives in a RAID group must be either all SSDs or all HDDs, and either all SAS or all SATA.
ZFS does not do disk I/O itself; the device drivers below ZFS do. If a device does not respond in a timely manner, or, as in this case, disrupts all other devices on the expander, that is not visible as a failure to ZFS. All ZFS sees is a slow I/O.
There is a bug in Intel X-25M firmware that affects their behaviour during heavy loads and can cause reset storms. This problem affects all OSes and cannot be solved at the OS layer. Please contact your hardware supplier for fixes or remediation.
If a read is expected to be satisfied by the L2ARC, the read will be attempted there, and ZFS then relies on the lower-layer drivers to report an error. In this case, the drive keeps resetting and retrying for as long as 5 minutes before the I/O is declared failed, depending on the driver, device, and default timeout settings. Only after the lower-layer drivers declare the I/O failed will ZFS retry the read against the pool.
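On Solaris-derived systems such as NexentaStor, that multi-minute window comes from the sd driver's per-command timeout multiplied by its retry count, and both are tunable. A sketch of what shortening the timeout might look like (the value below is an example, not a recommendation; check your release's tunable-parameters documentation before changing it):

```
* /etc/system -- example only; verify against your release's documentation
* Per-command timeout for the sd driver, in seconds (default 0x3c = 60).
* Lowering it makes a misbehaving SSD fail faster instead of stalling the pool.
set sd:sd_io_time = 0x14
```

A reboot is required for `/etc/system` changes to take effect, and an overly aggressive timeout can cause healthy but busy drives to be faulted, so test carefully.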
NexentaStor's volume-check and disk-check runners look for additional error messages and alert you via email and fault logging. The disk-check runner has been improved in the 3.1 release to alert you specifically to the conditions exhibited by broken firmware in SSDs.
Bottom line: your hardware is faulty and will need to be fixed or replaced.
Best Answer
The LSI 9260-8i has 8 x 6 Gb/s SAS ports, so roughly 48 Gb/s of aggregate link bandwidth is the theoretical ceiling on throughput.
However, depending on your workload/application, disks rarely run at 100% throughput. It's more likely you'll hit the IOPS limit of your disks first (unless you're doing bulk reads, streaming, etc.).
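To see why IOPS, not link bandwidth, is usually the bottleneck, here is a back-of-envelope comparison. All figures are illustrative assumptions (disk count, per-disk IOPS, block size), not measurements of your hardware:

```python
# Rough comparison: controller link bandwidth vs. what a random-I/O
# workload actually pushes through it. All numbers are assumptions.

ports = 8
lane_gbps = 6                       # 6 Gb/s per SAS lane
# 8b/10b encoding leaves ~80% of the line rate for payload
controller_mb_s = ports * lane_gbps * 0.8 * 1000 / 8

disks = 8
iops_per_disk = 180                 # ballpark for a 15k SAS drive, random I/O
block_kb = 4
random_mb_s = disks * iops_per_disk * block_kb / 1024

print(f"Controller ceiling: {controller_mb_s:.0f} MB/s")
print(f"Random 4K workload: {random_mb_s:.1f} MB/s")
```

Even with generous assumptions, a random-I/O workload uses a tiny fraction of the controller's bandwidth; only large sequential transfers get anywhere near the link ceiling.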
It also depends on how your SAS backplane "exports" these disks. Some backplanes have as many mini-SAS or standard SAS ports as there are disks; others use an expander like the SuperMicro 846E1 (24 disks behind a single mini-SAS x4 port).
SuperMicro also has a chassis (the TQ edition) that exports all disks individually; in that case you'd need 3 x 8-port HBAs to connect all 24 disks. It makes cable management a bit trickier but gives better performance.
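The performance difference comes down to oversubscription of the expander's uplink. A quick sketch of the arithmetic (lane counts and speeds as assumed above):

```python
# Expander oversubscription: 24 disks sharing one mini-SAS x4 uplink.
lanes = 4                 # mini-SAS x4 uplink to the expander
lane_gbps = 6             # 6 Gb/s per lane
disks = 24

uplink_gbps = lanes * lane_gbps          # total shared uplink bandwidth
per_disk_gbps = uplink_gbps / disks      # share if all disks stream at once
print(f"Shared uplink: {uplink_gbps} Gb/s -> {per_disk_gbps:.1f} Gb/s per disk")
```

With direct attach (TQ-style), each disk gets a dedicated 6 Gb/s lane instead of a 1 Gb/s share, though, as noted above, this only matters for sequential workloads that can actually saturate the links.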
My suggestion is that you first determine what workload your application will be throwing at this server before deciding it doesn't fit your needs. If it's a NAS/SAN appliance, pay attention