Since I haven't gotten anywhere trying to monitor the condition of the PERC 5i with MegaCLI in Nexentastor (I also tried Dell's Openmanage software, but they don't have a version for Solaris), I've installed a Dell SAS 6/ir controller in the 2950 instead. This allows Nexentastor to see the disks individually in JBOD mode, so it can directly monitor the condition of each disk and of the array.
I had seen on some other forums that you had to flash the firmware on the SAS 6/ir in order for it to support JBOD mode. But I simply didn't create any RAID arrays in the 6/ir BIOS setup, and Nexentastor saw all the disks individually. So it seems the SAS 6/ir does support JBOD with the stock firmware; there just isn't an option labeled "JBOD" in the BIOS setup.
Not everyone may consider this an exact answer to the question asked, but I think it is ultimately the best way to address the problem of not being able to monitor the condition of RAID arrays created by Dell Perc controllers in Nexentastor/Opensolaris. And since I was able to find two SAS 6/ir cards on eBay for $30 each, this seems to be the best way to avoid using third-party software to monitor the RAID condition. Also, JBOD is the preferred way to present disks to Nexentastor/Solaris anyway.
However, I know I’ve seen many others say they are using Perc controllers with Nexentastor, so some insight into how to install MegaCLI on Nexenta would definitely be welcome.
First of all, see this post: What are the different widely used RAID levels and when should I consider them
Notice the difference between RAID 10 and RAID 01.
Match this to your setups (both of them labeled as RAID 10 in your text). Look carefully.
Read the part in the link I posted where it states:
RAID 01
Good when: never
Bad when: always
I think your choice should be obvious after this.
Edit: Stating things explicitly:
Your first setup is a mirrored pair of 4-drive stripes.
Span 0: 4 drives in RAID0 \
} Mirror from span 0 and span 1
Span 1: 4 drives in RAID0 /
If any drive fails in a stripe, then that stripe is lost.
In your case this means:
1 drive lost -> Working in degraded mode.
2 drives lost. Now we use some math.
If the second drive fails in the same span/stripe, then you have the same result as 1 drive lost: still degraded, with one of the spans off-line.
If the second drive happens in the other span/stripe:
Whole array off line. Consider your backups. (You did make and test those, right?)
The chance that the second drive fails in the wrong span is 4/7 (4 drives left in the working span, each of which can fail), and only 3/7 that it fails in the span which is already down. Those are not good odds.
Now the other setup, with a stripe of 4 mirrors.
1 drive lost (any of the 4 spans):
Array still works.
2 drives lost:
A 6/7 (roughly 85%) chance that the array is still working.
That is a lot better than in the previous case (where the array survives a second failure only 3/7, or about 43%, of the time).
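As a sanity check on those odds, here is a small Python sketch that enumerates every possible two-drive failure pair for both layouts. The drive numbering and span/mirror assignments are my own assumptions for illustration, not how the controller actually lays things out:

```python
from itertools import combinations

# Sketch only: 8 drives numbered 0-7; the span/mirror layout below is an
# assumption for illustration, not the controller's actual numbering.
DRIVES = range(8)

def raid01_alive(failed):
    # Two 4-drive stripes (spans) mirrored: the array survives as long
    # as at least one span has no failed drive.
    span0, span1 = {0, 1, 2, 3}, {4, 5, 6, 7}
    return not (failed & span0) or not (failed & span1)

def raid10_alive(failed):
    # Four 2-drive mirrors striped: the array survives as long as no
    # mirror has lost both of its drives.
    mirrors = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]
    return all(len(failed & m) < 2 for m in mirrors)

# Enumerate all C(8,2) = 28 ways two drives can fail.
pairs = [set(p) for p in combinations(DRIVES, 2)]
raid01 = sum(map(raid01_alive, pairs)) / len(pairs)
raid10 = sum(map(raid10_alive, pairs)) / len(pairs)

print(f"RAID 01 survives two failures: {raid01:.0%}")  # 3/7, about 43%
print(f"RAID 10 survives two failures: {raid10:.0%}")  # 6/7, about 86%
```

The enumeration matches the hand calculation above: the mirror-of-stripes layout loses the whole array in 4 of the 7 second-failure cases, while the stripe-of-mirrors only dies when the second failure hits the exact partner of the first drive.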
TL;DR: use the second configuration; it is more robust.
Best Answer
Openmanage, as suggested above, is a great tool that I think MUST be installed on any compatible Dell server; it gives information on and allows configuration of more than just the RAID controller. However, the Megaraid Storage Manager allows more control of the RAID controller than Openmanage, or even than accessing the RAID controller BIOS.
To get the utility, look at Dell's support pages for older servers fitted with the PERC 5/6, such as the Poweredge 29xx or Rxx0 (obviously, you need to match your operating system).
The later version of the utility installs on the server to be administered. The older version (if you can find it) seems to be split into a server and a client part. When installing the package on a server with a PERC, it installs both server and client; if installing on a PC without a PERC, it only installs the client.
The advantage in the older version is that you can have one client on your normal desktop and connect this to any of the servers as and when required.