First of all, see this post: What are the different widely used RAID levels and when should I consider them
Notice the difference between RAID 10 and RAID 01.
Match this to your setups (both of which are labeled RAID 10 in your text). Look carefully.
Read the part in the link I posted where it states:
RAID 01
Good when: never
Bad when: always
I think your choice should be obvious after this.
Edit: Stating things explicitly:
Your first setup is a mirrored pair of 4-drive stripes:
Span 0: 4 drives in RAID0 \
                           } Mirror from span 0 and span 1
Span 1: 4 drives in RAID0 /
If any drive fails in a stripe, then that whole stripe is lost.
In your case this means:
1 drive lost -> Working in degraded mode.
2 drives lost. Now we use some math.
If the second drive fails in the same span/stripe, then you have the same result as with 1 drive lost: still degraded, with one of the spans offline.
If the second failure happens in the other span/stripe:
Whole array offline. Consider your backups. (You did make and test those, right?)
The chance that a second failure lands in the still-working span (taking the whole array down) is 4/7 (4 drives left in that span, each of which can fail), and only 3/7 that it hits the span which is already down. Those are not good odds.
Now the other setup, with a stripe of 4 mirrors.
1 drive lost (any of the 4 spans):
Array still works.
2 drives lost:
Roughly an 85% (6/7) chance that the array is still working: the array only dies if the second failure takes out the one remaining partner of the already-degraded mirror, which is 1 drive out of the 7 left.
That is a lot better than in the previous case (where the array only survives a second failure 3/7, or about 43%, of the time).
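Putting numbers on the comparison, and assuming each of the 7 surviving drives is equally likely to be the next one to fail:

$$P_{\text{RAID 0+1}}(\text{survive 2nd failure}) = \frac{3}{7} \approx 43\%, \qquad P_{\text{RAID 1+0}}(\text{survive 2nd failure}) = \frac{6}{7} \approx 86\%$$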
TL;DR: use the second configuration: it is more robust.
Are you saying that you had three SAS disks in a RAID 0 arrangement? That means that there was no redundancy.
Do you know what specifically about the power failure impacted your environment?
Was there just a power outage? A power surge? Electrical storm? Lightning?
I'm not certain that a power outage alone would result in an array failure... However, a failed disk COULD result in array failure.
When you look at the SAS controller's status, what does it say? Please post the details in your question. Regardless, you did have a RAID 0 configuration, so a failed disk means your data is likely unrecoverable. Remember, RAID 0 stripes the data across the member disks, so recovery would be challenging.
Do you want to image the disks knowing this information?
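If you do want to image them, GNU ddrescue is the usual tool for copying possibly-failing disks, since it keeps going past read errors. This is just a sketch; the source device and output paths are placeholders you would need to adjust to your system:

# Image the whole raw disk, retrying bad sectors up to 3 times
ddrescue -d -r3 /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map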
As for mounting the disks, this may not be possible without specialized tools. Those drives have array metadata, and at this point, the array has failed. Again, details will help, but you wouldn't be able to just mount one of the disks.
Also, you're right... USB->SAS adapters do not exist.
Best Answer
Smartmontools has extensions that allow it to poll a drive for SMART data through an LSI (as well as other vendors') RAID controller. Normally this isn't something you can do, as the RAID abstraction obscures direct access to the drives.
Smartmontools might not be installed on your machine, but it is available in the main repositories of most distributions, and there is even a Windows version at: http://sourceforge.net/projects/smartmontools/files/
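If it isn't installed yet, something along these lines should pull it in, depending on your distribution:

# Debian/Ubuntu
apt-get install smartmontools
# RHEL/CentOS
yum install smartmontools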
It can be used to poll a drive behind an LSI MegaRAID controller like so:
smartctl -a -d megaraid,N /dev/sdX
Where "-a" means display all disk data, -d means device type (megaraid being the type in your case), followed by N which means the drive number in that controller. To access the drive in slot 0, you would say 0 here. If you wish to poll all four of your drives, run this command four times, replacing N with 0 to 3. sdX is the RAID abstraction itself, as seen normally within the operating system. Yours is probably sda.
You will see a long output from each drive. What you're looking for is either a reported general SMART failure (which you might not find, as your controller isn't rejecting drives), or reported "offline uncorrectable sectors" or "pending sectors". Any drive with more than 0 in either of those fields is bad. No mercy should be given there, as it takes a LOT of failed reads to increment either value by one.
You can also perform a short or long test like so (same rules above apply):
smartctl -t [long|short] -d megaraid,N /dev/sdX
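A short test takes a couple of minutes; a long test can take hours. Once it has finished, you can read the results back out of the drive's self-test log using the same megaraid addressing:

smartctl -l selftest -d megaraid,N /dev/sdX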