First of all, see this post: What are the different widely used RAID levels and when should I consider them
Notice the difference between RAID 10 and RAID 01.
Match this to your setups (both of them labeled as RAID 10 in your text). Look carefully.
Read the part in the link I posted where it states:
RAID 01
Good when: never
Bad when: always
I think your choice should be obvious after this.
Edit: Stating things explicitly:
Your first setup is a mirrored pair of 4-drive stripes.
Span 0: 4 drives in RAID0 \
} Mirror from span 0 and span 1
Span 1: 4 drives in RAID0 /
If any drive fails in a stripe, then that whole stripe is lost.
In your case this means:
1 drive lost -> Working in degraded mode.
2 drives lost. Now we use some math.
If the second drive fails in the same span/stripe, you have the same result as with 1 drive lost: still degraded, with one of the spans off-line.
If the second drive happens in the other span/stripe:
Whole array off line. Consider your backups. (You did make and test those, right?)
The chance that the second failure lands in the wrong span is 4/7 (4 drives left in the working span, each of which can fail), and only 3/7 that it hits the span which is already down. Those are not good odds.
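The odds above are easy to check by enumeration. A minimal sketch (the drive labels A1..B4 are made up for illustration; the math only depends on which span a drive belongs to):

```python
# RAID 0+1: two 4-drive stripes (spans A and B), mirrored to each other.
# Drive A1 has already failed, so span A is offline and the array runs
# degraded on span B alone. Enumerate where a second failure can land.
remaining = ["A2", "A3", "A4", "B1", "B2", "B3", "B4"]

# A failure in span B (the only working span) takes the whole array down.
kills_array = [d for d in remaining if d.startswith("B")]
# A failure in span A changes nothing: that span was already offline.
still_degraded = [d for d in remaining if d.startswith("A")]

print(f"array dies:     {len(kills_array)}/{len(remaining)}")    # 4 of 7, about 57%
print(f"array survives: {len(still_degraded)}/{len(remaining)}") # 3 of 7, about 43%
```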
Now the other setup, with a stripe of 4 mirrors.
1 drive lost (any of the 4 spans):
Array still works.
2 drives lost:
6/7 (about 86%) chance that the array is still working: of the 7 surviving drives, only the failed drive's mirror partner can take the array down.
That is a lot better than in the previous case (which was only 3/7, or about 43%).
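The same kind of sketch for the second setup, using exact fractions rather than rounded percentages:

```python
from fractions import Fraction

# RAID 10: four 2-drive mirrors striped together. One drive has already
# failed, so its mirror partner is the only remaining single point of failure.
survivors = 7  # drives still running
fatal = 1      # only the dead drive's mirror partner kills the array

p_raid10 = 1 - Fraction(fatal, survivors)  # 6/7, about 86%
p_raid01 = Fraction(3, 7)                  # the RAID 0+1 case: about 43%

print(f"RAID 10 survives a second failure: {p_raid10} ({float(p_raid10):.0%})")
print(f"RAID 0+1 survives a second failure: {p_raid01} ({float(p_raid01):.0%})")
```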
TL;DR: use the second configuration; it is more robust.
Based on looking through some datasheets here, here, and here, it would appear that LSI uses the term "commissioned" to describe a hot spare that has been put into active duty, for one reason or another.
In general, your analysis of the array capacity is probably the most reliable evidence here - if the array's capacity only makes sense when you include that drive's capacity, then that drive is almost certainly an active member of the array.
Based on both of these pieces of information, I would confidently say that if you remove that drive, the array is going to fall into a degraded state. Since you're using RAID6, you wouldn't have too much to worry about, but you should definitely figure out how this happened in the first place. My best advice would be if you still have active support for this setup, get on the phone with the vendor until they give you an answer.
Best Answer
Check your filesystems after repairing your array, in case there was silent data corruption.
You can lose two entire drives in a four-drive RAID 10 and, depending on which drives fail, not be hurt one bit. Make sure the two failing drives are members of opposite RAID 1 mirrors. If they are, you're almost certainly fine. You also have a hot spare, and that should act as a "spillover" space on most controllers - though I can't say whether yours will do this, because I don't know what it is.
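A quick way to see which two-drive losses a four-drive RAID 10 survives. This is a sketch that assumes the mirror pairs are (0,1) and (2,3); check your controller's actual layout:

```python
from itertools import combinations

# Four drives, two RAID 1 mirrors striped together (RAID 10).
mirrors = [{0, 1}, {2, 3}]  # assumed pairing; verify against your controller

def survives(lost):
    # The array lives as long as no mirror loses both of its members.
    return all(not pair <= set(lost) for pair in mirrors)

# 4 of the 6 possible two-drive losses leave the array working.
for lost in combinations(range(4), 2):
    print(lost, "survives" if survives(lost) else "array lost")
```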
Even if your controller does not use a hot spare as scratch or emergency space, it should still have been doing patrol reads regularly, which may have detected these issues and relocated the affected data. Your controller log is a good place to check whether that happened during at least the last few patrol reads. I've no idea how old these media errors are, though.
Regarding your adapter: if you're not running manufacturer-"certified" drives, your controller won't necessarily be so intelligent about ejecting members when they begin to fail - typically it can only eject them when they drop out or report a serious SMART failure. However, a drive may have been going bad for quite some time before its overall SMART health report trips.
Even if it's not fine, perform the rebuild and do a consistency check + filesystem check. You'll also see filesystem I/O errors in dmesg if you've actually been running into filesystem level corruption. Worst case, you'll need to restore some files or the whole array from backup. Do the rebuild one disk at a time, not both. Start with replacing the most ragged disk.