I hate to say "don't use SATA" in critical production environments, but I've seen this situation quite often. SATA drives are not generally meant for the duty cycle you describe, although you did spec drives specifically rated for 24x7 operation in your setup. My experience has been that SATA drives can fail in unpredictable ways, often affecting the entire storage array, even when using RAID 1+0, as you've done. Sometimes a drive fails in a manner that can stall the entire bus. One thing to check is whether you're using SAS expanders in your setup, since that can make a difference in how the remaining disks are impacted by a drive failure.
But it may have made more sense to go with midline/nearline (7200 RPM) SAS drives instead of SATA. There's a small price premium over SATA, but the drives operate and fail more predictably. The error correction and reporting in the SAS interface/protocol are more robust than in the SATA command set. So even with drives whose mechanics are identical, the SAS protocol difference may have prevented the pain you experienced during your drive failure.
I've found that when I've had to tune for lower latency versus throughput, I've tuned nr_requests down from its default (to as low as 32). The idea is that smaller batches mean lower latency.
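As a minimal sketch (the device name sda is an assumption; run as root and adjust for your disks):

# check the current queue depth (the default is typically 128)
cat /sys/block/sda/queue/nr_requests

# shrink the batch size to favor latency over throughput
echo 32 > /sys/block/sda/queue/nr_requests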
For read_ahead_kb, increasing the value offers better throughput for sequential reads/writes, but the right setting really depends on your workload and IO pattern. For example, on a database system I recently tuned, I changed this value to match a single db page size, which helped reduce read latency; increasing or decreasing beyond that value hurt performance in my case.
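For illustration, a hedged sketch assuming an 8 KB database page size (both the page size and the device name are assumptions; check your database's actual page size first):

# match readahead to a single 8 KB db page to cut read latency
echo 8 > /sys/block/sda/queue/read_ahead_kb

# for mostly sequential streaming workloads, a larger value may help throughput instead
# echo 1024 > /sys/block/sda/queue/read_ahead_kb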
As for other options and settings for block device queues (a combined sketch follows this list):
max_sectors_kb = I've set this value to match what the hardware allows for a single transfer (check the value of the read-only max_hw_sectors_kb file in sysfs to see what's allowed).
nomerges = this lets you disable or reduce the lookup logic for merging IO requests (disabling merges can save some cpu cycles, but I haven't seen any benefit from changing this on my systems, so I left it at the default).
rq_affinity = I haven't tried this yet, but here is the explanation from the kernel docs:
"If this option is '1', the block layer will migrate request completions to the cpu "group" that originally submitted the request. For some workloads this provides a significant reduction in CPU cycles due to caching effects. For storage configurations that need to maximize distribution of completion processing setting this option to '2' forces the completion to run on the requesting cpu (bypassing the "group" aggregation logic)."
scheduler = you said that you tried deadline and noop. I've tested both, and found that deadline wins out in the testing I've done most recently for a database server. NOOP performed well, but I was still able to achieve better performance by tuning the deadline scheduler.
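Putting those queue settings together, here's a sketch under the same assumptions (sda is a placeholder, run as root; whether these values help depends entirely on your hardware and workload):

q=/sys/block/sda/queue

# cap transfer size at what the controller supports (max_hw_sectors_kb is read-only)
cat $q/max_hw_sectors_kb > $q/max_sectors_kb

# 0 leaves merging fully enabled; 1 disables the more expensive lookups; 2 disables all merging
echo 0 > $q/nomerges

# steer completions back to the submitting cpu group (see the kernel doc excerpt above)
echo 1 > $q/rq_affinity

# select the deadline elevator
echo deadline > $q/scheduler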
Options for the deadline scheduler are located under /sys/block/{sd,cciss,dm-}*/queue/iosched/ (a sketch applying them follows this list):
fifo_batch = kind of like nr_requests, but specific to the scheduler. It controls the batch size of read and write requests; the rule of thumb is to tune it down for lower latency or up for throughput.
write_expire = sets the expiration time for write requests; the default is 5000 ms. Once again, decreasing this value lowers write latency, while increasing it favors throughput.
read_expire = sets the expiration time for read requests; the default is 500 ms. The same rules apply here.
front_merges = this is on by default, but I tend to turn it off; I don't see the need for the scheduler to waste cpu cycles trying to front-merge IO requests.
writes_starved = since deadline is geared toward reads, the default is to process 2 read batches before a write batch. I found the default of 2 to be good for my workload.
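A sketch applying those knobs together (the values shown are the kernel defaults except front_merges, which I turn off; treat them as starting points, not recommendations):

d=/sys/block/sda/queue/iosched

echo 16 > $d/fifo_batch       # smaller = lower latency, larger = more throughput
echo 500 > $d/read_expire     # read deadline in ms
echo 5000 > $d/write_expire   # write deadline in ms
echo 0 > $d/front_merges      # skip front-merge attempts to save cpu
echo 2 > $d/writes_starved    # read batches dispatched per write batch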
Best Answer
It's not possible to disable or bypass this RAID controller, but you can configure each drive as a separate single-drive RAID 0 array (fake JBOD). Easiest is to go into the MegaRAID CLI (Ctrl+Y before boot). There you just run this command:
-CfgEachDskRaid0 -aAll
See: http://ehaselwanter.com/en/blog/2012/11/26/MegaRaid-as-fake-JBOD-for-swift/
Then you should see all the drives separately in the OS.
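To confirm from the OS side, something like this should now show one block device per physical disk (the MegaCli install path below is the common default and may differ on your system):

# each physical disk should appear as its own logical drive / block device
lsblk -o NAME,SIZE,TYPE,MODEL

# or query the controller directly if MegaCli is installed in the OS
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aAll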