RAID1 – mdadm mirror – not performing parallel reads as expected

mdadm raid software-raid

We have a three-way RAID 1 mirror managed by mdadm. My understanding is that mdadm should distribute simultaneous read requests across the different drives in the mirror (parallelizing the reads) to improve read performance. But in our testing, watching the output of iostat -xm 1, only /dev/sda is being used, even though I/O to that device is being saturated by reads from five different md devices.

Am I misunderstanding something? Does mdadm need to be configured differently? Does our version (CentOS 6.7) not support this? I'm not sure why it's behaving this way.

Benchmark setup – run the following commands simultaneously:

dd if=/dev/md2 bs=1048576 of=/dev/null count=25000
dd if=/dev/md3 bs=1048576 of=/dev/null count=25000
dd if=/dev/md4 bs=1048576 of=/dev/null count=25000
dd if=/dev/md5 bs=1048576 of=/dev/null count=25000
dd if=/dev/md6 bs=1048576 of=/dev/null count=25000
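For reference, the five reads above can be launched in parallel from a single shell, assuming Bash and the device names used in this setup:

```shell
# Read 25000 MiB from each md device concurrently;
# '&' backgrounds each dd, and 'wait' blocks until all five finish.
for md in md2 md3 md4 md5 md6; do
  dd if=/dev/$md bs=1048576 of=/dev/null count=25000 &
done
wait
```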

While those are running, watch the output of iostat -xm 1 (sample output included below; the mirror is made up of sda, sdb, and sdc).

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda           100669.00     0.00 10710.00    0.00   435.01     0.00    83.18    33.28    3.11   0.09 100.00
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
md1               0.00     0.00 19872.00    0.00    77.62     0.00     8.00     0.00    0.00   0.00   0.00
md2               0.00     0.00 18272.00    0.00    71.38     0.00     8.00     0.00    0.00   0.00   0.00
md5               0.00     0.00 18272.00    0.00    71.38     0.00     8.00     0.00    0.00   0.00   0.00
md7               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
md6               0.00     0.00 18240.00    0.00    71.25     0.00     8.00     0.00    0.00   0.00   0.00
md4               0.00     0.00 18208.00    0.00    71.12     0.00     8.00     0.00    0.00   0.00   0.00
md3               0.00     0.00 18528.00    0.00    72.38     0.00     8.00     0.00    0.00   0.00   0.00
md0               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

Best Answer

Perform the test again, but change it so that all five reads target the same MD device (e.g. /dev/md2), and you should see the reads being distributed across the mirror.
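A minimal way to rerun the benchmark that way, assuming the same device names as in the question, is to background five readers against a single array:

```shell
# Five concurrent readers on the SAME array; the md layer should
# balance these requests across sda, sdb, and sdc.
for i in 1 2 3 4 5; do
  dd if=/dev/md2 bs=1048576 of=/dev/null count=25000 &
done
wait
```

With this variant, iostat -xm 1 should show non-zero r/s on all three member disks instead of just sda.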

A single read operation will only read from one drive in a mirror. It will start with the first disk assigned to the mirror, which in this case looks to be /dev/sda. Since you have 5+ MD devices configured and are performing a single read operation from each device, they're all pulling from /dev/sda.

I would recommend doing away with the multiple MD devices and using a single array spanning the entire SSD.

Alternatively, alter your testing method so that it tasks several different drives. Take a look at bonnie++; it's pretty spiffy.
