RAID 10 + LVM layout


I have a disk shelf with 30x 6TB disks and a hardware RAID controller. I plan to set those up as RAID 10, giving me 90TB of usable space for write-heavy workloads.

The minimum number of disks for RAID 10 is 4, and while each additional pair of disks still increases IOPS, the gain per pair diminishes. I found that an (unofficial) optimum is around 6 to 8 disks per RAID 10 volume (giving about 225-400 write IOPS with 7.2k SATA drives).
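(The back-of-the-envelope arithmetic behind that range, using my own assumed per-disk figures: a 7.2k SATA drive sustains roughly 75-100 random IOPS, and RAID 10 sends every write to both halves of a mirror, so an n-disk volume sustains about n * per-disk-IOPS / 2 random write IOPS. That gives 6 * 75 / 2 = 225 at the low end and 8 * 100 / 2 = 400 at the high end.)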

Most examples on the web cover 4-disk setups. I found only one resource implicitly suggesting building multiple small RAID 10 volumes and concatenating (not striping) them using LVM.

What would be the better (i.e. technically sound) setup:

  • 5x 6-disk RAID 10 volumes concatenated by LVM (roughly as sketched below)
  • 1x 30-disk RAID 10 (if supported by the controller)
  • any alternative solution I am missing
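For the first option, the LVM side would look roughly like this. A minimal sketch, assuming the controller exposes the five 6-disk RAID 10 volumes to the OS as /dev/sda through /dev/sde (placeholder names; check lsblk on your system), with vg_data and lv_data as made-up names:

    # Mark each hardware RAID 10 volume as an LVM physical volume
    pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Pool all five into one volume group
    vgcreate vg_data /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Carve out a logical volume spanning all free space; lvcreate's
    # default allocation policy is linear, i.e. concatenated, not striped
    lvcreate -n lv_data -l 100%FREE vg_data

(Striping across the five volumes instead would add -i 5 to the lvcreate call; the concatenated layout leaves each RAID 10 volume's I/O pattern untouched and makes it easy to grow the volume group later.)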

Best Answer

Choosing a RAID strategy really depends on your workload. If you have something like a database, where IOPS matter more than bandwidth, then you would want a RAID strategy that prioritizes IOPS.

In some cases, read performance (for something like a storage archive) is more important than write performance, so that becomes the priority.

In general, the more disks and stripes you have to distribute your write load across, the faster writes become. Higher RAID levels (RAID 6) are usually chosen as a reliability tradeoff, mitigated by adding a stripe on top (RAID 60) to win back some write performance, but this is a very broad generalization.

In your case, I would start with a 4-disk RAID 10 as a baseline and test it. Add the LVM layer next and test again. Then bump up the spindle count, test, try RAID 60, test, and so on. Your choice of controller card also affects performance, as they are not all created equal. Bonnie++ is a good utility for testing read/write performance.
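A starting point for those test runs, as a sketch rather than a tuned benchmark: /mnt/array and the raid10-4disk label below are placeholders, and -s should be at least twice the machine's RAM so the page cache does not mask the disks:

    # Block read/write/rewrite test against the mounted array;
    # -n 0 skips the small-file creation phase, -u is required when run as root
    bonnie++ -d /mnt/array -s 64g -n 0 -m raid10-4disk -u root

Repeat the same run on each candidate layout (4-disk baseline, plus LVM, more spindles, RAID 60) and compare results between runs rather than reading any single figure in isolation.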
