Linux – Slow disk I/O in KVM with LVM and md raid5

debian, kvm-virtualization, linux, lvm, mdadm

I have been battling with a KVM setup under Debian for two weeks now, specifically with guest disk I/O performance.

The system:

- Supermicro 1018D-73MTF (X10SL7-F motherboard)
- 16 GB ECC/UB
- Intel Xeon E3-1240 v3
- 6× WD Red 750 GB 6 Gb/s

On this I am running Debian Wheezy on two of the disks; the other four are set up with md as a RAID5, with LVM on top for guest storage.
Performance directly on the RAID5 (measured by creating an LV, mounting it, and running bonnie++ and dd tests) is fine, giving me ~220/170 MB/s read/write, but in guests I get decent reads and only 40-50 MB/s writes, tested on both Windows (Server 2012) and Linux (Debian) guests.
I have read up on aligning disks and partitions and recreated the RAID and LVM setup by the book, but haven't seen any performance improvement.
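
For reference, this is roughly how the host-side numbers were gathered (the VG/LV names and sizes here are placeholders, not the actual ones):

    lvcreate -L 20G -n lv_bench vg_guests
    mkfs.ext4 /dev/vg_guests/lv_bench
    mkdir -p /mnt/bench && mount /dev/vg_guests/lv_bench /mnt/bench
    # conv=fdatasync so the figure isn't just page-cache speed
    dd if=/dev/zero of=/mnt/bench/testfile bs=1M count=8192 conv=fdatasync
    # bonnie++ defaults to a data set of 2x RAM, which defeats caching
    bonnie++ -d /mnt/bench -u root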

When measuring load with atop during writes directly from the host, I can see the disks and LVM under heavy load, but measuring while a guest is writing shows the disks at only ~20-30% busy while LVM goes "red" (100%).

The usual KVM/host tweaks have been done: scheduler set to deadline, stripe cache raised on the RAID, cache=none on the guests, and the SAS controller (LSI 2308) reflashed to IT mode. I am out of ideas. Here is a pastebin with the relevant details of the setup, in the hope that someone spots something I've done wrong: http://pastebin.com/vuykxeVg
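
Concretely, the tweaks above look like this (disk and md device names are examples):

    # deadline elevator on each disk backing the array
    for d in sdc sdd sde sdf; do
        echo deadline > /sys/block/$d/queue/scheduler
    done
    # raise the md RAID5 stripe cache (the default is 256 pages)
    echo 8192 > /sys/block/md0/md/stripe_cache_size
    # cache=none is set per disk in each guest's libvirt XML:
    #   <driver name='qemu' type='raw' cache='none'/>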

If you need anything else I'll paste it.

Edits:

This is basically how the drives, md, and LVM are set up, with some changes because I am running 3 disks + a spare: http://dennisfleurbaaij.blogspot.se/2013/01/setting-up-linux-mdadm-raid-array-with.html

Screenshots of atop during host and guest write tests (bonnie++):

Host:
http://i.imgur.com/IsTprqA.png

Guest:
http://i.imgur.com/uVmhFCK.png

Best Answer

I am not sure this covers the whole problem, but with this storage configuration you cannot get proper alignment.

Let's see:

  • You can align partition boundaries to the RAID stripe size; that is fine.
  • You can set the file-system optimization parameters accordingly; that is also fine (see the sketch after this list).

  • But in order to align LVM properly, the RAID stripe needs to fit evenly into the LVM extent.
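
For the first two points, a sketch, assuming a 512 KiB chunk and 4 data disks (adjust the numbers to your geometry; device and LV names are examples):

    # 1 MiB-aligned partition on a member disk
    parted /dev/sdc mklabel gpt
    parted -a optimal /dev/sdc mkpart primary 1MiB 100%
    # tell ext4 about the RAID geometry (4 KiB blocks):
    #   stride       = chunk / block       = 512 KiB / 4 KiB = 128
    #   stripe-width = stride * data disks = 128 * 4         = 512
    mkfs.ext4 -b 4096 -E stride=128,stripe-width=512 /dev/vg_guests/lv_test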

The LVM extent size is always a power of 2, so your RAID stripe size also needs to be a power of 2. Since one disk per stripe holds parity, that means the number of disks in the RAID5 must be 2^N + 1 = 3, 5, 9, ...

With 4 disks in RAID5 that is impossible: each stripe spans 3 data disks, and 3 is not a power of 2.
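
To make the arithmetic concrete, assuming the common 512 KiB chunk size:

    # 4-disk RAID5 -> 3 data disks per stripe:
    #   stripe = 3 * 512 KiB = 1536 KiB
    #   the default 4 MiB LVM extent (4096 KiB) is not a multiple of
    #   1536 KiB, so extents inevitably straddle stripe boundaries
    # 5-disk RAID5 -> 4 data disks per stripe:
    #   stripe = 4 * 512 KiB = 2048 KiB, which divides 4096 KiB evenly
    #   and can then be matched when creating the PV:
    pvcreate --dataalignment 2048k /dev/md0
    vgcreate -s 4M vg_guests /dev/md0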

Since software RAID5 has no protected (battery-backed) write-back cache, it is particularly vulnerable to the partial-stripe-write penalty: every write that does not cover a full stripe forces a read-modify-write of the parity.

You may well have other causes limiting write performance too, but the first thing I would do is migrate to RAID10. With RAID10 across all 6 disks you can get read performance and guest storage capacity comparable to your current setup ... and no alignment headache ;).
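
A minimal sketch of that end state (it destroys the array contents, so only after migrating everything off; device names are examples):

    mdadm --create /dev/md1 --level=10 --raid-devices=6 --chunk=512 \
          /dev/sd[a-f]1
    pvcreate /dev/md1
    vgcreate vg_guests /dev/md1
    # no parity, so partial stripe writes cost no read-modify-write cycle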