Performance of software RAID 5 in XenServer 5.5

performance, xenserver

Does anyone have hands-on experience to share regarding software RAID performance under XenServer 5.5?

I've recently moved from VMware Server 1.x on Ubuntu to XenServer 5.5, hoping for a performance increase from paravirtualization. Unfortunately, I'm seeing very poor performance on my md-based software RAID.

While the XenServer host sees numbers above 100 MB/s, the paravirtualized guests can't get much more than 20 MB/s.
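
In case anyone wants to reproduce the comparison: below is a minimal Python sketch of the kind of sequential-write test behind those numbers, run once in dom0 and once in a guest against the same underlying storage. The path and sizes are placeholders, and since it goes through the page cache (with a final fsync) it only approximates raw dd-style throughput.

    #!/usr/bin/env python3
    # Minimal sequential-write throughput check. Run once in dom0 and once
    # in a guest against the same underlying storage, then compare.
    import os
    import time

    PATH = "/tmp/throughput_test.bin"  # placeholder: point at the storage under test
    CHUNK = 1024 * 1024                # 1 MiB per write
    TOTAL = 1024                       # 1 GiB total

    buf = os.urandom(CHUNK)            # incompressible data

    start = time.monotonic()
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        for _ in range(TOTAL):
            os.write(fd, buf)
        os.fsync(fd)                   # make sure the data actually hit the disk
    finally:
        os.close(fd)
    elapsed = time.monotonic() - start

    os.unlink(PATH)
    print(f"sequential write: {TOTAL / elapsed:.1f} MB/s")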

Is this to be expected, or should I look for a problem in my configuration?

Edit:

I realize that software RAID isn't ideal on a virtualization host, but for a home lab I can't justify a real hardware RAID controller, and I still want some level of storage redundancy.

The host is not showing high CPU usage, high system load, or even excessive I/O wait cycles. This, combined with the fact that VMware Server 1.x gave at least twice the I/O performance, suggests there's some issue with the paravirtualization, possibly due to missing functionality in my hardware.
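
For the record, the I/O wait figure is easy to check by sampling /proc/stat in dom0 while a guest benchmark is running; a minimal Python sketch of that check (the 5-second window is arbitrary):

    #!/usr/bin/env python3
    # Sample /proc/stat twice and report the fraction of CPU time spent in
    # iowait over the window. The cpu line is: user nice system idle iowait ...
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    a = cpu_times()
    time.sleep(5)                      # arbitrary 5-second sample window
    b = cpu_times()

    delta = [y - x for x, y in zip(a, b)]
    print(f"iowait: {100.0 * delta[4] / sum(delta):.1f}% of CPU time")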

Since XenServer's full hardware virtualization isn't all that great, I guess I'll be going back to VMware Server on Ubuntu, which also gives me a chance to try out the new 2.x version.

Best Answer

Linux software RAID is damn good; it beats low-end RAID controllers and usually matches the performance of mid-range ones.
I recently ran some performance tests on a couple of virtualization technologies. The disk I/O performance loss in Xen VMs (XenServer 5.5, in this case) was about 70%. I used iozone, which tests 10+ read/write patterns. The machine had 2x 160 GB SATA-II drives in software RAID 1. Note that the 70% speed penalty can vary depending on the type of disk operation.
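iozone is the right tool for a full run, but to illustrate why the penalty varies with the access pattern, here is a minimal Python sketch contrasting sequential and random 4 KiB reads over the same file. The path and size are assumptions; in practice the file needs to be larger than RAM (or caches dropped between runs) for the numbers to mean much.

    #!/usr/bin/env python3
    # Contrast sequential vs random 4 KiB reads over one test file. This
    # only shows the two extremes of the patterns a tool like iozone covers.
    import os
    import random
    import time

    PATH = "/tmp/pattern_test.bin"     # assumption: place on the storage under test
    BLOCK = 4096
    BLOCKS = 65536                     # 256 MiB file; use more than RAM in practice

    with open(PATH, "wb") as f:        # create the test file once
        f.write(os.urandom(BLOCK * BLOCKS))

    def read_pattern(offsets):
        fd = os.open(PATH, os.O_RDONLY)
        start = time.monotonic()
        for off in offsets:
            os.lseek(fd, off * BLOCK, os.SEEK_SET)
            os.read(fd, BLOCK)
        elapsed = time.monotonic() - start
        os.close(fd)
        return (len(offsets) * BLOCK / (1024 * 1024)) / elapsed

    seq = read_pattern(range(BLOCKS))                       # in-order pass
    rnd = read_pattern(random.sample(range(BLOCKS), BLOCKS))  # shuffled pass
    print(f"sequential: {seq:.1f} MB/s, random: {rnd:.1f} MB/s")
    os.unlink(PATH)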
One thing you can do in XenServer is set a higher or lower priority on certain storage resources (click around, it should be in there), which helps I/O-intensive VMs a little. But that's pretty much it: if you want VMs, you have to pay the price :)

If you want to run Linux VMs on a Linux host, using containers would get you better performance. For example, OpenVZ's disk I/O performance loss is around 7%.