CentOS – LVM volume group shared between KVM/libvirt host and guests: is this a bad idea?

centos, kvm-virtualization, lvm, software-raid, storage

I have just built a shiny new KVM/libvirt-based virtual machine host, containing 4 SATA II hard drives, and running CentOS 5.5 x86_64.

I have decided to create virtual machine disks as logical volumes in an LVM volume group managed as a libvirt storage pool, instead of the usual practice of creating the disks as qcow images.

What I can't decide on is whether I should create the virtual machine logical volumes in the VM host's volume group, or in a dedicated volume group.

Which method should I choose, and why?


Method 1: Use the VM host's volume group

Implementation:

  • small RAID1 md0 containing the /boot filesystem
  • large RAID10 md1 occupying the remaining space, which contains an LVM volume group vghost. vghost contains the VM host's root filesystem and swap partition
  • create virtual machine disks as logical volumes in vghost as required
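
For concreteness, this is roughly how I carve a guest disk out of vghost and point a guest at it (the LV name, size, and XML snippet below are just illustrative):

    # carve a 20GB guest disk out of the host's own VG
    lvcreate -L 20G -n vm-guest1 vghost

    # the guest's libvirt XML then references the LV directly:
    #   <disk type='block' device='disk'>
    #     <source dev='/dev/vghost/vm-guest1'/>
    #     <target dev='vda' bus='virtio'/>
    #   </disk>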

Pros:

  • if the VM host's root filesystem runs out of space, I can allocate more space from vghost with relative ease (see the sketch after this list)
  • the system is already up and running (but it is no big deal to start over)
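
For example, growing the host's root filesystem from free extents in vghost is a two-step job (assuming a hypothetical root LV named lv_root on ext3, which CentOS 5 can grow while mounted):

    # give the root LV another 5GB from vghost's free extents
    lvextend -L +5G /dev/vghost/lv_root

    # grow the ext3 filesystem to fill the enlarged LV (online resize)
    resize2fs /dev/vghost/lv_root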

Cons:

Although this method seems to work, I can't shake the feeling that it is somehow a bad idea. I feel that:

  • this may somehow be a security risk
  • at some point in the future I may find some limitation with the setup, and wish that I used a dedicated group
  • the system (CentOS, libvirt, etc.) may not really be designed to be used like this, and therefore at some point I might accidentally corrupt/lose the VM host's files and/or filesystem

Method 2: Use a dedicated volume group

Implementation:

  • same md0 and md1 as in Method 1, except make md1 just large enough for the VM host (e.g. 5 to 10GB)
  • large RAID10 md2 occupying the remaining space. md2 contains an LVM volume group vgvms, whose logical volumes are to be used exclusively by virtual machines (sketched below)
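
For reference, I imagine the Method 2 setup would look something like this (partition names are placeholders, and the pool commands assume libvirt's standard "logical" pool type):

    # build the dedicated RAID10 array from the remaining partitions
    mdadm --create /dev/md2 --level=10 --raid-devices=4 \
        /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

    # turn it into a volume group reserved for guests
    pvcreate /dev/md2
    vgcreate vgvms /dev/md2

    # register it with libvirt as a "logical" storage pool
    virsh pool-define-as vgvms logical --source-name vgvms --target /dev/vgvms
    virsh pool-start vgvms
    virsh pool-autostart vgvms

    # guest disks can then be allocated through libvirt itself
    virsh vol-create-as vgvms guest1-disk0 20G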

Pros:

  • I can tinker with vgvms without fear of breaking the host OS
  • this seems like a more elegant and safe solution

Cons:

  • if the VM host's filesystem runs out of space, I would have to move parts of its filesystem (e.g. /usr or /var) onto vgvms, which doesn't seem very nice (see the sketch after this list)
  • I have to reinstall the host OS (which as previously stated I don't really mind doing)
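
To make that first con concrete, relocating a tree such as /var would presumably go something like this (hypothetical LV name and size; best done in single-user mode so nothing is writing to /var):

    # create and format a new LV in the guests' VG
    lvcreate -L 10G -n lv_hostvar vgvms
    mkfs.ext3 /dev/vgvms/lv_hostvar

    # copy the tree across while nothing is writing to it
    mount /dev/vgvms/lv_hostvar /mnt
    cp -ax /var/. /mnt/
    umount /mnt

    # mount the new LV over /var from now on
    echo '/dev/vgvms/lv_hostvar  /var  ext3  defaults  1 2' >> /etc/fstab
    mount /var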

UPDATE #1:

One reason why I am worried about running out of VM host disk space in Method 2 is that I don't know if the VM host is powerful enough to run all services in virtual machines, i.e. I may have to migrate some/all services from virtual machines to the host OS.

VM host hardware specification:

  • Phenom II 955 X4 Black Edition processor (3.2GHz, 4-core CPU)
  • 2x4GB Kingston PC3-10600 DDR3 RAM
  • Gigabyte GA-880GM-USB3 motherboard
  • 4x WD Caviar RE3 500GB SATA II HDDs (7200rpm)
  • Antec BP500U Basiq 500W ATX power supply
  • CoolerMaster CM 690 case

UPDATE #2:

One reason why I feel that the system may not be designed to use the host VG as a libvirt storage pool in Method 1 is some behaviour I noticed in virt-manager:

  • upon add, it complained that it couldn't activate the VG (obviously, because the host OS has already activated it)
  • upon remove, it refused to do so because it couldn't deactivate the VG (obviously, because the host OS is still using the root and swap LVs)
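
Reproducing this from the shell suggests the cause: libvirt's "logical" pool backend apparently wants to own VG activation (starting a pool seems to run vgchange -ay, stopping it vgchange -an), which would conflict with a VG the host booted from. A hypothetical session:

    # defining the host's own VG as a logical pool is accepted...
    virsh pool-define-as vghost logical --source-name vghost --target /dev/vghost

    # ...but libvirt cannot cleanly manage it: pool-destroy effectively
    # runs "vgchange -an vghost", which fails while the host's root and
    # swap LVs are in use (a guests-only VG avoids this entirely)
    virsh pool-destroy vghost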

Best Answer

Well thought-out question!

I'd go with Method 2, but that's more of a personal preference. To me, the Method 2 Cons aren't much of an issue. I don't see the host OS outgrowing its 5-10GB partition, unless you start installing extra stuff on it, which you really shouldn't. For the sake of simplicity and security, the host OS really should be a bare minimal install, not running anything except the bare minimum needed for administration (e.g. sshd).

The Method 1 Cons aren't really an issue either, IMO. I don't think there would be any extra security risk, since if a rooted VM is somehow able to break out of its partition and infect/damage other partitions, having the host OS on a separate VG might not make any difference. The other two Cons are not something I can speak to from direct experience, but my gut says that CentOS, LVM, and libvirt are flexible and robust enough not to worry about them.

EDIT - Response to Update 1

These days, the performance hit of virtualization is very low, especially with processors that have built-in support for it, so I don't think moving a service from a guest VM into the host OS would ever be worth doing. You might get a 10% speed boost by running on the "bare metal", but you would lose the benefits of having a small, tight, secure host OS, and potentially impact the stability of the whole server. Not worth it, IMO.

In light of this, I would still favour Method 2.

Response to Update 2

It seems that the particular way libvirt assumes storage is laid out is yet another point in favour of Method 2. My recommendation is: go with Method 2.
