First, install some requirements. (This list may include more packages than are strictly needed.)
yum -y groupinstall "X Window System"
yum -y groupinstall "Virtualization Client"
yum -y groupinstall "Virtualization"
yum -y groupinstall "Virtualization Platform"
yum -y groupinstall "Virtualization Tools"
yum -y groupinstall "Desktop"
yum -y install xorg-x11-fonts-100dpi
yum -y install xorg-x11-fonts-75dpi
yum -y install xorg-x11-fonts-Type1 xorg-x11-font-utils
yum -y install man
yum -y install emacs
As stated in the question, we already have an LVM volume group:
[root@server ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg0 1 3 0 wz--n- 8.18t 97.90g
First we define a libvirt storage pool backed by that LVM volume group:
[root@server ~]# cat /tmp/foobar
<pool type='logical'>
<name>pool0</name>
<target>
<path>/dev/vg0</path>
</target>
</pool>
[root@server ~]# virsh pool-define /tmp/foobar
Pool pool0 defined from /tmp/foobar
[root@server ~]# virsh pool-start pool0
Pool pool0 started
[root@server ~]# virsh pool-autostart pool0
Pool pool0 marked as autostarted
[root@server ~]# virsh pool-list
Name State Autostart
-----------------------------------------
pool0 active yes
By default, libvirt already has a virtual network configured, named default. In this example we will redefine that virtual network so that we can use it for PXE installs.
[root@server ~]# virsh net-list
Name State Autostart
-----------------------------------------
default active yes
[root@server ~]# emacs /tmp/default.xml
[root@server ~]# cat /tmp/default.xml
<network>
<name>default</name>
<forward mode='nat'/>
<bridge name='virbr0' stp='on' delay='0' />
<ip address='10.0.0.1' netmask='255.255.0.0'>
<tftp root='/var/lib/dnsmasq/tftpboot' />
<dhcp>
<range start='10.0.0.2' end='10.0.255.254' />
<host mac='02:54:00:13:be:e4' name='virt1.example.com' ip='10.0.0.2' />
<host mac='02:52:2c:a3:11:42' name='virt2.example.com' ip='10.0.0.3' />
<bootp file='/pxelinux.0' />
</dhcp>
</ip>
</network>
The MAC addresses 02:54:00:13:be:e4 and 02:52:2c:a3:11:42 seen above are just randomly generated addresses (see the Server Fault question "How to generate a random MAC address from the Linux command line").
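One way to generate such an address (a sketch of my own, not necessarily the command from the linked question; it assumes bash for $RANDOM) is to keep the locally-administered 02: prefix and randomize the rest:

```shell
# Print a random MAC address with the locally-administered 02: prefix,
# matching the style of the addresses used above.
printf '02:%02x:%02x:%02x:%02x:%02x\n' \
    $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)) \
    $((RANDOM % 256)) $((RANDOM % 256))
```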
[root@server ~]# virsh net-destroy default
Network default destroyed
[root@server ~]# virsh net-undefine default
Network default has been undefined
[root@server ~]# virsh net-define /tmp/default.xml
Network default defined from /tmp/default.xml
[root@server ~]# virsh net-start default
Network default started
[root@server ~]# virsh net-autostart default
Network default marked as autostarted
[root@server ~]# mkdir /var/lib/dnsmasq/tftpboot
[root@server ~]# ls -lZd /var/lib/dnsmasq/tftpboot
drwxr-xr-x. root root unconfined_u:object_r:dnsmasq_lease_t:s0 /var/lib/dnsmasq/tftpboot
[root@server ~]# yum install syslinux
[root@server ~]# rpm -ql syslinux | grep pxelinux.0
/usr/share/syslinux/gpxelinux.0
/usr/share/syslinux/pxelinux.0
[root@server ~]# cp /usr/share/syslinux/pxelinux.0 /var/lib/dnsmasq/tftpboot/
[root@server ~]# cd /var/lib/dnsmasq/tftpboot/
[root@server tftpboot]# wget -O centos-6-vmlinuz.x86_64 http://ftp.funet.fi/pub/Linux/mirrors/centos/6.0/os/x86_64/images/pxeboot/vmlinuz
[root@server tftpboot]# wget -O centos-6-initrd.img.x86_64 http://ftp.funet.fi/pub/Linux/mirrors/centos/6.0/os/x86_64/images/pxeboot/initrd.img
[root@server tftpboot]# mkdir /var/lib/dnsmasq/tftpboot/pxelinux.cfg
[root@server tftpboot]# cd /var/lib/dnsmasq/tftpboot/pxelinux.cfg
The MAC address 02:54:00:13:be:e4 used above needs the configuration filename 01-02-54-00-13-be-e4. In other words, prepend 01- and convert each : into -.
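The same conversion can be scripted; here is a sketch using bash parameter expansion (the 01- prefix is the zero-padded ARP hardware type for Ethernet):

```shell
# Derive the pxelinux.cfg filename from a guest's MAC address:
# prepend "01-" and turn every ":" into "-".
mac='02:54:00:13:be:e4'
cfg="01-${mac//:/-}"
echo "$cfg"   # prints 01-02-54-00-13-be-e4
```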
[root@server pxelinux.cfg]# emacs 01-02-54-00-13-be-e4
[root@server pxelinux.cfg]# cat 01-02-54-00-13-be-e4
default local
prompt 1
timeout 50

label local
  localboot 0

label install
  kernel /centos-6-vmlinuz.x86_64
  append initrd=/centos-6-initrd.img.x86_64 ks=http://www.example.com/kickstart-files/virt1.example.com.txt device=eth0 ramdisk_size=9216 lang= devfs=nomount
[root@server pxelinux.cfg]# cd
Here we assumed that the kickstart file for virt1.example.com can be downloaded from http://www.example.com/kickstart-files/virt1.example.com.txt
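The contents of that kickstart file are outside the scope of this answer, but a minimal CentOS 6 kickstart might look roughly like this (the mirror URL and root password below are placeholders, not values from my setup):

```
install
url --url=http://ftp.funet.fi/pub/Linux/mirrors/centos/6.0/os/x86_64/
lang en_US.UTF-8
keyboard us
network --device eth0 --bootproto dhcp --hostname virt1.example.com
rootpw changeme
timezone UTC
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end
```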
Now we run service libvirtd reload. This seems to be required for the dnsmasq TFTP server to work properly.
[root@server ~]# service libvirtd reload
Reloading libvirtd configuration: [ OK ]
Now run virt-install to create the KVM guest virt1.example.com with 20 GB of disk space.
[root@server ~]# virt-install --debug --hvm --vnc --name virt1.example.com --os-type=linux --os-variant=rhel6 --pxe --network network=default,model=e1000,mac=02:54:00:13:be:e4 --disk pool=pool0,size=20 --ram 1024 --vcpus=1
Now the graphical program virt-viewer will pop up an X window. When you see the "boot: " prompt during the boot sequence, type install.
A note about the virt-install command line options: using model=virtio didn't work for me, but luckily model=e1000 worked just fine.
Best Answer
It is entirely up to you, depending on your requirements in terms of storage capacity, performance, and redundancy.
As you can guess, assigning multiple virtual drives to a single VM is possible. The VM sees each virtual drive as a separate block device (such as /dev/vda and /dev/vdb).
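As a sketch (the image paths are hypothetical), attaching two image files to a guest in its libvirt domain XML could look like this; the guest then sees them as /dev/vda and /dev/vdb:

```
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/guests-ssd/virt1-system.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/mnt/guests-hdd/virt1-data.img'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```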
Let's say your VMs should have SSD performance for system boot and program execution, plus slower but bigger storage for files (such as media).
You can assemble the two SSDs into a RAID-1 array and install the host system on it. Then you have a choice: use the whole array for the host system and store the VMs' SSD-backed images in a directory (not recommended), or install the host system in a smaller partition (as small as possible, but with a margin) and use the remaining space for another partition mounted at /mnt/guests-ssd/ (recommended).
You can assemble all the HDDs into a single RAID-10 or RAID-5 array, create a single very large partition on it, and mount it at /mnt/guests-hdd/, for example.
Both RAID arrays benefit from redundancy, and the RAID-10 or RAID-5 array made from HDDs will get better read/write performance than a single HDD.
A first advantage of this architecture is that the guests' drives are stored as files in two partitions/directories, /mnt/guests-ssd/ and /mnt/guests-hdd/, so the images can easily be transferred for backups or migrations. A second advantage is that the real HDDs' capacity is abstracted away: virtual drives in /mnt/guests-hdd/ can be smaller or bigger than 558 GB.
A disadvantage of this scenario is that you don't have a lot of SSD capacity. If the SSDs are in a RAID-1 array, you only get 185 GB of SSD storage for the host and guest systems, which is not a lot given the number of VMs you can create with that much RAM. There might be a disproportion between your resources (RAM vs SSD vs HDD), but it depends on your needs. If you want to create multiple VMs for storage, they will require a small SSD footprint (just the OS plus NFS/FTP/...) and large HDDs. If you want many VMs with databases (lots of IOPS) or other disk-intensive applications, you should replace the two SSDs with bigger ones, and probably replace a few HDDs with SSDs, or use caching solutions as suggested by other people here.
Knowing that you can pass block devices to VMs, and that RAID arrays are presented to the host system as block devices, you can give a VM direct access to a RAID array. The VM will not be aware of the RAID mechanism behind the block device, but this method is less flexible than the previous one: the block's size will be a multiple of the HDD size.
If one VM requires medium storage (500 GiB), you can create a RAID-1 array out of 2 HDDs and pass this block/array to the VM.
If one VM requires large storage, you can build a RAID-10 array with 4 HDDs, so it gets a virtual 1.1 TiB drive with redundancy and improved read/write performance (2x faster than a single HDD).
If one VM requires XL storage, you can build a RAID-10 array with 8 HDDs, so you get a 2.2 TiB block with redundancy and improved read/write performance (4x faster than a single HDD).
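The usable capacities above follow from RAID-1 and RAID-10 keeping half of the raw capacity. A quick sanity check (assuming 558 GB per HDD, the figure mentioned earlier; the original sizes are rounded loosely):

```shell
disk_gb=558                   # capacity of a single HDD, in GB (assumed)
echo $(( 2 * disk_gb / 2 ))   # RAID-1,  2 disks:  558 GB usable
echo $(( 4 * disk_gb / 2 ))   # RAID-10, 4 disks: 1116 GB (~1.1 TB) usable
echo $(( 8 * disk_gb / 2 ))   # RAID-10, 8 disks: 2232 GB (~2.2 TB) usable
```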
You can see that there are more choices to make and more configuration to do. There are very few scenarios that require this kind of setup.
KVM does not manage storage/drives on the host. Libvirt allows you to configure storage pools (local, over the network, ...), but it will not configure RAID (hardware or software), it will not build your architecture for you, and it will not make decisions for you about how to plan your storage, network, nodes, and other resources.
Now it's up to you to have fun with this beast and play with KVM/libvirt ;)