Well thought-out question!
I'd go with Method 2, but that's more of a personal preference. To me, the Method 2 Cons aren't much of an issue. I don't see the host OS outgrowing its 5-10GB partition, unless you start installing extra stuff on it, which you really shouldn't. For the sake of simplicity and security, the host OS really should be a bare-minimum install, running nothing except what is needed for administration (e.g. sshd).
The Method 1 Cons aren't really an issue either, IMO. I don't think there would be any extra security risk, since if a rooted VM is somehow able to break out of its partition and infect/damage other partitions, having the host OS on a separate VG might not make any difference. The other two Cons are not something I can speak to from direct experience, but my gut says that CentOS, LVM, and libvirt are flexible and robust enough not to worry about them.
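Assuming Method 2 means a small plain partition for the host OS plus a separate VG dedicated to guest storage (the device names, VG name, and sizes below are all hypothetical), the setup might be sketched as:

```shell
# Hypothetical layout sketch -- adjust device names and sizes to your hardware.
# /dev/sda1: ~10GB plain partition holding the minimal host OS (mounted as /)
# /dev/sda2: the rest of the disk, reserved for guest storage

pvcreate /dev/sda2                  # initialise the partition as an LVM physical volume
vgcreate vg_guests /dev/sda2        # a volume group used only for VM disks
lvcreate -L 20G -n vm01 vg_guests   # one logical volume per guest disk
```

Keeping the host OS outside `vg_guests` means guest storage operations never touch the volumes the host boots from.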
EDIT - Response to Update 1
These days, the performance hit of virtualization is very low, especially using processors with built-in support for it, so I don't think moving a service from a guest VM into the host OS would ever be worth doing. You might get a 10% speed boost by running on the "bare metal", but you would lose the benefits of having a small, tight, secure host OS, and potentially impact the stability of the whole server. Not worth it, IMO.
In light of this, I would still favour Method 2.
Response to Update 2
It seems that the particular way that libvirt assumes storage is laid out is yet another point in favour of Method 2. My recommendation is: go with Method 2.
I think the performance impact of LVM is really minimal, except if you use snapshots, which can have a big impact - best if snapshots are used during backups only. This answer agrees.
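To keep the snapshot cost confined to the backup window, the usual pattern is create, back up, remove. A sketch (VG/LV names and sizes here are hypothetical):

```shell
# Snapshot-for-backup sketch -- requires root and free extents in the VG.
lvcreate -s -L 5G -n lv_data_snap /dev/vg0/lv_data   # snapshot with 5G of copy-on-write space
mount -o ro /dev/vg0/lv_data_snap /mnt/snap          # mount read-only for a consistent backup
tar -czf /backup/lv_data.tar.gz -C /mnt/snap .
umount /mnt/snap
lvremove -f /dev/vg0/lv_data_snap   # drop the snapshot so writes stop paying the COW penalty
```

The performance hit only lasts while the snapshot exists, since every write to the origin LV triggers a copy-on-write into the snapshot area.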
For a detailed discussion of issues with LVM, with some mention of XFS, Advanced Format, RAID, etc., see LVM dangers and caveats.
Best Answer
The volume (LV) will go into partial mode (see the `p` flag in `lvs` output), but you may still be able to read and write to the disk unless the missing parts are accessed, which will result in I/O errors. (I am not saying it is a good idea to continue using a filesystem in such a state.) Some applications or filesystems may not handle I/O failures well, and you may lose some writes which have not made it to the disk, but with a journalling FS (like ext4) it is unlikely you would get the FS corrupted beyond repair.
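You can check for this condition in a script: in the `lv_attr` string printed by `lvs`, the 9th character is the health bit, and `p` there marks a partial LV. A small sketch (the sample `lvs` line below is fabricated for illustration):

```shell
# Parse a captured `lvs -o lv_name,lv_attr --noheadings` line (sample is fabricated).
# The 9th character of lv_attr is the health status; 'p' means partial (missing PV).
line="  lv_data -wi-a---p-"
attr=$(echo "$line" | awk '{print $2}')
health=$(echo "$attr" | cut -c9)
if [ "$health" = "p" ]; then
  echo "partial"
else
  echo "healthy"
fi
```

This prints `partial` for the sample line, so monitoring can alert you before applications start hitting I/O errors.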
You will not be able to activate or modify a partial logical volume (e.g. resize it), and that is fine. In general you do not want to activate it.
The worst thing you could do at this moment is to run fsck. Do not. Not until the volume is back. Otherwise you may as well say goodbye to a large part of your data.
If other LVs were added/removed while the disk was missing, you will need to run `vgextend --restoremissing VG PV`, which will make the volume group whole again (see the `m` flag in `pvs` output). The mounted FS may not fully recover, and you may need to unmount it first (optionally running fsck now) and mount it back.
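Once the disk is physically back, the recovery sequence might look like this (VG/LV names, device, and mount point are hypothetical; do not run any of it until the PV is actually visible again):

```shell
pvs                                        # confirm the PV is back; 'm' in pv_attr marks it missing
vgextend --restoremissing vg0 /dev/sdb1    # re-adopt the PV if the VG changed while it was gone
vgchange -ay vg0                           # re-activate the volume group
umount /srv/data                           # cycle the FS if it was still mounted
fsck /dev/vg0/lv_data                      # only now is fsck safe (optional)
mount /dev/vg0/lv_data /srv/data
```

The key point is the ordering: the VG must be whole again before fsck or any repair touches the filesystem.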
You may also want to consider setting up multipath (even with only one path), which is able to hide short-term outages from the system, as I/O will be queued.
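A minimal sketch of that queue-on-failure behaviour in `/etc/multipath.conf` (the values are assumptions; check your distribution's defaults before copying):

```
defaults {
    user_friendly_names yes
    no_path_retry       queue   # queue I/O instead of failing it while all paths are down
}
```

With `no_path_retry queue`, I/O issued during a short outage simply stalls until the path returns, instead of surfacing as errors to the filesystem.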