CPU pinning strategy for KVM / CentOS 7

centos7, kvm-virtualization, libvirt, pinning

I am migrating from Xen to KVM.

In Xen I could easily pin host CPUs to guest VMs, and also pin host CPUs to the "dom0".

In KVM I can also easily pin host CPUs to guest VMs, but as far as I can see, nothing prevents applications running on the host OS from using those same CPUs. I want to avoid the case where a program running on the host starves the guests or increases their latency.

I could manually set up an elaborate cgroup policy, but maybe I am just missing a setting in libvirt / CentOS 7?

There is also an "emulatorpin" setting for guests. Should I pin the emulator to dedicated host CPUs, or just restrict it to the guest's CPUs? The goal is to keep guest latency as low as possible.
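For reference, this is the kind of libvirt <cputune> block I mean (the cpuset values below are only placeholders, not my actual layout):

<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <!-- restrict the emulator threads to the guest's own CPUs here,
       or point them at dedicated host CPUs instead? -->
  <emulatorpin cpuset='2-3'/>
</cputune>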

Best Answer

If I understand your question correctly, what you want to achieve is to restrict the hypervisor so that it can only use a single CPU/core (or a limited number of them) for its own processes, interrupt handling and everything else, while all other cores can be assigned by libvirt to guest systems.

A relatively simple option is the isolcpus boot parameter, which allows you to isolate one or more CPUs from the scheduler. This prevents the scheduler from placing any user-space threads on those CPUs.

e.g. on your hypervisor, in /etc/default/grub, set:

GRUB_CMDLINE_LINUX="... quiet isolcpus=0,1"
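To apply this, regenerate the GRUB configuration and reboot (the path below assumes a BIOS install of CentOS 7; on a UEFI install the file lives under /boot/efi/EFI/centos/ instead):

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# after the reboot, confirm the parameter is active
cat /proc/cmdline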

that ought to keep any userspace program on the hypervisor off cores 0 and 1. In other words, the list you pass to isolcpus is the set of cores you want to reserve for guests, so in practice you would isolate every core you plan to hand out and leave the remainder to the host. Libvirt can then explicitly pin guest vCPUs onto the isolated cores (explicit pinning, such as vcpupin, still works there).
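For example (the domain name "guest1" and the vCPU count are placeholders), you could pin a two-vCPU guest onto the isolated cores and check the result:

# pin vCPUs 0 and 1 of "guest1" onto the isolated host cores 0 and 1
virsh vcpupin guest1 0 0
virsh vcpupin guest1 1 1
# confirm the resulting CPU affinity
virsh vcpuinfo guest1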

I'm not sure whether the isolcpus boot parameter also keeps interrupt handling off the isolated cores. If not, interrupts have their own affinity property, smp_affinity, which defines the processors that may handle a given interrupt request.

The affinity value for a particular interrupt request is stored in the associated /proc/irq/irq_number/smp_affinity file, and the system-wide default is set via /proc/irq/default_smp_affinity. smp_affinity is a hexadecimal bit mask covering all processors in the system; the default has every bit set (for example, f on a four-processor machine), meaning the interrupt can be handled on any processor. Setting the value to 1 means that only processor 0 can handle it.
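As a sketch (the IRQ number 30 is only illustrative), you could keep a busy interrupt off the isolated cores like this:

# show the system-wide default mask
cat /proc/irq/default_smp_affinity
# allow only CPU 2 (bitmask 0x4) to handle IRQ 30, keeping it off cores 0 and 1
echo 4 > /proc/irq/30/smp_affinity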


The tool for controlling processor and scheduling affinity on RHEL and CentOS 7 is called tuna.
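A rough sketch of the equivalent with tuna (options as documented for the RHEL/CentOS 7 version; the IRQ number is again a placeholder, check tuna -h on your system):

# move all movable threads away from the cores reserved for guests
tuna --cpus=0,1 --isolate
# steer IRQ 30 onto CPU 2
tuna --irqs=30 --cpus=2 --move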