Adding vCPUs to an existing OS install

multiprocessing, virtualization, vmware-esxi

The VM configuration dialog in ESXi 5 warns me that if I change the number of vCPUs after the guest OS is installed, the sky will fall – ahem – it 'might make my virtual machine unstable.'

I know that certain CPU instructions involved in thread serialization will require a LOCK prefix in a multiprocessor system but not in a uniprocessor system (or at least not with a single core). The OS will generally omit LOCKs where they're not needed.
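
To make that concrete, here's a minimal C sketch (the names are mine and purely illustrative, not taken from any real kernel): the plain increment is the kind of operation a uniprocessor kernel can safely leave un-LOCKed, while the atomic builtin makes GCC or Clang emit the LOCK-prefixed form.

```c
#include <stdint.h>

static int32_t counter = 0;

/* On x86 this typically compiles to a single "incl"/"addl" on memory.
 * A single instruction can't be split by an interrupt, so on one core
 * no other thread ever sees it half-done; on SMP, two CPUs can execute
 * it concurrently, both read the old value, and one increment is lost. */
void inc_up(void)
{
    counter++;
}

/* The GCC/Clang builtin emits a LOCK-prefixed instruction on x86
 * (e.g. "lock xadd"), making the read-modify-write atomic across CPUs. */
void inc_smp(void)
{
    __sync_fetch_and_add(&counter, 1);
}
```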

If an OS ran a kernel that omits the LOCKs yet scheduled work onto multiple CPUs, that would lead to extreme instability and difficult-to-isolate bugs. But if the kernel were designed for one processor, what would it be doing using more than one (something it has to do knowingly)? That seems like a completely absurd OS design, and I'd hope it doesn't exist in practice.

A more plausible OS design would be to detect the CPUs at boot and pick either the uniprocessor or the multiprocessor kernel accordingly. Failing that, the only other sensible design would be to install the correct kernel at installation time; the uniprocessor kernel would then simply never use the other processor, so the only harm in an extra CPU would be that it goes unused.

Application software could get into trouble a little more easily, because it's easy to use multiple threads even on a single-core system; software that ignores the fact that it's running on a multiprocessor system and doesn't LOCK (or use the OS's synchronization facilities) could suffer horrible bugs. But would any serious software be so poorly designed as to test uni/multiprocessor status only during installation?
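
For comparison, well-behaved software queries the processor count at run time rather than caching it at install time. A minimal sketch, assuming a system where sysconf() supports the common _SC_NPROCESSORS_ONLN extension (Linux, the BSDs, and others):

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Re-queried on every start, so a vCPU added in the hypervisor is
     * picked up on the next boot instead of baked in at install time. */
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpus < 1)
        ncpus = 1; /* conservative fallback if the query fails */

    printf("online processors: %ld\n", ncpus);
    return 0;
}
```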

What is the reasoning behind the doomsday warning? On what, if any, OSes or applications should I actually expect problems?

Best Answer

Merely adding or removing vCPUs after installation of the guest OS is not an issue with any version of Linux or Windows that's recent enough to still be vendor supported. This warning dates back to the very early days of VMware and is mostly irrelevant now.

In the early days of Linux, though, the kernel had to be specifically compiled with SMP support, and occasionally UP kernels didn't like running on SMP/NUMA systems, or vice versa. Those days are mostly long forgotten.

These days Linux kernels are almost always compiled with SMP/NUMA support by default and run fine even on a single processor (modern x86 kernels even patch the LOCK prefixes out at boot when they find only one CPU, so there's essentially no overhead). This has been true for all of 2.6 and most, if not all, of 2.4.

Windows has behaved similarly since Server 2003. I wasn't able to quickly find definitive information on the Internet about how 2000 and NT 4.0 behaved, though I seem to recall that they could have trouble going from a single-CPU to a multi-CPU configuration: NT-era setup installed either a uniprocessor or a multiprocessor HAL, and switching later meant swapping the HAL by hand.

If you plan to P2V a very ancient system, though, it's possible you'll run into such issues.