Is a virtual machine slower than the underlying physical machine?

benchmark · cloud-computing · performance · virtualization

This question is quite general, but most specifically I'm interested in knowing whether a virtual machine running Ubuntu Enterprise Cloud will be any slower than the same physical machine without any virtualization. How much slower (1%, 5%, 10%)?

Has anyone measured the performance difference of a web server or DB server (virtual vs. physical)?

If it depends on configuration, let's imagine two quad-core processors, 12 GB of memory and a bunch of SSD disks, running 64-bit Ubuntu Enterprise Server, with just one virtual machine on top that is allowed to use all available resources.

Best Answer

The typical experience for a general-purpose server workload on a bare-metal / Type 1 hypervisor is around 1-5% CPU overhead and 5-10% memory overhead, with some additional overhead that varies with overall IO load. That is pretty consistent in my experience for modern guest OSes running under VMware ESX/ESXi, Microsoft Hyper-V and Xen where the underlying hardware has been appropriately designed. For 64-bit server operating systems running on hardware that supports the most current CPU hardware virtualization extensions, I would expect all Type 1 hypervisors to be heading toward that 1% overhead number. KVM's maturity isn't quite up to Xen's (or VMware's) at this point, but I see no reason to think it would be noticeably worse than them for the example you describe.
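
If you want to sanity-check this on your own hardware, a rough first step is to confirm the host CPU exposes the virtualization extensions and then run the same CPU-bound workload on bare metal and inside the guest. The sketch below is a minimal, illustrative Python example assuming a Linux host (it reads /proc/cpuinfo); the tight loop is not a rigorous benchmark, just a way to see the rough order of the overhead.

    # Minimal sketch (assumes Linux and Python 3): check for hardware
    # virtualization extensions and time a CPU-bound loop. Run the same
    # script on the bare-metal host and inside the guest and compare the
    # wall-clock times; iteration count and workload are illustrative only.
    import time

    def has_virt_extensions(cpuinfo_path="/proc/cpuinfo"):
        """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
        with open(cpuinfo_path) as f:
            flags = f.read().split()
        return ("vmx" in flags) or ("svm" in flags)

    def cpu_bound_work(iterations=10_000_000):
        """A tight integer loop; elapsed time approximates raw CPU throughput."""
        total = 0
        for i in range(iterations):
            total += i * i
        return total

    if __name__ == "__main__":
        print("Hardware virtualization extensions:", has_virt_extensions())
        start = time.perf_counter()
        cpu_bound_work()
        print(f"CPU-bound loop took {time.perf_counter() - start:.3f} s")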

For specific use cases, though, the overall/aggregate "performance" of a virtual environment can exceed that of bare-metal / discrete servers. Here's an example of a discussion on how a VMware clustered implementation can be faster/better/cheaper than a bare-metal Oracle RAC. VMware's memory management techniques (especially transparent page sharing) can eliminate the memory overhead almost entirely if you have enough VMs that are similar enough. The important thing in all these cases is that the performance/efficiency benefits virtualization can deliver are only realised if you consolidate multiple VMs onto hosts; your example (one VM on the host) will always be slower than bare metal to some degree.
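
To make the consolidation point concrete, here is a small back-of-the-envelope Python sketch. The per-VM overhead fraction and page-sharing ratio are illustrative assumptions, not measured numbers from VMware, Xen or KVM.

    # Rough model of why consolidation amortizes hypervisor memory overhead.
    # All figures (per-VM RAM, overhead fraction, shared fraction) are
    # illustrative assumptions.

    def effective_host_memory(num_vms, vm_ram_gb, hypervisor_overhead=0.07,
                              shared_fraction=0.0):
        """Estimate host RAM (GB) needed for num_vms similar guests.

        hypervisor_overhead: assumed per-VM memory overhead (roughly 5-10%).
        shared_fraction: portion of guest memory deduplicated by page sharing,
        which only helps when several similar VMs share a host.
        """
        raw = num_vms * vm_ram_gb * (1 + hypervisor_overhead)
        savings = num_vms * vm_ram_gb * shared_fraction if num_vms > 1 else 0
        return raw - savings

    # One big VM on the host: pays the overhead, gains nothing from sharing.
    print(effective_host_memory(1, 12))                      # ~12.8 GB for a 12 GB guest
    # Ten similar 2 GB VMs with 30% of pages shared: overhead is more than offset.
    print(effective_host_memory(10, 2, shared_fraction=0.3)) # ~15.4 GB instead of 20 GB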

While this is all useful, the real issues in server virtualization tend to be centered around management, high-availability techniques and scalability. A 2-5% CPU performance margin is not as important as being able to scale efficiently to 20, 40 or however many VMs you need on each host. You can deal with the performance hit by selecting a slightly faster CPU as your baseline, or by adding more nodes to your clusters, but if the host can't scale out the number of VMs it can run, or the environment is hard to manage or unreliable, then it's worthless from a server virtualization perspective.
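
As a rough illustration of the "slightly faster baseline CPU" trade-off, the short Python sketch below shows how little extra clock speed would be needed to absorb a 2-5% overhead, assuming throughput scales roughly linearly with clock (a simplification; the 2.4 GHz base figure is arbitrary).

    # Rough sketch of the "buy a slightly faster baseline CPU" trade-off.
    # Overhead figures are the 2-5% range discussed above; the base clock
    # is an arbitrary illustrative value.

    def required_clock(base_ghz, overhead):
        """Clock speed needed so guest throughput matches bare metal,
        assuming throughput scales linearly with clock (a simplification)."""
        return base_ghz / (1 - overhead)

    for overhead in (0.02, 0.05):
        print(f"{overhead:.0%} overhead: {required_clock(2.4, overhead):.2f} GHz "
              "instead of 2.40 GHz")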