How to determine actual virtual CPU configuration of ESX image

central-processing-unit multi-core vmware-esx

We have severe performance issues on an ESX guest, but from where I am I cannot view the server configuration directly, yet I want to find out whether it was set up correctly.

Running CPU-Z shows me two processors with one core each, while I have Yorkfield processors and hence expect four cores to show up. Is this the way virtual CPUs show up in CPU-Z?

How do I determine, from within the guest, how ESX has it configured? I understand that ESX can have HT enabled. My system uses SMP heavily, so it should benefit. From what I gather, the VM is configured for two vCPUs. Is my reading of the data correct?
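As a cross-check on what CPU-Z reports, the guest OS itself will tell you how many logical processors the hypervisor presents. A minimal sketch (Python, assuming you can run a script inside the guest; the `/proc/cpuinfo` part applies only to Linux guests):

```python
import os

# Number of logical processors the guest OS can schedule on.
# Inside a VM this is the vCPU count ESX presents, regardless of
# how many physical cores the host actually has.
logical = os.cpu_count()
print(f"logical CPUs visible to this OS: {logical}")

# On a Linux guest, /proc/cpuinfo also shows how those vCPUs are
# grouped into sockets (a presentation choice in ESX):
try:
    with open("/proc/cpuinfo") as f:
        info = f.read()
    sockets = {line.split(":")[1].strip()
               for line in info.splitlines()
               if line.startswith("physical id")}
    if sockets:
        print(f"sockets presented: {len(sockets)}")
except FileNotFoundError:
    pass  # e.g. a Windows guest; use CPU-Z or Task Manager there
```

If this prints 2, the hypervisor is presenting two vCPUs, whatever the host hardware is.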

EDIT: there are basically two causes for the CPU choking. First, the application(s) aren't perfect; we are working on that. Second, we don't have enough CPU power. The image comes from a comparable native machine and now runs as a virtual machine. Performance should be similar, but isn't. Normally we can handle 500+ sessions per native system; on this system, choking starts around 280 sessions. I know it's a bit short-sighted to blame the CPU config, but after two days of intensive monitoring and profiling everything really points in that direction (memory, disk I/O, db connections and network are performing fine).

Best Answer

If you are getting 280 sessions on a VM with two vCPUs and 500+ on the same physical box using all 4 cores, then you are coming close to bare-metal performance with that VM config. Reconfigure the VM to use 4 vCPUs and performance should come close, provided nothing else on the ESX host is consuming significant resources.

However, for a 4-vCPU VM running on a 4-core single-CPU host, the hypervisor overhead is going to take up some fraction of the overall capacity, and that can be quite significant. The ESX hypervisor will only schedule your VM when it can schedule all 4 vCPUs concurrently, so anything else running (hypervisor, Service Console, other VMs) will cause all 4 vCPUs to stall on a setup like this.

On a setup like this, if you can run your application across two dual-vCPU VMs, you may find that it scales better even with the added overhead of running an additional guest OS: the scheduling problem becomes much easier for the hypervisor to deal with, as only two cores stall when other tasks need access to CPU resources.
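For reference, the vCPU count lives in the VM's configuration; with the VM powered off it can be changed through the VI client or directly in the .vmx file. A sketch of the relevant entries (key names assume ESX 3.x/4.x; `cpuid.coresPerSocket` only exists from ESX 4.0 onward):

```ini
numvcpus = "4"
cpuid.coresPerSocket = "2"
```

Here `numvcpus` sets how many vCPUs the guest sees, and `cpuid.coresPerSocket` controls whether they are presented as, say, two dual-core sockets rather than four single-core ones.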

Each VMware vCPU equates to a single core on a multi-core system; that's how VMware carves up processor resources. Yorkfield is a Core 2 Quad and definitely does not support Hyper-Threading: it has 4 physical cores and no HT threads.

CPU-Z running in a VM will only report the number of vCPUs that are presented to the VM, although it will identify the underlying CPU correctly. Depending on the ESX version and how the VM is configured, it can present those as a dual-core single processor or as two separate single-core processors, but that has no impact on performance; it is simply a presentation choice used to accommodate certain licensing situations.
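You can see which presentation the VM is getting from inside the guest. A rough sketch that tallies logical CPUs per presented socket (assumes a Linux guest, where `/proc/cpuinfo` exposes the topology; on a Windows guest CPU-Z or Task Manager shows the same view):

```python
from collections import Counter

# Tally logical CPUs per socket as the hypervisor presents them.
# Two entries with one CPU each = two single-core sockets;
# one entry with two CPUs = one dual-core socket. Either way the
# performance is the same; only the presentation differs.
try:
    with open("/proc/cpuinfo") as f:
        lines = f.read().splitlines()
    sockets = Counter(line.split(":")[1].strip()
                      for line in lines
                      if line.startswith("physical id"))
    for sock, n in sorted(sockets.items()):
        print(f"socket {sock}: {n} logical CPU(s)")
except FileNotFoundError:
    print("no /proc/cpuinfo (not a Linux guest)")
```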

Edited to provide more accurate and current data:

The strict co-scheduling point made above was a bit of a red herring. ESX has used a relaxed co-scheduling mechanism since ESX 3 that allows some leeway (a bounded amount of skew between vCPUs), and that has improved with subsequent versions.

It is still generally true that a VM presenting as many vCPUs as there are physical cores will have more trouble being scheduled under load than VMs with fewer vCPUs, but the effect is not as dramatic as my original answer made it sound. A very detailed explanation of how it actually works can be found in this VMware white paper.