SQL Server – VMware NUMA Node Boundary Configuration

Tags: numa, sql-server, vmware-esxi

I have been trying to find out what the best VM configuration would be for our SQL Server 2012 instance on VMware ESXi 5.1.

Each VM host has 2 sockets with 4 cores per socket and Hyper-Threading enabled (16 logical CPUs total), and 48 GB of memory.

Our SQL Server VM has 8 vCPUs and 32 GB of memory allocated. When I look at the CPU configuration within SQL Server, it reports only one NUMA node. My concern is that the 32 GB memory allocation crosses the NUMA boundary, and from what I understand this will not give us the best performance.
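One way to confirm what SQL Server itself sees is to query the `sys.dm_os_nodes` DMV (available in SQL Server 2012). This is a sketch of such a check; the filter excludes the dedicated admin connection's hidden node:

```sql
-- How many NUMA nodes does SQL Server detect, and how many
-- online schedulers (CPUs) does each node have?
SELECT node_id,
       node_state_desc,
       online_scheduler_count
FROM sys.dm_os_nodes
WHERE node_state_desc <> 'ONLINE DAC';
```

If this returns a single row for an 8 vCPU / 32 GB guest, the VM is presenting one flat NUMA node to SQL Server, as described above.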

How can we configure our SQL Server VM to make the best use of the host's NUMA topology? One article suggested adding more vCPUs to the VM, another suggested configuring an advanced setting on the VM, and a third suggested reducing the memory allocation to 50% of the host's total (24 GB).

Any advice would be greatly appreciated.

Best Answer

You're correct that you typically want to align your guest so it either fits within a single NUMA node (narrow) or spans the nodes evenly (wide). If you do go wide in your current scenario (by memory or CPU), I agree that you're getting into a one-VM-per-host configuration.

It's hard to know whether narrow or wide is a better fit for the VM in question without knowing a great deal about the SQL Server and its bottlenecks. But it is generally true that extra memory is very effective at alleviating IO pressure on SQL Server, so I think your long-term plan of adding host memory so the guest can stay narrow while getting more memory is sound.

With your version of vSphere you do have the option to make your virtual machine NUMA-aware (vNUMA), but it requires a specific advanced configuration setting. It isn't done by simply setting the socket/core ratio on the VM.

The advanced setting you are looking for is "numa.vcpu.maxPerVirtualNode". Your server has two physical sockets, each with 4 hyper-threaded cores, so set this value to 4. That will cause VMware to place 4 vCPUs on each virtual NUMA node, mirroring the physical layout.
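As a sketch, the relevant entries in the VM's .vmx file (also settable via the vSphere Client under VM Options → Advanced → Configuration Parameters) might look like the following. The cpuid.coresPerSocket line is an assumption on my part, included so the guest's socket/core topology matches the two-node virtual NUMA layout; your exact values depend on your hardware:

```
numa.vcpu.maxPerVirtualNode = "4"
cpuid.coresPerSocket = "4"
```

With 8 vCPUs and a limit of 4 per virtual node, the guest should see two NUMA nodes, each backed by one physical socket.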

vNUMA is enabled by default only for guests with more than 8 vCPUs, so it wouldn't be on by default for yours. You will also want to keep all the hosts configured the same way, as migrating to hosts with different NUMA configurations could be bad news.

