How do we quantify the impact of a lower memory speed on a VMware ESXi host

memory, performance-monitoring, vmware-esxi

We have an IBM x3650 M2 server. It has 2 Intel X5570 CPUs (quad-core Nehalem). The current memory configuration is 12x 4GB DIMMs for a total of 48GB, running at 1066MHz.

To increase the amount of memory to 64GB I can populate the 4 free slots with 4GB DIMMs, but this has the side effect of dropping the memory speed to 800MHz. Our vendor strongly advises against this and suggests replacing some of the 4GB DIMMs with 8GB DIMMs instead, in order to keep the memory speed at 1066MHz.

I've got 4x 4GB DIMMs on loan from the vendor, so I'm looking for a way to quantify the difference.

The box is running ESXi 4.1 and it's in a cluster with 2 other ESXi hosts, both of which are newer X3650 M3 boxes.

What can I do to test whether the change in memory speed has a significant impact? If it matters, I can take this host out of the cluster for the duration of testing.

EDIT: I should have been clearer: I want to determine whether there's any impact on running VMs, so installing a "real" OS or running memtest86 won't do much to further my goal.

Best Answer

The impact is greater than it would be on an older CPU with no EPT support, since EPT's nested page-table walks add extra memory accesses on every TLB miss, but realistically the only way to determine whether it's going to affect your workload is to actually profile the workload. Don't take the hypervisor out of the equation, because the whole point is to test the setup you're actually running, not some hypothetical benchmark figure. Ignore Memtest86+, ignore bare-metal OSes; just find a virtual machine that's representative of a memory-intensive workload in your environment and beat the crap out of it.
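If you don't have an obviously representative VM to hand, a quick way to get a repeatable number out of a guest is a simple memory-streaming loop. Below is a rough sketch in Python/NumPy; the working-set size and iteration count are arbitrary assumptions, just make the buffer much larger than the CPU's L3 cache so the loop is actually hitting DRAM. Run it in the same VM, on this host, before and after the DIMM change and compare the numbers.

    import time
    import numpy as np

    # Working-set size in MiB (assumed value): keep it well above the CPU's
    # L3 cache so the copy streams through DRAM rather than cache.
    WORKING_SET_MIB = 1024
    ITERATIONS = 20

    def memory_copy_bandwidth(mib=WORKING_SET_MIB, iterations=ITERATIONS):
        """Rough memory-bandwidth estimate: repeatedly copy a large buffer."""
        n = mib * 1024 * 1024 // 8          # number of float64 elements
        src = np.random.rand(n)
        dst = np.empty_like(src)

        start = time.perf_counter()
        for _ in range(iterations):
            np.copyto(dst, src)             # reads src and writes dst once per pass
        elapsed = time.perf_counter() - start

        bytes_moved = 2 * src.nbytes * iterations   # one read + one write per pass
        return bytes_moved / elapsed / 1e9          # GB/s

    if __name__ == "__main__":
        print("Approximate copy bandwidth: %.2f GB/s" % memory_copy_bandwidth())

Run a copy of it in several VMs (or several processes) at once if you want to load more than one memory channel; a single thread usually won't saturate a Nehalem's memory controllers. It's still only a synthetic number, though, so treat your real workload's response times as the deciding figure.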

Guy is dead-on: most consolidated workloads are bound by the amount of memory in the host rather than by any other resource. The extra memory will probably help you by reducing memory contention, letting your memory-intensive VMs keep more of their RAM in use as cache instead of having it ballooned out under pressure.
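If you want to see how much ballooning is going on today (and after the change), the per-VM quick stats expose it. Here's a minimal pyVmomi sketch, assuming you have the pyvmomi package installed and can reach the host's API; the hostname and credentials below are placeholders:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def report_memory_pressure(host, user, password):
        # Typical lab hosts use self-signed certs, so skip verification here.
        ctx = ssl._create_unverified_context()
        si = SmartConnect(host=host, user=user, pwd=password, sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            for vm in view.view:
                qs = vm.summary.quickStats
                print("%s: ballooned=%s MB swapped=%s MB active=%s MB"
                      % (vm.name, qs.balloonedMemory, qs.swappedMemory,
                         qs.guestMemoryUsage))
        finally:
            Disconnect(si)

    if __name__ == "__main__":
        report_memory_pressure("esxi-host.example.com", "root", "secret")

If ballooned and swapped memory are already non-zero under your normal load, that's a strong hint that the extra (slower) RAM is the better trade.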