The behaviour you are seeing is due to the way that Linux allocates memory on a NUMA system.
I am assuming (without knowing) that the 32GB system is not NUMA, or not NUMA enough for Linux to care.
How Linux deals with NUMA is dictated by the /proc/sys/vm/zone_reclaim_mode
option. By default, Linux detects whether you are running on a NUMA system and changes the reclaim flags if it decides that would give better performance.
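You can check and change the setting at runtime; this is a minimal sketch using the standard sysctl interface (the vm.zone_reclaim_mode key is the same file shown above):

    # Read the current mode (0 = allocate from other nodes before reclaiming locally,
    # 1 = reclaim pages on the local node first)
    cat /proc/sys/vm/zone_reclaim_mode

    # Set it to 0 for the current boot; persist it via sysctl.conf if it helps
    sysctl -w vm.zone_reclaim_mode=0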
Memory is split up into zones; on a NUMA system there is a zone for the first CPU socket and a zone for the second. These show up as node0 and node1. You can see them if you cat /proc/buddyinfo.
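For example (the exact node and zone layout will differ on your machine):

    # Per-node free-page counts by order; one set of lines per NUMA node
    cat /proc/buddyinfo

    # If numactl is installed, this gives a friendlier summary of node sizes and distances
    numactl --hardware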
When the zone reclaim mode is set to 1, an allocation from the first CPU socket will cause reclaim to occur on the memory zone associated with that CPU, because it is more efficient in terms of performance to reclaim from the local NUMA node. Reclaim in this sense means dropping pages on that node, for example clearing the cache or swapping things out.
Setting the value to 0 means no reclaim occurs when the zone is filling up; instead the allocation goes into the foreign NUMA zone. This comes at the cost of a brief lock on the other CPU's memory zone to gain exclusive access to it.
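You can see how often allocations spill over to a non-preferred node with numastat:

    # Per-node allocation statistics; watch numa_miss and numa_foreign,
    # which count allocations satisfied on a node other than the preferred one
    numastat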
But then it instantly starts swapping! After a few seconds:
Mem:  66004536k total, 65733796k used, 270740k free, 34250384k buffers
Swap: 10239992k total, 1178820k used, 9061172k free, 91388k cached
Swapping behaviour, and when to swap, is determined by a few factors, one being how active the pages allocated to applications are. If they are not very active, they will be swapped out in favour of the busier work occurring in the cache. I assume that the pages in your VMs don't get touched very often.
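If you want to bias the kernel away from swapping out those mostly idle VM pages in favour of keeping cache, vm.swappiness is the usual knob to experiment with; treat the value below as a starting point to test, not a recommendation tuned to your workload:

    # Lower values make the kernel prefer dropping cache over swapping out
    # anonymous pages (the default is typically 60)
    cat /proc/sys/vm/swappiness
    sysctl -w vm.swappiness=10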