QEMU – Memory Usage of Disk Cache Writeback Mode

kvm-virtualization, qemu

I use Proxmox with ZFS zvols or LVM-thin for the disk volumes, and it is said that enabling disk cache writeback mode will increase memory usage beyond what is allocated to the VM.

Is there a way to check, limit, and increase the memory used by QEMU's disk cache in writeback mode on Linux KVM?

Best Answer

In writeback mode, QEMU/KVM writes through the host's pagecache, basically like any other userspace program. To get information about pagecache content and activity, you can run:

[user@localhost ~]$ grep -i "^cache\|dirty\|writeback:" /proc/meminfo
Cached:            84548 kB
Dirty:                 0 kB
Writeback:             0 kB

Examining the output, we can see:

  • Cached: the amount of memory used for read caching. If you read something in the guest, it ends up both in the host's pagecache and in the guest's own pagecache. Some see this double caching as a waste of resources; in reality, read-only cache can be immediately discarded under memory pressure. On the other hand, the host's cache is often much bigger than the guest's, so a net performance gain can be obtained by turning on QEMU writeback cache (vs direct access);

  • Dirty: the amount of to-be-written (i.e. changed) memory. To reclaim this memory, the system must write the changes out to disk. This means that, depending on the underlying I/O subsystem, dirty page reclaim can be slow;

  • Writeback: the amount of memory the system is currently writing out to disk, either due to memory pressure (dirty page reclaim) or due to timing (after at most 30 s, dirty pages are written back to disk).
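You can watch these counters move, and inspect the knobs that govern them, straight from procfs. A minimal sketch (the temp file path and 100 MB size are arbitrary choices for the demonstration; the /proc/sys entries shown are the standard vm.dirty_* sysctls):

```shell
# Generate ~100 MB of dirty pages without syncing, then inspect the counters
dd if=/dev/zero of=/tmp/dirty-demo bs=1M count=100 status=none
grep -E '^(Dirty|Writeback):' /proc/meminfo    # Dirty should now be elevated

# The sysctls that decide when writeback kicks in:
cat /proc/sys/vm/dirty_background_ratio  # % of RAM at which background flushing starts
cat /proc/sys/vm/dirty_ratio             # % of RAM at which writers are throttled
cat /proc/sys/vm/dirty_expire_centisecs  # max age of dirty pages; 3000 = the 30 s above

# Force writeback and reclaim:
sync
grep -E '^(Dirty|Writeback):' /proc/meminfo    # Dirty should drop back toward zero
rm -f /tmp/dirty-demo
```

Lowering vm.dirty_ratio / vm.dirty_background_ratio is one way to limit how much dirty cache the host lets accumulate before flushing, which is the closest thing to a "limit" on writeback memory usage.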

To summarize, the beauty of the pagecache is that it is automatically managed by the host system in response to memory pressure and other factors (e.g. various sysctl entries). As it often brings a net performance increase on the guest system, I configure most of my virtual machines with writeback cache.
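For reference, writeback caching is selected per drive with the cache= option on the QEMU command line; the disk image path, memory size, and VM id below are placeholders, not values from the original question:

```shell
# cache=writeback: guest writes complete once they reach the host pagecache
qemu-system-x86_64 \
    -m 2048 \
    -drive file=/var/lib/vm/disk0.qcow2,format=qcow2,if=virtio,cache=writeback

# On Proxmox the same setting is exposed as a per-disk option, e.g.:
# qm set <vmid> --scsi0 local-lvm:vm-<vmid>-disk-0,cache=writeback
```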

Some more info can be found here.