Given the output of top shown below (which lists all currently active processes): how is it possible that this virtual server reports ~500 MB of physical memory as used (used minus cached, verified with free), while the sum of the RES column is much lower than that? Shouldn't the sum of RES always be at least the used physical memory? What could the memory be in use for, and by what?
top - 08:43:23 up 75 days, 5:00, 1 user, load average: 0.08, 0.08, 0.04
Tasks: 29 total, 1 running, 28 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.1%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 2400000k total, 713792k used, 1686208k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 200764k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 19232 544 384 S 0.0 0.0 0:00.08 init
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd/201208
3 root 20 0 0 0 0 S 0.0 0.0 0:00.00 khelper/2012080
323 pdns-rec 20 0 169m 2980 1152 S 0.0 0.1 1:58.98 pdns_recursor
560 root 20 0 6160 200 196 S 0.0 0.0 0:00.00 portreserve
578 dbus 20 0 21536 324 284 S 0.0 0.0 0:00.00 dbus-daemon
598 nobody 20 0 17040 2108 880 S 0.0 0.1 4:48.33 openvpn
607 nobody 20 0 16124 1112 724 S 0.0 0.0 0:41.54 openvpn
1347 root 20 0 78732 592 508 S 0.0 0.0 0:12.35 master
1355 postfix 20 0 78984 612 516 S 0.0 0.0 0:02.63 qmgr
1515 root 20 0 9232 564 396 S 0.0 0.0 0:39.69 gam_server
11077 root 16 -4 10648 272 268 S 0.0 0.0 0:00.00 udevd
12912 root 20 0 168m 57m 3040 S 0.0 2.4 0:56.31 puppetd
13187 root 20 0 11308 1224 1220 S 0.0 0.1 0:00.00 mysqld_safe
13295 mysql 20 0 1348m 44m 4232 S 0.0 1.9 246:49.61 mysqld
13391 root 20 0 66608 516 404 S 0.0 0.0 0:00.00 sshd
13411 root 20 0 20468 740 644 S 0.0 0.0 0:00.71 crond
13452 root 20 0 243m 1428 820 S 0.0 0.1 0:00.36 rsyslogd
16087 app 20 0 14896 1096 880 R 0.0 0.0 0:00.00 top
18993 newrelic 20 0 25764 124 84 S 0.0 0.0 0:00.00 nrsysmond
18994 newrelic 20 0 103m 1540 1036 S 0.0 0.1 0:46.22 nrsysmond
23268 postfix 20 0 81356 3384 2508 S 0.0 0.1 0:00.00 pickup
29550 root 20 0 387m 3516 1088 S 0.0 0.1 2:25.48 fail2ban-server
31434 root 20 0 96264 2408 2376 S 0.0 0.1 0:00.00 sshd
31438 user 20 0 96264 816 640 S 0.0 0.0 0:00.25 sshd
31439 user 20 0 105m 1264 1260 S 0.0 0.1 0:00.00 bash
31456 root 20 0 165m 1768 1764 S 0.0 0.1 0:00.00 sudo
31457 root 20 0 138m 1020 1016 S 0.0 0.0 0:00.00 su
31458 app 20 0 13440 3332 1492 S 0.0 0.1 0:00.07 bash
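To make the gap concrete, here is a quick sketch that sums the RES column copied from the top output above and compares it with "used minus cached" from the Mem:/Swap: summary lines. It assumes top's m suffix means MiB (the default binary units):

```python
# Sum the RES column from the top output above and compare it with
# "used - cached". Values are copied from the table; "57m"/"44m" are MiB.
res_values = [
    "544", "0", "0", "2980", "200", "324", "2108", "1112", "592", "612",
    "564", "272", "57m", "1224", "44m", "516", "740", "1428", "1096",
    "124", "1540", "3384", "3516", "2408", "816", "1264", "1768",
    "1020", "3332",
]

def to_kib(value):
    """Convert a top RES field to KiB ('57m' -> 57 * 1024)."""
    if value.endswith("m"):
        return int(value[:-1]) * 1024
    return int(value)

res_total = sum(to_kib(v) for v in res_values)
used_minus_cached = 713792 - 200764  # from top's Mem:/Swap: summary lines

print("sum(RES)      =", res_total, "KiB")        # ~134 MiB
print("used - cached =", used_minus_cached, "KiB")  # ~500 MB
```

The sum of RES comes out around 134 MiB, leaving roughly 370 MB of "used" memory unaccounted for by any process.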
Note: what I'm asking is exactly the reverse of what people are commonly asking.
Requested cat /proc/meminfo output:
MemTotal: 2400000 kB
MemFree: 1682660 kB
Cached: 202948 kB
Active: 132988 kB
Inactive: 180704 kB
Active(anon): 19668 kB
Inactive(anon): 91076 kB
Active(file): 113320 kB
Inactive(file): 89628 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 4 kB
Writeback: 0 kB
AnonPages: 110744 kB
Shmem: 2568 kB
Slab: 2268056 kB
SReclaimable: 2260740 kB
SUnreclaim: 7316 kB
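The ~500 MB figure can be reproduced from the meminfo fields above. A minimal sketch (values hardcoded from that output, all in KiB) computing what free reports as used minus cached:

```python
# Recompute "used - cached" from the /proc/meminfo values above (all KiB).
meminfo = {
    "MemTotal": 2400000,
    "MemFree": 1682660,
    "Cached": 202948,
    "Buffers": 0,  # top's summary shows 0k buffers
}

used = meminfo["MemTotal"] - meminfo["MemFree"]  # what 'free' calls "used"
used_minus_cached = used - meminfo["Cached"] - meminfo["Buffers"]

print(used_minus_cached, "KiB")  # the ~500 MB in question
```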
I believe the virtualization technology is Virtuozzo.
Best Answer
Specifically related to VMs, RAM usage looks different when you're "outside, looking in" versus "inside, looking around".
Why does a VM report ~500 MB used while top shows much less RAM in use? If the VM is reporting ~500 MB used (in the VM manager, I'm assuming), yet the total of RES is much lower, that's expected. A VM (be it VMware, a Virtuozzo container, Hyper-V, or a Solaris Zone) consumes more RAM than just what its processes use. The "overhead" needed to actually run the VM is lumped in with that metric: virtual device mappings, virtual device caches (such as network and disk), kernel space, etc.
For completeness' sake, I'm going to provide a brief explanation of the "opposite" problem most people ask about (which is how I found this question).
Top's "Resident Set" size (RES) appears bigger than physical memory. Resident Set size is calculated by the kernel as the sum of "anonymous" (MM_ANONPAGES) pages and "file-backed" (MM_FILEPAGES) pages. Since a single shared page can be counted multiple times (it is attributed to every process that maps it), summing RES in top across every running process can make it look like more memory is consumed than physically exists.
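A toy model of that double counting (the page addresses and process names are hypothetical, not real kernel data): each process's RES includes every page mapped into it, shared or not, so the per-process sum can exceed the count of unique physical pages:

```python
# Hypothetical example: three processes mapping overlapping sets of pages.
# RES counts a shared page once per process; physical RAM holds it once.
PAGE_KIB = 4

proc_pages = {
    "app1": {0x1000, 0x2000, 0x3000},  # 0x1000/0x2000: shared library pages
    "app2": {0x1000, 0x2000, 0x4000},
    "app3": {0x1000, 0x5000},
}

# Summing per-process RES counts shared pages repeatedly...
sum_res = sum(len(pages) for pages in proc_pages.values()) * PAGE_KIB

# ...while physical memory only holds the union of unique pages.
actual = len(set().union(*proc_pages.values())) * PAGE_KIB

print("sum of RES:", sum_res, "KiB")   # 8 page mappings counted
print("physical:  ", actual, "KiB")    # 5 unique pages resident
```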