Linux – Easiest way to see Linux memory usage when a process is killed

linux, memory usage

I've got a build server where dmesg reports that the kernel is killing processes because the system is running out of memory. Since the system runs many builds and other processes concurrently, I need to figure out which process or processes are really using too much memory, i.e. I'm not convinced the process being killed is the one hogging the memory.

Ideally I'd like to dump the memory usage at the point when the out-of-memory killer kicks in, with the full command line of each process. Is there a way to do this? Alternatively, if I can't dump it at that specific point, I plan to set up a cron job to dump memory usage every minute or two, but I still need some help to get the correct output.
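Something like this is what I have in mind for the cron route (only a sketch; the log path and the 25-line cutoff are arbitrary placeholders):

# Every minute, log the top memory consumers with their full command lines
# (hypothetical log path; rotate or prune it yourself).
* * * * *  ps -eo pid,user,rss,vsz,args --sort=-rss | head -n 25 >> /var/log/mem-snapshots.log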

The output from smem is pretty good, but it truncates the command line:

PID User     Command                         Swap      USS      PSS      RSS
39090 user   /usr/bin/Xvfb +extension RA     4732      144      148      264
20837 user   -bash                              0      780     1100     2144
21144 user   python /usr/bin/smem               0    12120    12320    13248
19224 user   /opt/atlassian/bamboo_home/        0   234940   235303   237144
12414 user   /usr/java/jdk1.8.0_121/bin/   176128  2249180  2249338  2250428

Is there a way to tell smem to show the full command line? Alternatively, is there a simple way of piping the output to show me what I need? I can pipe into xargs and ps to get the full command line like this:

smem -H -c "pid" | xargs ps

However, that way I lose the memory usage values from smem.
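The closest I've got is a sketch like the one below; it assumes smem accepts the column names pid, uss, pss and rss, and it runs one ps per PID so it's slow, but it keeps smem's figures next to the untruncated command line:

smem -H -c "pid uss pss rss" | while read pid uss pss rss; do
    # keep smem's memory figures, then let ps print the full command line
    printf '%8s %8s %8s %8s ' "$pid" "$uss" "$pss" "$rss"
    ps -o args= -p "$pid"
done

The figures stay in kilobytes, which is what smem prints by default.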

Best Answer

If you have the process name, you might find something like the answer here quite useful: Finding Average size of single Apache process for setting MaxClients

You can replace httpd at the beginning of that command with your process name, and it will show the total memory usage of all processes with that name on the first line and their average memory usage on the second line. Hope this helps! :)
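For reference, a command in that spirit (only a sketch, not necessarily the exact invocation from the linked answer) would be:

# Sum and average resident memory (RSS) for every process named httpd,
# reported in MB; replace httpd with your own process name.
ps -C httpd -o rss= | awk '{sum += $1; n++} END {if (n) printf "Total: %.1f MB\nAverage: %.1f MB\n", sum/1024, sum/(n*1024)}'

On the build server in the question you would most likely point it at java or the Bamboo agent process rather than httpd.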