(1) I see that each of the running processes occupies a very small percentage of memory (%MEM is no more than 0.2%, and mostly just 0.0%), yet the fourth line of output shows the total memory as almost entirely used ("Mem: 130766620k total, 130161072k used, 605548k free, 919300k buffers"). The sum of the per-process memory percentages seems unlikely to come anywhere near 100%, doesn't it?
To see how much memory you are currently using, run free -m. It will produce output like:
                   total    used    free  shared  buffers  cached
Mem:                2012    1923      88       0       91     515
-/+ buffers/cache:          1316     695
Swap:               3153     256    2896
The top-row 'used' value (1923) will almost always be close to the top-row total (2012), because Linux likes to use any spare memory to cache disk blocks (the 515 in the 'cached' column).
The key figure to look at is the used value in the -/+ buffers/cache row (1316): that is how much memory your applications are actually using. For best performance, this number should be less than your total memory (2012). To prevent out-of-memory errors, it needs to be less than the total of memory (2012) and swap space (3153).
If you wish to quickly see how much memory is free, look at the free value in the -/+ buffers/cache row (695). This is the total memory (2012) minus the actual used (1316). (2012 - 1316 = 696, not 695; the difference is just rounding.)
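The -/+ buffers/cache arithmetic can also be reproduced directly from /proc/meminfo. Here is a rough sketch using the standard fields (MemTotal, MemFree, Buffers, Cached, all reported in kB); note that modern kernels additionally expose a MemAvailable field, which accounts for reclaimable memory more accurately:

```shell
# Mirror the -/+ buffers/cache row: "apps used" = total - free - buffers - cached.
# All /proc/meminfo values are in kB; divide by 1024 for MB.
awk '/^MemTotal:/ {t=$2}
     /^MemFree:/  {f=$2}
     /^Buffers:/  {b=$2}
     /^Cached:/   {c=$2}
     END {printf "apps used: %d MB, really free: %d MB\n",
                 (t-f-b-c)/1024, (f+b+c)/1024}' /proc/meminfo
```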
(2) How should I understand the load average on the first line ("load average: 14.04, 14.02, 14.00")?
This article on load average uses a nice traffic analogy and is the best one I've found so far: Understanding Linux CPU Load - when should you be worried? In your case, as others pointed out:
On a multi-processor system, the load is relative to the number of processor cores available. The "100% utilization" mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.
So, with a load average of 14.00 and 24 cores, your server is far from being overloaded.
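As a quick check, you can divide the 1-minute load average by the core count yourself. A minimal sketch using /proc/loadavg and nproc, both standard on Linux:

```shell
# Read the 1-minute load average and express it as a percentage of
# the machine's total core capacity (1.00 per core = 100%).
load=$(cut -d' ' -f1 /proc/loadavg)
cores=$(nproc)
awk -v l="$load" -v c="$cores" \
    'BEGIN { printf "load %.2f across %d cores = %.0f%% of capacity\n", l, c, 100*l/c }'
```

For the server in the question this works out to 14.00 across 24 cores, i.e. roughly 58% of capacity.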
Check the VmPeak out of /proc:
$ grep ^VmPeak /proc/*/status | sort -n -k2 | tail
/proc/32253/status:VmPeak: 86104 kB
/proc/5425/status:VmPeak: 86104 kB
/proc/9830/status:VmPeak: 86200 kB
/proc/8729/status:VmPeak: 86248 kB
/proc/399/status:VmPeak: 86472 kB
/proc/19084/status:VmPeak: 87148 kB
/proc/13092/status:VmPeak: 88272 kB
/proc/3065/status:VmPeak: 387968 kB
/proc/26432/status:VmPeak: 483480 kB
/proc/31679/status:VmPeak: 611780 kB
This should show which PID has tried to consume the most virtual memory and should point at the source of the usage. If you don't see a massive amount of memory in this list, then you need to look at the rest of the numbers in /proc/meminfo.
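For that, a quick way to pull out the headline fields (the field names are the standard Linux ones; Slab in particular can reveal kernel-side memory use that no process accounts for):

```shell
# Show the main system-wide memory counters in one go.
grep -E '^(MemTotal|MemFree|Buffers|Cached|Slab|SwapTotal|SwapFree):' /proc/meminfo
```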
If you are looking for more information about a process's I/O access and CPU usage, you could look at iotop. It provides per-process information like top, but for input/output. iotop uses the information in the per-process /proc files; for example, for process 16528:
cat /proc/16528/io
rchar: 48752567
wchar: 549961789
syscr: 5967
syscw: 67138
read_bytes: 49020928
write_bytes: 549961728
cancelled_write_bytes: 0
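If you can't run iotop itself, the same per-process counters can be ranked by hand. A rough sketch that lists the top writers by write_bytes (reading other users' /proc/<pid>/io needs root; unreadable entries are silently skipped):

```shell
# Rank processes by bytes actually sent to the storage layer.
for f in /proc/[0-9]*/io; do
  pid=${f#/proc/}; pid=${pid%/io}
  wb=$(awk '/^write_bytes:/ {print $2}' "$f" 2>/dev/null)
  [ -n "$wb" ] && printf '%s\t%s\n' "$wb" "$pid"
done | sort -rn | head
```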
I know it's also possible to run it in batch mode, like top:
iotop -botqqq --iter=3 >> /var/log/iotop
You can also look at dstat, but like top it is global to the system, not specific to one process.
Neither tool gives you thread lock information.
If you are only interested in Java, maybe look at jconsole; it uses the ThreadMXBean getThreadCpuTime() method.