(1) I see that each running process occupies a very small percentage of memory (%MEM is no more than 0.2%, and mostly 0.0%), so how is the total memory almost entirely used, as in the fourth line of output ("Mem: 130766620k total, 130161072k used, 605548k free, 919300k buffers")? Summing the memory percentages over all processes seems unlikely to reach almost 100%, doesn't it?
To see how much memory you are currently using, run free -m. It will produce output like:
                   total       used       free     shared    buffers     cached
Mem:                2012       1923         88          0         91        515
-/+ buffers/cache:             1316        695
Swap:               3153        256       2896
The top row's 'used' value (1923) will almost always nearly match the top row's total value (2012), since Linux likes to use any spare memory to cache disk blocks (515 cached here).
The key figure to look at is the buffers/cache row's used value (1316): this is how much memory your applications are actually using. For best performance, this number should be less than your total memory (2012). To prevent out-of-memory errors, it needs to be less than the total of memory (2012) and swap space (3153).
If you wish to quickly see how much memory is free, look at the buffers/cache row's free value (695). This is the total memory (2012) minus the actual used (1316). (2012 - 1316 = 696 rather than 695; the discrepancy is just rounding, since free -m rounds each field to whole megabytes.)
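As a sanity check, the arithmetic above can be reproduced directly. This is a minimal sketch using the sample figures from the free -m output; because free -m rounds each field to whole megabytes independently, the sums can be off by one:

```python
# Figures (in MiB) from the sample `free -m` output above.
total, used, free = 2012, 1923, 88
buffers, cached = 91, 515

# "-/+ buffers/cache" row: memory genuinely used by applications,
# and memory that becomes available once caches are reclaimed.
app_used = used - buffers - cached    # reported as 1316
available = free + buffers + cached   # reported as 695

# Allow an off-by-one because each field is rounded separately.
assert abs(app_used - 1316) <= 1
assert abs(available - 695) <= 1
assert abs(total - (used + free)) <= 1
```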
(2) How should I understand the load average on the first line ("load average: 14.04, 14.02, 14.00")?
This article on load average uses a nice traffic analogy and is the best one I've found so far: Understanding Linux CPU Load - when should you be worried?. In your case, as people pointed out:
On a multi-processor system, the load is relative to the number of processor cores available. The "100% utilization" mark is 1.00 on a single-core system, 2.00 on a dual-core, 4.00 on a quad-core, etc.
So, with a load average of 14.00 and 24 cores, your server is far from being overloaded.
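In other words, a quick sketch using the figures from the question:

```python
# One-minute load average and core count from the question.
load_1min = 14.04
cores = 24

# Load per core below 1.0 means the CPUs are not saturated.
per_core = load_1min / cores   # ~0.59; the "100%" mark here is 24.0
assert per_core < 1.0
```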
top's figure in the %MEM column counts the amount of RSS memory (Resident Set Size: pages physically in memory that hold real data) as a percentage of the total physical memory in your machine or VPS.
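As a rough sketch of that calculation, using the MemTotal from the question's top output and a hypothetical per-process RSS value:

```python
# %MEM is roughly RSS as a fraction of total physical memory.
mem_total_kb = 130_766_620   # "Mem: 130766620k total" from the question
rss_kb = 261_533             # hypothetical process RSS (~255 MiB)

percent_mem = 100 * rss_kb / mem_total_kb
assert round(percent_mem, 1) == 0.2   # matches the largest %MEM seen in top
```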
On the other hand, free counts just that: the number of physical memory pages that hold no data and have not been assigned to buffers, cache, or the kernel. In a Unix-like operating system, the OS tries hard to keep that number as low as possible by using free pages for disk cache. The only time you'll likely see a high value of free memory is just after your machine boots, or after you quit a program that was itself consuming a large amount of physical memory.
Is this memory usage normal? The short answer is yes. It is typical for Unix programs to allocate (that is, ask the OS for) significantly more memory than they will ever use. If you look at the virtual size (VSZ) column, the total for the processes listed is over 463 MB. That is because:
- A lot of the memory accounted against each process is physically mapped to the same shared library, say glibc
- The OS generally overcommits memory to the application, on the basis that most applications never come to collect on what they have asked for.
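To see both numbers for a process on Linux, you can read VmSize (virtual, everything asked for) and VmRSS (resident, pages actually backed by RAM) from /proc/&lt;pid&gt;/status. Here is a minimal parsing sketch; the sample text is hypothetical, and on a real host you would pass open("/proc/self/status").read() instead:

```python
# Pull VmSize and VmRSS (both in kB) out of /proc/<pid>/status text.
def vm_fields(status_text):
    fields = {}
    for line in status_text.splitlines():
        if line.startswith(("VmSize:", "VmRSS:")):
            key, value = line.split(":")
            fields[key.strip()] = int(value.split()[0])  # value is in kB
    return fields

# Hypothetical sample of a /proc/<pid>/status excerpt.
sample = """\
Name:   java
VmSize:   4833280 kB
VmRSS:     263144 kB
"""

vm = vm_fields(sample)
# Virtual size is always at least as large as the resident set:
# shared libraries and overcommitted allocations inflate VmSize.
assert vm["VmSize"] >= vm["VmRSS"]
```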
Figuring out process memory usage is more an art than a science, IMHO; see the discussions on http://lwn.net. My advice is to keep a close eye on iostat -xm and ensure that your machine is not swapping heavily.
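A quick way to check swap usage without iostat is /proc/meminfo on Linux. A minimal sketch, where the sample values are hypothetical and a real host would read open("/proc/meminfo").read():

```python
# Compute swap in use (kB) from /proc/meminfo-style text.
def swap_used_kb(meminfo_text):
    values = {}
    for line in meminfo_text.splitlines():
        key, rest = line.split(":", 1)
        values[key] = int(rest.split()[0])  # values are in kB
    return values["SwapTotal"] - values["SwapFree"]

# Hypothetical sample of a /proc/meminfo excerpt.
sample = """\
SwapTotal:  3228668 kB
SwapFree:   2966528 kB
"""

# ~256 MiB of swap in use; a large or steadily growing number here
# suggests the machine is under memory pressure.
assert swap_used_kb(sample) == 262140
```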
Best Answer
It would be better to use ps with head, e.g. ps aux --sort=-rss | head.
The RSS field shows physical memory usage in kB.
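As a sketch of what that ps-plus-head inspection amounts to, here is the sort-by-RSS step in Python; the sample ps lines are hypothetical:

```python
# Sort ps aux-style rows by their RSS column (index 5, in kB),
# largest first, and keep the top few -- what --sort=-rss | head does.
sample = """\
USER PID %CPU %MEM    VSZ    RSS COMMAND
root   1  0.0  0.0  19232   1524 init
mysql 42  0.3  0.2 983040 262144 mysqld
web   77  0.1  0.1 491520 131072 nginx
"""

header, *rows = sample.splitlines()
by_rss = sorted(rows, key=lambda r: int(r.split()[5]), reverse=True)
top3 = by_rss[:3]

# The biggest physical-memory consumer in the sample is mysqld.
assert top3[0].split()[-1] == "mysqld"
```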