Top, that is the figure in the %MEM column, is counting RSS memory (Resident Set Size, basically the pages physically in memory that hold real data) as a percentage of the total physical memory in your machine or VPS.
On the other hand, free is counting just that: the number of physical memory pages that have no data on them and have not been assigned to buffers, cache, or the kernel. In a Unix-like operating system, the OS tries hard to keep that number as low as possible by using free pages for disk cache. The only time you'll likely see a high value of free memory is just after your machine boots, or if you quit a program that was itself consuming a large amount of physical memory.
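The distinction is easy to see on any Linux box by reading /proc/meminfo directly, which is where free and top get their numbers. A minimal sketch (Linux-specific; the field names are the standard /proc/meminfo keys):

```python
# Read /proc/meminfo to compare truly "free" pages against reclaimable
# cache. Note how small MemFree usually is relative to Cached.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are in kB
    return info

m = meminfo()
print("MemTotal: %d kB" % m["MemTotal"])
print("MemFree:  %d kB" % m["MemFree"])
print("Cached:   %d kB" % m["Cached"])
```

On a long-running machine you will typically see Cached dwarf MemFree, which is exactly the "keep free as low as possible" behaviour described above.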
Is this memory usage normal? The short answer is yes. It is typical for Unix programs to allocate (that is, ask the OS for) significantly more memory than they will actually use. If you look at the VSS (virtual size, shown as VIRT in top) column, the total for the processes listed is over 463 MB. That is because:
- A lot of the memory accounted against each process will be physically mapped to the same shared library, say glibc
- The OS generally overcommits memory to the application, on the basis that most applications never come to collect on what they have asked for.
Figuring out process memory usage is more an art than a science, IMHO; see the discussions on http://lwn.net. My advice is to keep a close eye on iostat -xm and ensure that your machine is not swapping heavily.
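One quick way to check for swapping, besides iostat, is to sample the kernel's cumulative swap-in/swap-out page counters in /proc/vmstat twice and compare. A steadily growing pswpout between samples is the warning sign. A minimal sketch (Linux-specific):

```python
import time

def swap_counters():
    # /proc/vmstat is a list of "name value" pairs; pswpin/pswpout are
    # cumulative counts of pages swapped in/out since boot.
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            counters[key] = int(value)
    return counters.get("pswpin", 0), counters.get("pswpout", 0)

before = swap_counters()
time.sleep(0.5)
after = swap_counters()
print("pages swapped in: %d, out: %d"
      % (after[0] - before[0], after[1] - before[1]))
```

A healthy server should show zero (or near-zero) deltas under normal load; sustained nonzero swap-out during traffic is the signal to lower MaxClients.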
The main parameter for tweaking Apache's memory usage is MaxClients. Too low a value and you will run out of available slots to serve client requests. Too high and you will use up all your RAM and begin to use swap space, which will kill performance (it may even appear to be a server crash).
One way of tuning MaxClients is to observe the system's memory usage and tweak the setting up or down as needed. If the server begins to swap, lower it. If the server has free memory, raise it.
You can also estimate the maximum value by looking at Apache's memory usage. Start top and press M to sort processes by memory. You should see something like:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18698 apache 17 0 141m 59m 41m S 0.0 1.6 4:57.46 httpd
18591 apache 17 0 141m 59m 41m S 0.0 1.5 4:54.79 httpd
22917 apache 16 0 141m 57m 39m S 0.0 1.5 4:57.44 httpd
18595 apache 16 0 142m 57m 38m S 0.0 1.5 5:23.43 httpd
18697 apache 16 0 139m 56m 41m S 0.0 1.5 5:09.29 httpd
18735 apache 25 0 141m 56m 38m S 0.0 1.5 5:05.32 httpd
Subtract the SHR column from the RES column to get the approximate per-instance memory usage (shared pages such as libraries are only counted once). In this case it is around 16 MB. If I have 4 GB of RAM and want 3 GB of it used for Apache, my MaxClients setting will be around:
MaxClients = 3000/16 ≈ 188
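The arithmetic above can be sketched as a couple of lines of Python; the 3000 MB budget and the 16 MB per-child figure are the example values from this answer, not universal constants:

```python
def per_child_mb(res_mb, shr_mb):
    # Approximate private memory per httpd child: RES minus SHR.
    return res_mb - shr_mb

def estimate_max_clients(ram_budget_mb, child_mb):
    # Integer division deliberately rounds down, erring on the safe side.
    return ram_budget_mb // child_mb

print(per_child_mb(59, 41))             # first row of the top output above
print(estimate_max_clients(3000, 16))   # → 187 (187.5 rounded down)
```

Treat the result as a ceiling to start below, not a target, since per-child memory grows over a process's lifetime.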
So, in this case, I might start with a value of 150-200, but I would watch the memory usage, and if it ever began to get close to using swap I would decrease MaxClients by 10-20%. Also note that the 3 GB figure is just an example. On servers running only Apache I might be able to use almost all of the 4 GB. In other cases I might only want 1 or 2 GB for Apache, saving the rest for other applications, the system, or the cache.
Edit: Answering Additional Questions
There are generally no magic values of MaxClients or the other Apache configuration parameters that will make your server suddenly twice as fast. Some servers will appear to run just fine whether MaxClients is 10 or 1000. There are two main cases where the MaxClients setting is "bad":
- Too Low: When MaxClients is too low you will reach a situation where all Apache clients are in use and new connections go into a queue, waiting for the next client to become available. If you enable Apache's mod_status you can get a real-time view of how many clients are busy at any one point in time. This state is relatively easy to diagnose, as the site will become slow during times of high traffic and all clients can be observed to be in use.
- Too High: When MaxClients is too high you will exhaust all RAM and begin to use swap. When this occurs your site's performance will drop to essentially zero (consider the speed difference between RAM and disk). This state can be much more difficult to observe and diagnose, as a server will run just fine with a high MaxClients until it experiences a spike in traffic. For example, on a site that gets a few hits an hour I can set MaxClients to 1000, far more than the RAM can support, but never see an issue because Apache only needs one or two clients at a time. I'll only spot the problem when a spike in traffic increases the number of clients used concurrently until RAM is exhausted and swap space is needed.
While I don't know the details of your server, application, or traffic I can suggest the following configuration values as a starting point. Try them, monitor the server's load and usage, and change settings as needed.
- mod_status: Enable this so you can see Apache's usage. For more advanced statistics install a monitoring application like Zabbix/Nagios so you can track server usage and traffic patterns.
- MaxClients: Set to a value of 100-200. I would start with a lower value if unsure and monitor memory/CPU/Apache usage. This will be the main parameter to tweak.
- MaxRequestsPerChild: This specifies when an Apache client/child will be restarted. There is no wrong value (though very small values may be inefficient) and it will depend on what content you're serving. For dynamic content a large non-zero value (say 1000) will stop your httpd processes from eventually becoming too large.
- Other Parameters: While I haven't done thorough benchmarking of the remaining parameters they should have a relatively minor effect unless you set them to very low or very high values. Using the defaults should be fine for the majority of sites. See the Apache Prefork or Worker module documentation for a complete description of the parameters and which is used in each module (there is no point trying to tune a parameter you don't use).
- Benchmarking: As you adjust parameters I would recommend using a benchmarking tool like ab (ApacheBench) or siege to get a quantitative number on your server's capabilities. Relying solely on feel, or worse, on seeing whether it crashes or not, is not a good method to tune a web server's parameters.
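Put together, the starting values above might look like the following prefork fragment (directive names are from the Apache prefork MPM; the specific numbers are illustrative starting points, not recommendations for your exact workload):

```apache
<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild 1000
</IfModule>
```

After applying a change, reload Apache, watch memory and mod_status under real or benchmarked load, and adjust MaxClients from there.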
Best Answer
CPU load depends on application logic. Regarding RAM: the RES figure is inaccurate and won't show you the real usage of physical memory. I recommend measuring memory with https://raw.github.com/pixelb/ps_mem/master/ps_mem.py and checking your application logic.