VPS – httpd processes using more memory over time

apache-2.2, centos6, vps

I am using a CentOS 6 VPS with 3 GB of memory. If I reboot it, I get about 4 or 5 httpd processes running, each using about 2.5% of memory (86m in the RES column of the top command).

I am running just one website which is not live yet so I am the only one connecting to it.

However, every day I see the httpd memory percentage go up by about 0.3 or 0.4, which means that after 4 or 5 days those httpd processes are using about 4% of memory (130m in the RES column of top). I do not see any errors in the logs and everything works correctly, but if I left the server without rebooting for 2 weeks I would run out of memory.
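For reference, I watch the per-process figures with something like the following (any equivalent ps invocation should do; this is just the one-liner I use rather than anything specific to this server):

# list httpd processes with their resident memory (RSS, in KB), largest first
ps -C httpd -o pid,rss,pmem,cmd --sort=-rss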

A way to reproduce it is to use the ab command. For instance, if I run:

ab -c 2000 -t 60 http://xxx.xxx.xxx.xxx/

After running it, each of the httpd processes is using about 0.3 or 0.4 percentage points more memory than before the test.
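As a rough illustration, the before/after comparison I do looks like this (the commands are my own generic sketch, not any tooling on the server):

# snapshot httpd RSS before the load test
ps -C httpd -o pid,rss --no-headers > before.txt
ab -c 2000 -t 60 http://xxx.xxx.xxx.xxx/
# snapshot again afterwards and compare
ps -C httpd -o pid,rss --no-headers > after.txt
diff before.txt after.txt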

Again I do not see any errors in the logs.

Is this normal?


I have been doing more testing and research. My values:

KeepAlive Off

<IfModule prefork.c>
StartServers       1
MinSpareServers    1
MaxSpareServers    5
ServerLimit        15
MaxClients        15
MaxRequestsPerChild  2000
</IfModule>

This seems to be OK, and I always have about 500 MB of memory to spare (at least when the server has just been rebooted). The issue is that the five httpd processes which are always alive keep increasing in size, so when traffic hits the server and more child processes are created, they take on the size of the parent httpd process. So if the parent httpd processes are 120 MB, all the child processes will be 120 MB. It does not matter how small MaxRequestsPerChild is, because each new child process takes as much memory as the previous one. Any advice?
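For what it's worth, I estimate the total Apache footprint with a generic ps/awk one-liner along these lines (keeping in mind that RSS counts pages shared between parent and children, so the sum overstates real usage):

# approximate total resident memory of all httpd processes, in MB
ps -C httpd -o rss= | awk '{sum+=$1} END {printf "%.0f MB\n", sum/1024}'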

Best Answer

You don't actually say what web server software you are using. If you are talking about Apache, though (and it seems likely given the multi-process model), then you should look at the MaxRequestsPerChild directive.

If, for example, you're running PHP, Ruby or Perl apps that (like most) are not especially careful about memory leaks, then you should probably knock MaxRequestsPerChild down to around 40 or so. What a good value is does vary a bit, though: some application stacks have much more cost associated with restarting processes than others, and some have much worse memory-leak issues than others. I've set MaxRequestsPerChild anywhere from 5 to 1000 in different circumstances, but it's generally best to start low and raise it by degrees while it feels safe to do so.
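Applied to the prefork block quoted in the question, that would be something along these lines (40 is just the suggested starting point, not a magic number):

# recycle each child after 40 requests so leaked memory is given back sooner
MaxRequestsPerChild  40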

You should expect some increase in memory use after start-up under normal circumstances; it should level off after a while.

If you did leave your server unattended, and it ran out of memory, then it would likely start using swap, and get horribly slow. Because requests aren't being dealt with quickly, more work would pile up, and it would tend to consume more memory unless limits on numbers of processes prevent that. You want to think a bit about the limits on the numbers of processes, and how much memory you think your server would start using under such circumstances.
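As a rough back-of-the-envelope check using the figures from the question (not measured values): if each httpd child grows to around 130 MB and MaxClients stays at 15, the worst case is roughly 15 × 130 MB ≈ 1950 MB for Apache alone. That still fits in 3 GB, but it leaves little headroom for the operating system and anything else running on the box once the server is under sustained load.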

You also don't want to have too much swap. If you have a lot of swap, your server will be more or less entirely unresponsive while it slowly consumes its swap memory. Either you'll intervene with a reboot (you're unlikely to get a shell to work), or you'll use all your swap up and the OOM killer will start killing processes. If it comes to this, you'd actually rather the OOM killer kicked in sooner. Excess swap just makes the downtime longer. The common recommendation to have twice as much swap as RAM is completely inappropriate for most web servers.
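If you want to see what you currently have, the standard tools will show it (nothing Apache-specific here):

# show total/used memory and swap in MB
free -m
# list active swap devices and their sizes
swapon -s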

Raise your MinSpareServers and MaxSpareServers. I'd put the maximum up to 15 or so; what's the point of killing them off below that? The minimum should be at least 5.
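In the prefork block from the question, that suggestion would look roughly like this (values taken straight from the advice above):

MinSpareServers    5
MaxSpareServers   15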