Heavy Apache memory usage

Tags: apache-2.2, memory

Recently I've noticed that my httpd processes have started to consume massive amounts of memory. After a while they use almost all of the server's 2GB of RAM, leaving nothing for anything else. Here's what top tells me:

  PID USER   PR NI VIRT RES  SHR S %CPU %MEM TIME+   COMMAND
26409 apache 15 0  276m 152m 28m S 0    7.4  0:59.12 httpd
26408 apache 15 0  278m 151m 28m S 0    7.4  1:03.80 httpd
26410 apache 15 0  277m 149m 26m S 0    7.3  0:57.22 httpd
26405 apache 15 0  276m 148m 25m S 0    7.3  0:59.20 httpd
26411 apache 16 0  276m 146m 23m S 0    7.2  1:09.18 httpd
17549 apache 15 0  276m 144m 23m S 0    7.0  0:36.34 httpd
22095 apache 15 0  276m 136m 14m S 0    6.6  0:30.56 httpd

It seems to me that each httpd process does not free its memory after handling a request, so they all sit at ~270MB, which is BAD. Is there a way to find out where all the memory goes and why it stays allocated? I haven't done any server tweaking lately, so I'm sure it's not me who messed something up (I haven't had this problem before).

The server is used to serve PHP apps.

EDIT: Apache is configured with prefork module and MaxRequestsPerChild is set to 4000.

Best Answer

The quick fix is to set MaxRequestsPerChild to a finite number (for example, 10000) so that Apache restarts each worker after it has served that many requests. Whatever memory the child has accumulated is released when it exits and is replaced.
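As a sketch, the relevant prefork settings might look like this in httpd.conf (the surrounding values are illustrative defaults, not tuned for any particular workload):

```apache
# Prefork MPM: recycle each child after 10000 requests so any memory it
# has accumulated (e.g. from PHP) is returned to the OS when it exits.
<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           50
    MaxRequestsPerChild  10000
</IfModule>
```

Setting MaxRequestsPerChild to 0 would disable recycling entirely, which is exactly what you want to avoid when children leak or cache memory.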

That 276m number isn't how much each process is actually using, though. An explanation of the values shown in top helps here:

VIRT: Virtual Image (kb). The total amount of virtual memory used by the task, including all code, data, and shared libraries, plus pages that have been swapped out. (If you are using APC, the memory space it uses is also included in this value.)

RES: Resident size (kb). The non-swapped physical memory a task has used.

SHR: Shared Mem size (kb). The amount of shared memory used by a task. It simply reflects memory that could potentially be shared with other processes.

In top, you can also add a DATA column: Data+Stack size (kb), the amount of physical memory devoted to something other than executable code, also known as the 'data resident set' size or DRS.

That DATA value more closely matches the unique memory used by that particular process, which is probably not that much. If adding up those 276m figures gives you a number near 2GB, you're double-counting the shared pages over and over.
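One way to total memory without double-counting shared pages is to sum PSS ("proportional set size") from /proc/PID/smaps, which divides each shared page evenly among the processes mapping it. A minimal sketch, assuming a Linux kernel new enough to report Pss (2.6.25+):

```shell
# Sum the Pss lines of one process's smaps; the result is in kB.
pss_kb() {
    awk '/^Pss:/ { sum += $2 } END { print sum + 0 }' "/proc/$1/smaps"
}

# Per-worker PSS for every httpd process:
for pid in $(pgrep httpd); do
    echo "$pid: $(pss_kb "$pid") kB"
done

# Grand total across all workers -- a fair estimate of what Apache
# really costs you, unlike summing the RES column from top:
pgrep httpd | while read -r pid; do pss_kb "$pid"; done |
    awk '{ total += $1 } END { printf "total PSS: %.1f MB\n", total/1024 }'
```

If your kernel doesn't expose Pss, RES minus SHR per process is a cruder but similar approximation of each worker's unique memory.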