MySQL – Apache and MySQL taking all the memory? Maximum connections

apache-2.2, connection, dedicated-server, memory, MySQL

I've had one of our servers going down (network-wise) recently while keeping its uptime (so the server isn't losing power). I asked my hosting company to investigate and was told, after investigation, that Apache and MySQL were using 80% of the memory at all times, peaking at 95%, and that I might need to add more RAM to the server.

One of their justifications for adding more RAM was that I was using the default max connections settings (125 for MySQL and 150 for Apache), and that to handle those 150 simultaneous connections I would need at least 3 GB of memory instead of the 1 GB I currently have.

Now, I understand that tuning the max connections settings might be better than leaving the defaults, but I didn't feel it was a concern yet: servers with the same configuration have handled far more traffic than the current one or two visitors we get before launch, so I told myself I'd tune them later based on actual visit patterns.

I've also always known that Apache is more memory-hungry under default settings than competitors such as nginx and lighttpd. Nonetheless, looking at my machine's stats, I'm trying to see how my hosting company arrived at those numbers.

I'm getting:

# free -m
             total       used       free     shared    buffers     cached
Mem:          1000        944         56          0        148        725
-/+ buffers/cache:         71        929
Swap:         1953          0       1953

Which I guess means that, yes, the server has committed around 95% of its memory at the moment, but I also thought the -/+ buffers/cache row means that only 71 MB of the 1000 MB total is actually used by applications right now.
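(The second row of free is just arithmetic on the first; a quick sketch to double-check my reading, using the numbers from the output above:)

```python
# Values (in MB) taken from the "free -m" output above
total, used, free_mem = 1000, 944, 56
buffers, cached = 148, 725

# "-/+ buffers/cache" row: memory actually held by applications
# vs. memory the kernel could hand back by dropping buffers/cache
app_used = used - buffers - cached       # 944 - 148 - 725 = 71
really_free = free_mem + buffers + cached  # 56 + 148 + 725 = 929

print(app_used, really_free)  # matches the 71 / 929 row
```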

Also I don't see any swapping:

# vmstat 60
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0  57612 151704 742596    0    0     1     1    3   11  0  0 100  0
 0  0      0  57604 151704 742596    0    0     0     1    1   24  0  0 100  0
 0  0      0  57604 151704 742596    0    0     0     2    1   18  0  0 100  0
 0  0      0  57604 151704 742596    0    0     0     0    1   13  0  0 100  0

And finally, while requesting a page:

top - 08:33:19 up 3 days, 13:11,  2 users,  load average: 0.06, 0.02, 0.00
Tasks:  81 total,   1 running,  80 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.3%us,  0.3%sy,  0.0%ni, 98.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1024616k total,   976744k used,    47872k free,   151716k buffers
Swap:  2000052k total,        0k used,  2000052k free,   742596k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND           
24914 www-data  20   0 26296 8640 3724 S    2  0.8   0:00.06 apache2            
23785 mysql     20   0  125m  18m 5268 S    1  1.9   0:04.54 mysqld             
24491 www-data  20   0 25828 7488 3180 S    1  0.7   0:00.02 apache2            
    1 root      20   0  2844 1688  544 S    0  0.2   0:01.30 init               
...

So, I'd like to know, experts of Server Fault:

  1. Do I really need more RAM at the moment? (Update: I'd really like to understand why they say I'm using all my current RAM, and how to justify the RAM upgrade they recommend.)
  2. How did they calculate that handling 150 simultaneous connections would require 3 GB?

Thanks for your help!

Best Answer

Based on the snapshot here, memory utilization looks fine; an upgrade isn't necessary and probably wouldn't help.

free and top both show only ~50 MB free, but once buffers and cache are accounted for, the truly available amount is much higher. Linux uses any unallocated physical RAM as cache and buffer space; if your applications need more memory, their allocations (via brk/sbrk/mmap) will reclaim it from the cache. There are performance implications if that happens frequently, but that's probably not a concern here at all.
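As for the 3 GB figure for 150 connections: one common back-of-the-envelope sizing multiplies a worst-case per-process footprint by the connection limit. The per-child and per-connection figures below are assumptions for illustration, not values read from your config:

```python
# Hypothetical worst-case sizing, NOT measured values.
# Apache (prefork): assume each child balloons to ~20 MB under load
apache_children = 150      # MaxClients
mb_per_child = 20          # assumed worst-case resident size per child
apache_worst_case = apache_children * mb_per_child   # 3000 MB

# MySQL: global buffers + per-connection buffers * max_connections
global_buffers_mb = 64     # e.g. key_buffer + InnoDB pool (assumed)
per_conn_mb = 2.5          # sort/join/read buffers + thread stack (assumed)
mysql_worst_case = global_buffers_mb + 125 * per_conn_mb  # 376.5 MB

print(apache_worst_case, mysql_worst_case)
```

150 children at ~20 MB each is almost certainly where a "3 GB" figure comes from. In practice, nowhere near all 150 slots are in use at once, as your top output shows, so it's a theoretical ceiling rather than a prediction.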

Now, it's very possible that you're indeed spiking on occasion during times of high traffic or maintenance script runs. Do you have any cron jobs scheduled that execute large queries as part of a cleanup process or anything? Is there anything that might swallow up RAM temporarily and then release it? Account freeze batch jobs or anything?

All that said, can you describe your problem a bit more? If you're not able to ping the system, I highly, highly doubt it's a memory problem at all. Does your server just go away for a while and then come back? Check your web server logs during that period, is there a period of inactivity for everyone, or just you? If your machine is just falling off of the planet for a period of time, I'd vote that it's something network related.

What about dmesg output? Net device errors?
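A couple of quick checks along those lines (the grep patterns are just a starting point; adjust the interface name to whatever your system actually uses):

```shell
# Kernel ring buffer: look for link flaps, NIC resets, or OOM-killer activity
dmesg | grep -iE 'link|eth|oom|error' | tail -n 20

# Per-interface packet, error, and drop counters straight from the kernel
cat /proc/net/dev
```

Rising error or drop counters between two snapshots of /proc/net/dev during an outage would point strongly at the network rather than at memory.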