450 children at roughly 10 MB RSS each is over 4 GB of potential memory usage, more than enough to push your c1.small instance into swap. Swapping is almost always a downward spiral for Apache servers.
I'd say the next few things I'd be looking to check are:
- does the Apache error log mention hitting MaxClients?
- does dmesg or /var/log/messages mention the OOM killer at all?
- is the server swapping?
- is the memory-usage growth slow and steady, or spiky and rapid-onset?
The first two are just a matter of reading text files. The third you can check from the CLI, though graphs will help, and for the fourth you need graphs. Set up Apache's mod_status (it's probably already there; just uncomment it) and point munin/collectd/cacti at it.
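For the quick text-file and CLI checks, something along these lines works (log paths are assumptions; they vary by distro, with Debian/Ubuntu shown here):

```shell
# 1. Has Apache complained about hitting MaxClients?
grep -i "MaxClients" /var/log/apache2/error.log

# 2. Has the kernel's OOM killer fired?
dmesg | grep -i "out of memory"
grep -i "oom" /var/log/messages

# 3. Is the box swapping right now?
free -m        # compare swap "used" against total
vmstat 5 3     # watch the si/so columns; sustained nonzero means active swapping
```

A nonzero grep is your smoking gun for the first two; for the third, occasional swap usage is fine, but steady si/so traffic is not.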
If you confirm the cause is memory exhaustion and swapping, there's plenty you can do from there. First off, lower MaxClients to around 150. That'll leave some room for other processes and the filesystem cache (is MySQL on this box? If so, leave more). RSS is a rough metric to extrapolate from like this, but it's all we've got. Once you've tuned that, watch the graphs over time and see whether you have room to go up or down. From there you can focus on: 1) skinnier Apache children (fewer modules, tighten up the PHP config), 2) having Apache do less (some mix of a CDN, alternative HTTP servers, and HTTP proxy options), 3) upgrayyed $$$.
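As a sketch, the MaxClients change would look something like this in the prefork MPM config, assuming Apache 2.2 directive names (in 2.4 MaxClients was renamed MaxRequestWorkers); the other values here are illustrative defaults, not tuned recommendations:

```apache
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    # Recycle children periodically so leaky modules can't grow RSS forever
    MaxRequestsPerChild 1000
</IfModule>
```

Setting MaxRequestsPerChild to a finite value is a common companion tweak: it trades a little fork overhead for a cap on per-process memory creep.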
(Strange post, so hopefully this won't be as strange a reply).
On the subject of security flaws: it is generally considered bad practice to store cgi-bin scripts within the document root of the web server. Even the W3C alludes to it under "Are compiled languages such as C safer ..." in their World Wide Web Security FAQ:
Consider the following scenario. For convenience's sake, you've decided to identify CGI scripts to the server using the .cgi extension. Later on, you need to make a small change to an interpreted CGI script. You open it up with the Emacs text editor and modify the script. Unfortunately the edit leaves a backup copy of the script source code lying around in the document tree. Although the remote user can't obtain the source code by fetching the script itself, he can now obtain the backup copy by blindly requesting the URL:

http://your-site/a/path/your_script.cgi~

(This is another good reason to limit CGI scripts to cgi-bin and to make sure that cgi-bin is separate from the document root.)
This is not as significant a threat as the ability to write a file within the document root. However, an attacker could obtain the source code of the CGI script, devise a targeted attack against it, and use it as a stepping stone into the server.
To mitigate this, you can add the following lines to lighttpd.conf (or some variation thereof) to map /cgi-bin/ to a directory outside the /var/www/lighttpd document root.
$HTTP["url"] =~ "/cgi-bin/" { cgi.assign = ( "" => "" ) }
alias.url = ( "/cgi-bin/" => "/usr/lib/cgi-bin/" )
This requires both the cgi and alias modules for lighttpd.
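As a belt-and-braces measure against the editor-backup scenario quoted above, lighttpd's mod_access can also refuse to serve backup files anywhere in the tree; the exact suffix list here is an assumption you'd tailor to your editors:

```lighttpd
# Refuse to serve editor backup/swap files (requires mod_access)
url.access-deny = ( "~", ".bak", ".swp" )
```

Debian's stock lighttpd config ships a similar deny list, so check whether yours already has one before adding it.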
Best Answer
It really depends on what you want to do and what resources you have. But unless you have very heavy traffic (which you do not; 20k/day is fairly light) or very complex/dynamic pages that require a lot of server processing, just pick a server with the required features. Apache is a safe bet, in my opinion, and it is not slow, as is often implied.
Just as a reference, one web site I manage runs Apache on an old dual-CPU/2 GB RAM box and serves over a million files per day without breaking a sweat (mostly images; dynamic pages run on an app server).