By default the Nginx source does not define SCRIPT_FILENAME in the fastcgi_params file, so unless the repository you installed Nginx from adds it, you need to do it yourself.
Check whether the following line is in your fastcgi_params file:
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
and if not, add it.
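For reference, a typical PHP location block where fastcgi_params gets included looks something like this (the backend address and the exact location pattern are examples; adjust them for your setup):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    # If your fastcgi_params file lacks it, you can also define it
    # directly in the location block:
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;   # or unix:/path/to/php-fpm.sock
}
```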
I suspect your Varnish cache is not caching anywhere near enough of the hits. Here's what I would do in your situation:
Lower PHP max children to 100 or even 50 (if Varnish does its job properly you don't need them).
Also remove the max requests line so that the PHP processes don't respawn too quickly; frequent respawns clear APC too often, which is also bad.
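In php-fpm pool terms, the two changes above look roughly like this (the pool name and the spare-server numbers are illustrative, not prescriptive):

```ini
; php-fpm pool sketch - lower max_children, drop max_requests
[www]
pm = dynamic
pm.max_children = 50        ; lowered; Varnish should absorb most hits
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
; pm.max_requests = 500     ; removed/commented out so workers are not
;                           ; recycled constantly, keeping APC's opcode
;                           ; cache warm
```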
Also, "if" is not good according to Nginx - http://wiki.nginx.org/IfIsEvil
I would change this:
if (!-e $request_filename) {
    rewrite ^(.+)$ /index.php?q=$1 last;
}
to:
try_files $uri $uri/ /index.php?$args;
provided your version of Nginx supports it (pretty certain it does if your Nginx version is > 0.7.51).
You should also look at inserting the W3TC Nginx rules directly into your vhost file to enable proper disk-enhanced page caching (which is faster than APC caching with Nginx).
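To give an idea of what those rules do: W3TC's disk-enhanced mode writes generated pages to files under wp-content/cache, and the Nginx rules serve those files directly before PHP is ever invoked. A heavily simplified sketch of the pattern (the real rules are generated by the plugin; the cache path and cookie names below are illustrative):

```nginx
# Serve W3TC's disk-enhanced page cache directly from Nginx (sketch).
# Skip the cache for POST requests and for logged-in/commenting users.
set $cache_uri $request_uri;
if ($request_method = POST) {
    set $cache_uri 'null cache';
}
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
    set $cache_uri 'null cache';
}

location / {
    try_files /wp-content/cache/page_enhanced/$host/$cache_uri/_index.html
              $uri $uri/ /index.php?$args;
}
```

Use the rules W3TC actually generates for your install rather than this sketch; they account for SSL, mobile user agents, and query strings.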
Take a look at the following Varnish VCL, which I use for sites - you will need to read through it and edit a few things for your website. It also assumes that there are only WP sites on the server, and only one site at that; it can easily be modified for more sites (take a look at the cookie section).
generic vcl: https://gist.github.com/b7332971a848bcb7ecef
With this config I would argue for removing fastcgi_cache, to prevent a cache chain from forming, where tracking down stray stale cache entries becomes more difficult.
Also tell W3TC that Varnish is at 127.0.0.1 and it will purge it for you ;)
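For the purging to work, the VCL needs to accept PURGE requests from localhost. A minimal sketch of that part (Varnish 3.x syntax here; the linked gist presumably has its own version, and older Varnish 2.x uses purge_url instead of the purge statement):

```vcl
# Only allow purges from the local machine, where W3TC runs.
acl purge {
    "127.0.0.1";
}

sub vcl_recv {
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }
}

sub vcl_hit {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}

sub vcl_miss {
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
}
```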
I deployed this to a server on Wednesday evening (with a few domain-specific modifications) that was handling 2,500 active site visitors. It reduced the load to less than 1, and the approximate number of running PHP children was around 10-20 (this number does depend on the number of logged-in users and other factors like cookies). That server did have much more RAM, but the principle is the same: you should be able to easily handle the number of visitors you get at peak.
Best Answer
Nginx interprets a number of headers when used as a reverse proxy, in order to honor the HTTP specification for intermediate caches. This means that these headers, if present in your app's replies, will change Nginx's caching behaviour.
However, Nginx ships with the fastcgi_ignore_headers directive in case you want to turn this off. So what you are looking for is:
fastcgi_ignore_headers Cache-Control Pragma;
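In context, the directive sits alongside the rest of the fastcgi_cache configuration. A sketch, where the zone name, key, and validity time are placeholders for whatever your existing config uses:

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;

    fastcgi_cache my_zone;                # your existing cache zone
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
    fastcgi_cache_valid 200 10m;

    # Cache even when the app sends Cache-Control/Pragma headers
    # that would otherwise disable caching:
    fastcgi_ignore_headers Cache-Control Pragma;
}
```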