Correct way in newer versions of nginx
It turns out my first answer to this question was correct at a certain time, but it has since turned into another pitfall - to stay up to date please check Taxing rewrite pitfalls
I have been corrected by many SE users, so the credit goes to them, but more importantly, here is the correct code:
server {
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name my.domain.com;

    # add Strict-Transport-Security to prevent man-in-the-middle attacks
    add_header Strict-Transport-Security "max-age=31536000" always;

    [....]
}
I suspect your Varnish cache is not caching anywhere near enough of the hits.
Here's what I would do in your situation:
Lower PHP max_children to 100 or even 50 (if Varnish does its job properly you don't need them).
Also remove the max_requests line so that the PHP processes don't respawn too quickly; this prevents APC from being cleared too quickly, which is also bad.
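In php-fpm pool configuration terms, that tuning would look roughly like the following sketch (the file path, pool name, and spare-server values are assumptions for illustration; the point is the lowered pm.max_children and the absent pm.max_requests):

```ini
; /etc/php-fpm.d/www.conf  (path and pool name are assumptions)
[www]
pm = dynamic
pm.max_children = 50       ; lowered - Varnish should absorb most of the hits
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
; pm.max_requests deliberately left out so workers are not respawned
; too quickly, which would clear APC's per-process cache too often
```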
Also, "if" is not good according to nginx - http://wiki.nginx.org/IfIsEvil
I would change this:

if (!-e $request_filename) {
    rewrite ^(.+)$ /index.php?q=$1 last;
}

to:

try_files $uri $uri/ /index.php?$args;

Your version of nginx almost certainly supports it (if your nginx version is > 0.7.51 then it supports it).
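In context, the change would sit inside the server block something like this (a sketch; the fastcgi details are assumptions based on a typical WordPress vhost, so adjust the fastcgi_pass target to your php-fpm socket or port):

```nginx
location / {
    # serve the file or directory if it exists, otherwise hand off to WordPress
    try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
    include fastcgi_params;   # assumes the stock fastcgi_params file
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}
```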
You should also look at inserting the W3TC nginx rules directly into your vhost file to enable proper disk-enhanced caching of pages (which is faster than APC caching with nginx).
Take a look at the following Varnish VCL which I use for sites - you will need to read through it and edit a few things for your website. It also assumes that there are only WP sites on the server, and only 1 site on the server; it can easily be modified for more sites (take a look at the cookie section).
generic vcl: https://gist.github.com/b7332971a848bcb7ecef
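The cookie section being referred to typically looks something like the following (an illustrative Varnish 3-style sketch of the common WordPress pattern, not the actual contents of the gist):

```vcl
sub vcl_recv {
    # never cache admin/login traffic or logged-in users
    if (req.url ~ "wp-(admin|login)" || req.http.Cookie ~ "wordpress_logged_in") {
        return (pass);
    }
    # strip all other cookies so anonymous page views share one cache object
    unset req.http.Cookie;
}
```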
With this config I would argue for removing fastcgi_cache, to prevent possible issues with a cache chain occurring, whereby trying to locate stray stale cache entries becomes more difficult.
Also tell W3TC that Varnish is at 127.0.0.1 and it will purge it for you ;)
I deployed this to a server on Wednesday evening (with a few domain-specific modifications) that was handling 2500 active site visitors. It reduced load to less than 1, and the approximate number of running PHP children was around 10-20 (this number does depend on the number of logged-in users and other factors like cookies). That server did have much more RAM, but the principle is the same; you should easily be able to handle the number of visitors you get at peaks.
Best Answer
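For context, the directive being dissected below presumably looks something like this (the max_size value shown is illustrative, since the answer does not state it):

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=myCache:8m
                 max_size=100m inactive=1h;
```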
As the documentation says, nginx keeps all active keys and information about the cached data in a shared memory zone, whose name and size are configured by the keys_zone parameter. For completeness, let's break it down part by part:

/var/cache/nginx is the place where the actual cache is stored. Inside that folder, each cache file is a binary file, but you can easily spot the HTML tags inside it.

levels=1:2 sets the number of subdirectory levels in the cache.

keys_zone=myCache:8m defines a shared memory zone named myCache with a maximum size of 8 MB. It holds all active keys and the metadata of the cache. So whenever nginx checks whether a page is cached, it consults the shared memory zone first, then seeks the location of the actual cache in /var/cache/nginx if the cache exists.

max_size is the maximum size of the cache, i.e. the total size of the files in /var/cache/nginx.

inactive=1h specifies the maximum time cached data can be stored without being accessed. Cached data that is not accessed during the time specified by the inactive parameter gets removed from the cache regardless of its freshness.

How cache validation and deletion works
Taken from the nginx mailing lists:
Directive proxy_cache_valid specifies how long response will be considered valid (and will be returned without any requests to backend). After this time response will be considered "stale" and either won't be returned or will be depending on proxy_cache_use_stale setting.
Argument inactive of proxy_cache_path specifies how long response will be stored in cache after last use. Note that even stale responses will be considered recently used if there are requests to them.
As I understand it, here is pseudocode for how nginx works.

When a request comes in: nginx looks the key up in the shared memory zone; if a cached response exists and is still within its valid period, it is served from /var/cache/nginx and its last-used time is updated; otherwise the response is fetched from the backend and stored in the cache.

In another process, the cache manager performs this logic: any cached object that has not been accessed for longer than the inactive time is deleted, as is data exceeding max_size.
As long as requests keep accessing a particular cache entry, that object remains valid until 12h after it was put in the cache. After that, the cache entry is considered invalid, so nginx will fetch from the backend and reset the valid timer. But if the object is inactive (not accessed) for more than one hour - even within the 12h valid-cache period - nginx will delete it because of the inactive parameter.
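The timing rules above can be sketched as a toy simulation (a deliberate simplification in Python, not nginx's actual code; it assumes a 12h valid period and the 1h inactive setting, with times in seconds):

```python
VALID = 12 * 3600      # proxy_cache_valid 12h
INACTIVE = 1 * 3600    # inactive=1h

class Entry:
    def __init__(self, now):
        self.stored_at = now   # when the response was fetched from the backend
        self.last_used = now   # updated on every cache hit

def handle_request(cache, key, now):
    """Serve from cache if a fresh entry exists, else fetch and (re)store it."""
    entry = cache.get(key)
    if entry is not None and now - entry.stored_at <= VALID:
        entry.last_used = now          # a hit counts as "recently used"
        return "HIT"
    cache[key] = Entry(now)            # fetch from backend, reset the valid timer
    return "MISS"

def cache_manager(cache, now):
    """Background process: drop entries not accessed for longer than INACTIVE."""
    for key in [k for k, e in cache.items() if now - e.last_used > INACTIVE]:
        del cache[key]
```

Note how even inside the 12h valid window, an entry that goes unused for more than an hour is removed by the manager, so the next request for it is a MISS.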