I don't think that there is a way to explicitly invalidate cached items, but here is an example of how to do the rest. Update: As mentioned by Piotr in another answer, there is a cache purge module that you can use. You can also force a refresh of a cached item using nginx's proxy_cache_bypass - see Cherian's answer for more information.
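As a small sketch of the proxy_cache_bypass approach: map a request condition to the directive, and matching requests will skip the cached copy and fetch fresh from the upstream. The header name below is an arbitrary example, not a standard:

```nginx
# Hypothetical: requests carrying this custom header bypass the cache
# and the fresh upstream response replaces the stored entry.
proxy_cache_bypass $http_x_refresh_cache;
```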
In this configuration, items that aren't cached will be retrieved from example.net and stored. The cached versions will be served up to future clients until they are no longer valid (60 minutes).
Your Cache-Control and Expires HTTP headers will be honored, so if you want to explicitly set an expiration date, you can do that by setting the correct headers in whatever you are proxying to.
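For example, if the backend you are proxying to emits a header like the following, nginx will respect it when deciding how long to keep the cached copy (the 300-second value is purely illustrative):

```nginx
# On the backend: cap the cache lifetime of responses at 5 minutes.
add_header Cache-Control "public, max-age=300";
```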
There are lots of parameters you can tune - see the nginx Proxy module documentation for details on what each directive does:
http://nginx.org/r/proxy_cache_path
http {
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;
    proxy_temp_path /var/www/cache/tmp;

    server {
        location / {
            proxy_pass http://example.net;
            proxy_cache my-cache;
            proxy_cache_valid 200 302 60m;
            proxy_cache_valid 404 1m;
        }
    }
}
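To verify that caching is actually working, you can expose nginx's $upstream_cache_status variable in a response header. The header name X-Cache-Status is a common convention, not anything required:

```nginx
# Add inside the location block above; clients will see
# X-Cache-Status: MISS / HIT / EXPIRED / BYPASS, etc.
add_header X-Cache-Status $upstream_cache_status;
```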
Do you anticipate using Edge Side Includes (ESI)? If so, note that the Nginx ESI module is broken and has open bugs. Varnish supports ESI, but it doesn't compress ESI output, so you're somewhat stuck putting Nginx in front of it to compress ESI-enabled pages. (I work with Python frameworks rather than Rails, but the considerations are similar.)
With your current setup, you could do something like:
Nginx -> Apache -> Passenger -> Rails
Varnish -> Apache -> Passenger -> Rails
Both would drop in front of your existing system. With Nginx, you could also give it direct access to the static files and allow it to serve those without having to proxy through Apache. Using the Location directive, you can slice off portions of your webspace and prevent that from having to go through the proxy.
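A minimal sketch of that idea, assuming Apache listens on 127.0.0.1:8080 and the static files live under a hypothetical /var/www/myapp/public:

```nginx
# Static assets served directly by nginx, never touching Apache
location /assets/ {
    root /var/www/myapp/public;
}

# Everything else goes through the proxy chain
location / {
    proxy_pass http://127.0.0.1:8080;  # Apache -> Passenger -> Rails
}
```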
However, if you wanted to move completely to Nginx, your infrastructure becomes:
nginx -> passenger -> rails (nginx -> uwsgi -> python)
If you add Varnish, you end up with:
varnish -> nginx -> passenger -> rails
unless you use ESI, in which case you end up with:
nginx -> varnish -> nginx -> passenger -> rails
At some point, removing Varnish from the mix becomes quite tempting. That said, recent Varnish releases are still faster than Nginx's caching, and Varnish gives you much finer-grained control over what gets cached and how. While both Nginx and Varnish offer quite a bit of control, Varnish's VCL lets you script behavior (even dropping down to inline C) that neither server offers out of the box, without touching the daemon's source code. Whether that flexibility matters is something only you can judge.
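As a small taste of VCL (without going as far as inline C), the following sketch is a classic example: it normalizes the Accept-Encoding header in vcl_recv so that equivalent requests share a single cache entry instead of fragmenting the cache per client:

```vcl
sub vcl_recv {
    # Collapse Accept-Encoding variants so all gzip-capable
    # clients share one cached object per URL.
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } else {
            unset req.http.Accept-Encoding;
        }
    }
}
```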
Since you are using Apache currently, I would be more inclined to put Varnish in front unless you are going to migrate to Nginx and remove Apache completely. Varnish in your case is more of a drop-in solution. If you decide that you're going to use ESI in the future, you would need to run both.
Start off with PHP APC. That's a good start for any site. Raise the cache size from the default and give it something like 128M to play with.
Install Memcached, and use that for caching query results.
Install WordPress's W3 Total Cache plugin, and turn everything on.
Set up an Amazon S3 bucket with CloudFront in front of it, and configure it as the CDN for your WordPress site.
Configure Varnish as a reverse proxy in front of Apache, but remember you'll have to "pass" (bypass the cache for) any request containing a WordPress login cookie, or you'll end up with an identity crisis where everyone is served logged-in user content.

That's it. That's all there really is to it. It's actually deceptively complicated in the details, but those are the basic steps.
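A minimal VCL sketch of that login-cookie rule, assuming the standard wordpress_logged_in cookie name:

```vcl
sub vcl_recv {
    # Logged-in WordPress users must bypass the cache entirely,
    # otherwise anonymous visitors may be served their pages.
    if (req.http.Cookie ~ "wordpress_logged_in") {
        return (pass);
    }
}
```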