Nginx, Varnish, HAProxy, Webserver – Optimal Ordering


I've seen people recommend combining all of these in a flow, but they seem to have a lot of overlapping features, so I'd like to dig into why you might want to pass a request through three different programs before it hits your actual web server.

nginx:

  • ssl: yes
  • compress: yes
  • cache: yes
  • backend pool: yes

varnish:

  • ssl: no (stunnel?)
  • compress: ?
  • cache: yes (primary feature)
  • backend pool: yes

haproxy:

  • ssl: no (stunnel)
  • compress: ?
  • cache: no
  • backend pool: yes (primary feature)

Is the intent of chaining all of these in front of your main web servers simply to get the benefit of each one's primary feature?

It seems quite fragile to have so many daemons chained together doing similar things.

What is your deployment and ordering preference and why?

Best Answer

Simply put:

HAProxy is the best open-source load balancer on the market.
Varnish is the best open-source static-file cache on the market.
Nginx is the best open-source web server on the market.

(Of course, this is my opinion, and that of many other people.)

But generally, not all queries go through the entire stack.

Everything goes through HAProxy and Nginx (or multiple Nginx instances).
The only difference is that you "bolt on" Varnish for static requests.

  • every request is load-balanced for redundancy and throughput (good, that's scalable redundancy)
  • any request for static files first hits the Varnish cache (good, that's fast)
  • any dynamic request goes straight to the backend (great, Varnish isn't used)

Overall, this model fits a scalable, growing architecture (leave HAProxy out if you don't have multiple servers).
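
To make the "bolt on Varnish for static requests" routing concrete, here is a minimal HAProxy sketch. The addresses, ports, and file extensions are placeholders I've assumed for illustration, not anything from the original answer:

    # minimal haproxy.cfg sketch (hypothetical addresses and ports)
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend http-in
        bind *:80
        # requests for static assets go to the Varnish tier
        acl is_static path_end .css .js .png .jpg .gif .ico
        use_backend varnish_static if is_static
        # everything else goes straight to the Nginx backends
        default_backend nginx_dynamic

    backend varnish_static
        server varnish1 192.168.0.20:6081 check

    backend nginx_dynamic
        balance roundrobin
        server web1 192.168.0.30:80 check
        server web2 192.168.0.31:80 check

With this, a cache miss in Varnish can still be fetched from Nginx, while dynamic pages never pass through the cache at all.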

Hope this helps :D

Note: I'd actually also introduce Pound for SSL requests :D
You can dedicate a server to decrypting SSL requests and passing plain HTTP requests on to the backend stack :D (it keeps the whole stack quicker and simpler)
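
For the Pound idea, a minimal listener sketch might look like this; the certificate path and backend address are placeholders, assuming Pound terminates HTTPS and forwards plain HTTP to the HAProxy front end:

    # minimal pound.cfg sketch (hypothetical paths and addresses)
    ListenHTTPS
        Address 0.0.0.0
        Port    443
        Cert    "/etc/pound/site.pem"
        Service
            BackEnd
                # forward decrypted traffic to the HAProxy front end
                Address 192.168.0.10
                Port    80
            End
        End
    End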
