Nginx – Architecting nginx for redundancy

Tags: failover, load-balancing, nginx

This may be a stupid question, but after googling a while I can't find the answer or just don't know how to ask it.

I have a web app running on a server named 'myserver1'. I've brought up 'myserver2' with an identical instance of the web app, and set up replication between the two databases on the two boxes. Now, I'd like to employ nginx to do some load balancing, plus make one server take over if the other keels over.

Most of the nginx documentation is written around a simple scenario like this, but it seems to indicate that you put an nginx server in front of the web servers. That would seem to be another single point of failure. How do you make nginx itself redundant? Can you just run nginx on both web server boxes? If so, where do you point the DNS entry of myapp.mydomain.com?

EDIT: I guess I should add that this is for an internal app with a relatively small user base. My primary concern is that our internal users can still get to it if we lose a server or connectivity to one of the data centers. I just can't see how to do that on nginx without introducing another single point of failure.

Best Answer

The only way to load-balance with nginx is to have a single front-end (reverse-proxy) host that balances load across the backend servers.

The idea behind this design is that the real load lands on the backends only, and that your single entry point can always cope with whatever traffic it has to handle, since it merely proxies requests and does very little processing itself.
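For concreteness, here is a minimal sketch of that design; the hostnames, port, and file path are placeholders rather than values from the question. The front-end host carries an `upstream` block listing both application servers:

```nginx
# /etc/nginx/conf.d/myapp.conf on the front-end host (illustrative values)
upstream myapp_backend {
    server myserver1.mydomain.com:8080;   # first app instance
    server myserver2.mydomain.com:8080;   # second app instance
    # For active/passive instead of round-robin, mark one server as "backup":
    # server myserver2.mydomain.com:8080 backup;
}

server {
    listen 80;
    server_name myapp.mydomain.com;

    location / {
        proxy_pass http://myapp_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The DNS entry for myapp.mydomain.com then points at this front-end host, which is exactly why it becomes the single point of failure the question is worried about.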

What you are talking about is actually failover, not load-balancing. Your concern is the failure of your single entrypoint.

As @coding_hero explained, this has nothing to do with nginx; it has to be dealt with at the underlying layers (OS/network).

One way of doing it is described on the following page (it's an old example targeting Debian oldstable, so the commands may need to be freshened up): http://linuxmanage.com/fast-failover-configuration-with-drbd-and-heartbeat-on-debian-squeeze.html. Heartbeat is a well-known technology that lets several identical servers monitor each other, elect a master, and fail over to a slave when needed.
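For illustration, a minimal R1-style Heartbeat setup in the spirit of that article might look like the sketch below. The node names come from the question, but the floating IP address, netmask, interface, and shared secret are made-up placeholders, and the `nginx` resource assumes an init script of that name exists on both boxes. The key idea is that myapp.mydomain.com resolves to the floating IP, which Heartbeat moves to whichever node is currently the master:

```
# /etc/ha.d/ha.cf -- identical on both nodes (illustrative values)
logfacility local0
keepalive 2        # heartbeat interval, in seconds
deadtime 10        # declare the peer dead after 10 seconds of silence
bcast eth0         # interface used for heartbeat messages
auto_failback on   # give resources back to the primary once it recovers
node myserver1
node myserver2

# /etc/ha.d/authkeys -- identical on both nodes, mode 0600
auth 1
1 sha1 replace-with-a-shared-secret

# /etc/ha.d/haresources -- identical on both nodes
# myserver1 is the preferred holder of the floating IP and the nginx service
myserver1 IPaddr::192.0.2.10/24/eth0 nginx
```

If you run nginx on both boxes, as the question suggests, each box can proxy to both application instances, and Heartbeat (or a similar tool) simply decides which box currently owns the address that DNS points to.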

There is even dedicated network hardware that does the same job, rerouting (or perhaps reconfiguring routers on the fly to reroute) traffic to the currently elected master.
