Nginx – Bottlenecks and Load Balancing


Why doesn't nginx itself become a bottleneck when serving as a load balancer? And if it does become a single-point bottleneck under some conditions, is there any solution other than a hardware load balancer?

Best Answer

When you use a software load balancer or caching proxy, that load balancer or caching proxy is a bottleneck – but it's a much wider bottleneck than before.

Assume we have an application server that can only handle 50 requests per second, since it has to query the database and do all kinds of expensive stuff. How can we scale?

If we add a second application server, that alone won't help. We also need a load balancer that distributes requests between these application servers. Because the load balancer itself does nothing except pass connections through, let's say it can handle 1000 requests per second – that's 20× as much as a single application server, while our actual capacity is 2·50 = 100 requests per second with two application servers. We can now add further application servers as the need arises.
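For nginx specifically, such a setup can be sketched with an `upstream` block. This is a minimal illustration, not from the original answer; the addresses and ports are placeholders:

```nginx
# Sketch of a round-robin load-balancing config for nginx.
# The upstream addresses below are illustrative placeholders.
upstream app_servers {
    # nginx distributes requests round-robin by default,
    # so each application server receives roughly half the load.
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;

    location / {
        # Pass every incoming request through to one of the
        # application servers defined above.
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Adding a third application server is then a one-line change in the `upstream` block.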

In most cases, this is absolutely sufficient, and it's unlikely that you'll push the load balancer to its limits. If you want to scale beyond this load balancer, you cannot simply add another software load balancer in front of the existing one – the front balancer would still have to handle all of the traffic itself. Instead, you will have to beef up the hardware (or switch to a load balancer with better performance).

If you get beyond the capacity of a single load balancer (e.g. when you hit the range of millions of daily users), or if you need to maintain servers in multiple geographic locations, then you can use a DNS server that resolves your domain name to different IP addresses. Content Delivery Networks use this technique to distribute load across a large number of distributed proxies. Instead of going that route yourself, it will likely be much easier to put your servers behind an existing commercial CDN that handles all of that for you.
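The DNS-based approach can be illustrated with multiple A records for the same name, so that resolvers rotate clients across several load balancers. This is a hypothetical zone-file sketch; the domain and IP addresses are documentation placeholders, not real infrastructure:

```
; Illustrative BIND-style zone-file fragment: round-robin DNS.
; Resolvers cycle through these A records, spreading clients
; across three separate load balancers.
www.example.com.  300  IN  A  192.0.2.10
www.example.com.  300  IN  A  198.51.100.10
www.example.com.  300  IN  A  203.0.113.10
```

Note that plain round-robin DNS has no health checking; a CDN or managed DNS service typically adds failover and geographic routing on top of this basic mechanism.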
