Nginx – How does the nginx websocket proxy work?

linux-networking, nginx, reverse-proxy, ulimit, websocket

I'm wondering how nginx handles a large number of active websocket connections. There are several limitations involved, like the number of open files, the maximum of roughly 65k TCP connections between one IP and one (IP, port) pair, and so on.

When I use nginx as a reverse proxy with, say, 5 nodes in a websocket upstream, can there be up to 65k active TCP connections between nginx and each node (assuming I have more than 300k active clients served by nginx)? Or does the nginx websocket proxy work in another way?
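For reference, here is a rough sketch of the kind of setup I'm describing (the upstream name, node addresses, ports, and the /ws/ location are placeholders, not my real config):

    # nginx.conf (sketch) -- 5 websocket backend nodes behind one nginx proxy
    upstream websocket_backend {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
        server 10.0.0.3:8080;
        server 10.0.0.4:8080;
        server 10.0.0.5:8080;
    }

    server {
        listen 80;

        location /ws/ {
            proxy_pass http://websocket_backend;
            proxy_http_version 1.1;                    # HTTP/1.1 is required for the WebSocket upgrade
            proxy_set_header Upgrade $http_upgrade;    # pass the client's Upgrade header to the backend
            proxy_set_header Connection "upgrade";     # mark the proxied request as an upgrade request
            proxy_read_timeout 3600s;                  # keep long-lived idle connections from timing out
        }
    }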

My other question: which parameters (limits) should I tune to handle that many connections?

Best Answer

Your understanding is correct as far as it goes, but your real limit is going to be file handles. Each socket connection requires a Linux file handle, and the default ulimit on those is 1024. /proc/sys/fs/file-max sets the limit for the entire system. You will need to raise both to handle large numbers of concurrent nginx connections.
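As a rough sketch of where those knobs live (the numbers are illustrative, not recommendations for your workload):

    # Per-process open-file limit for the current shell (illustrative value)
    ulimit -n 100000

    # System-wide file handle limit, i.e. /proc/sys/fs/file-max (illustrative value)
    sysctl -w fs.file-max=500000

    # If nginx runs under systemd, the per-process limit goes in the unit override instead:
    # [Service]
    # LimitNOFILE=100000

    # nginx.conf: let each worker open enough handles and accept enough connections.
    # Note that each proxied websocket client consumes two handles on the proxy
    # (one to the client, one to the upstream node).
    worker_rlimit_nofile 100000;
    events {
        worker_connections 20000;
    }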

The NGINX folks have tested up to 50,000 connections on a six-core server:

https://www.nginx.com/blog/nginx-websockets-performance/

The reality is that if you want tens of thousands of connections in the real world, you need multiple reverse proxies behind round-robin DNS. This is, for example, how Amazon's Elastic Load Balancer works. If you look at something hosted on AWS, like Slack.com, and type 'nslookup slack.com', you'll get a list of IP addresses. Type it again and you'll get a different list, with the addresses rotated so a different one is at the head. These are Amazon ELB reverse proxies on round-robin DNS that forward requests to the actual application servers. The hard part at that point becomes registering and deregistering reverse proxies in DNS as they come and go, or managing IP address takeover if that's the route you take. Those are hard problems, which is why I use Amazon's solution rather than rolling my own.
