Why are you limiting the bandwidth with "rate=33M"?
Why are you using the synchronous protocol "C"?
I usually use protocol "A" and an 8 MB buffer. For a gigabit line with heavy traffic I limit it to "rate=90M".
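For reference, this is roughly what those settings look like, assuming DRBD 8.3-style syntax; the resource name "r0" is only a placeholder and the exact values should be tuned for your own link:

    # Sketch only, DRBD 8.3-style syntax; adjust to your resource and link speed
    resource r0 {
        protocol A;          # asynchronous replication instead of synchronous "C"
        syncer {
            rate 90M;        # resync rate, leaving headroom on a 1 Gbit/s line
        }
        net {
            sndbuf-size 8M;  # 8 MB send buffer
        }
    }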
The issue is caused by your extremely large timeout. With a 24-hour timeout and a limit of 1000 concurrent connections, you can clearly expect to fill this up with clients disconnecting the dirty way. Please use a more reasonable timeout, from minutes to hours at most; it really makes no sense to use a 1-day timeout on the internet. As DukeLion said, the system is waiting for haproxy to close the connection, because haproxy did not receive the close from the client.
Since haproxy works in tunnel mode for TCP and WebSocket, it follows the usual 4-way close:
- receive a close on side A
- forward the close on side B
- receive the close on side B
- forward the close on side A
In your case, I suppose that side A was the server and side B the client. So nginx closed after some time, the socket on the nginx side went to CLOSE_WAIT, haproxy forwarded the close to the client, the client-side socket went to FIN_WAIT1, the client ACKed, passing that socket to FIN_WAIT2, and then nothing happened because the client had disappeared, which is something very common on the net. And your timeout means you want things to remain this way for 24 hours.
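If you want to confirm this on the haproxy machine, listing the sockets by TCP state should show them piling up; a quick check could look like this:

    # List the half-closed sockets held open by the long timeout
    ss -tan state fin-wait-2   # client-facing sockets: haproxy closed, still waiting for the client's FIN
    ss -tan state close-wait   # nginx-facing sockets: nginx already sent its FIN, haproxy has not closed yet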
After 24 hours, your sessions will start timing out on the client side, so haproxy will kill them and forward the close to the nginx side, getting rid of them there too. But clearly you don't want this to happen: WebSocket was designed so that idle connections could be reopened transparently, so there is no reason to keep an idle connection open for 24 hours. No firewall will keep it along the way!
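As a rough illustration only (not a tuned recommendation for your workload), the relevant part of haproxy.cfg could look like this; on haproxy 1.5 and later, "timeout tunnel" applies once the connection is an established TCP/WebSocket tunnel, so the client/server timeouts can stay short:

    # Sketch only; pick values matching how long your clients may really stay idle
    defaults
        mode tcp
        timeout connect  5s
        timeout client   30s    # instead of 24 hours
        timeout server   30s
        timeout tunnel   1h     # for established tunnels (haproxy 1.5+)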
Best Answer
This is commonly known as the c10k problem. That page has lots of good info on the problems you will run into.