I'm using HAProxy in production to balance queries across a set of server instances that, by our own design, can only process one query at a time. Knowing that, I set the maxconn parameter to 1 on each server line of the backend section in haproxy.cfg, but the servers still receive concurrent queries: our server logs show messages like "query rejected, already processing", and the HAProxy log shows queries returned to the client with a 502 HTTP status code.
This is the HAProxy configuration:
# this config needs haproxy-1.1.28 or haproxy-1.2.1
global
    log 127.0.0.1 local6 debug
    user haproxy
    group haproxy
    daemon
    stats socket /tmp/haproxy

defaults
    log global
    mode http
    balance roundrobin
    option httplog
    retries 10
    option redispatch

frontend custom 0.0.0.0:50000
    backlog 2000
    acl p5queue avg_queue(custombe) gt 200
    tcp-request content reject if p5queue
    default_backend custombe
    timeout client 15000

backend custombe
    retries 10
    option redispatch
    timeout queue 600000
    timeout connect 1000
    timeout server 120000
    server custom-server-1 0.0.0.0:50001 weight 1 maxconn 1 check inter 2000 rise 2 fall 1
    server custom-server-2 0.0.0.0:50002 weight 1 maxconn 1 check inter 2000 rise 2 fall 1
    server custom-server-3 0.0.0.0:50003 weight 1 maxconn 1 check inter 2000 rise 2 fall 1
    server custom-server-4 0.0.0.0:50004 weight 1 maxconn 1 check inter 2000 rise 2 fall 1
    server custom-server-5 0.0.0.0:50005 weight 1 maxconn 1 check inter 2000 rise 2 fall 1
    server custom-server-6 0.0.0.0:50006 weight 1 maxconn 1 check inter 2000 rise 2 fall 1
    server custom-server-7 0.0.0.0:50007 weight 1 maxconn 1 check inter 2000 rise 2 fall 1
    server custom-server-8 0.0.0.0:50008 weight 1 maxconn 1 check inter 2000 rise 2 fall 1
    server custom-server-9 0.0.0.0:50009 weight 1 maxconn 1 check inter 2000 rise 2 fall 1
    server custom-server-10 0.0.0.0:50010 weight 1 maxconn 1 check inter 2000 rise 2 fall 1
Does anyone know why the server instances still receive concurrent queries when the maxconn parameter is set to one? I've read the StackOverflow questions where maxconn is explained and how it works in the different sections, which is why I'm asking now: it shouldn't behave like this.
Best Answer
Make certain that there is no other process that has open connections to your service.
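One way to check this, sketched below assuming a Linux host with the iproute2 tools (port 50001 is taken from the config above):

```shell
# List every process holding an established connection to the first
# backend instance (port 50001 from the config above).
ss -tnp state established '( dport = :50001 )'

# Show any running haproxy processes; more than one right after a
# reload means an old process is still draining its connections.
ps ax | grep '[h]aproxy' || echo "no haproxy processes running"
```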
Take special note that during a haproxy reload, there is a time frame in which two haproxy processes will use your resources, each enforcing connection limits on its own. The finishing process will not terminate until all of its queues are drained. Therefore, it is quite possible that the new process and its clients contend for seats. The best workaround I can think of is to avoid reloading haproxy while a finishing process is still running.
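If a bounded overlap is acceptable, one mitigation (assuming HAProxy 1.8 or newer, which is far more recent than the version the config's comment targets) is the global hard-stop-after directive, which forces the finishing process to exit after a fixed delay:

```
global
    # Force the old (finishing) process to exit at most 30s after a
    # reload, bounding the window in which two processes compete for
    # the maxconn 1 slots. 30s is an example value, not a recommendation.
    hard-stop-after 30s
```

Note that any requests still queued or in flight in the old process when the timer fires will be aborted, so pick a delay longer than your slowest expected query.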