Nginx – pass-through load balancer? How is it different from a proxy load balancer?

Tags: google-cloud-platform, haproxy, load-balancing, nginx, proxy

The Google Cloud Network Load Balancer is a pass-through load balancer, not a proxy load balancer ( https://cloud.google.com/compute/docs/load-balancing/network/ ).

I cannot find many resources on pass-through LBs in general. Both HAProxy and Nginx seem to be proxy LBs. I'm guessing that a pass-through LB would redirect clients directly to the servers. In what scenarios would that be beneficial?

Are there any other types of load balancers besides pass-through and proxy?

Best Answer

It's hard to find resources on pass-through load balancing because everyone came up with a different name for it: pass-through, direct server return (DSR), direct routing, and so on.

We'll call it pass-through here.

Let me try to explain how it works: the load balancer forwards the client's packets to a backend without rewriting the source or destination IP addresses. Each backend is set up to accept traffic addressed to the load balancer's IP, and it replies directly to the client using that IP as the source. A proxy load balancer, by contrast, terminates the client's connection and opens a new connection to the backend, so the backend sees the proxy's IP as the source of the traffic.
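
To make that contrast concrete, here is a minimal sketch of a proxy-style TCP load balancer in Python. The backend addresses, the listening port, and the round-robin policy are made up for illustration, not taken from any particular product. Because the proxy opens its own connection to each backend, the backend only ever sees the proxy's address; a pass-through load balancer avoids exactly this.

```python
import itertools
import socket
import threading

# Hypothetical backends; in a real deployment these would be your servers.
BACKENDS = [("10.0.0.2", 8080), ("10.0.0.3", 8080)]
LISTEN_ADDR = ("0.0.0.0", 8000)

_rr = itertools.cycle(BACKENDS)  # naive round-robin backend selection


def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()


def handle(client: socket.socket) -> None:
    backend_addr = next(_rr)
    # The proxy opens a brand-new connection, so the backend sees the
    # proxy's IP as the source -- the key difference from pass-through/DSR.
    backend = socket.create_connection(backend_addr)
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()


def main() -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(LISTEN_ADDR)
    srv.listen()
    while True:
        client, addr = srv.accept()
        print("client connected from", addr)  # real client IP, visible only at the proxy
        threading.Thread(target=handle, args=(client,), daemon=True).start()


if __name__ == "__main__":
    main()
```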

Regarding other load balancer types, there can't be a definitive list; here are a few examples:

  • NAT-based load balancing, where the load balancer rewrites the destination (and often the source) IP of each packet before forwarding it to a backend.

  • Proxy load balancing at layer 4 (a TCP proxy) or layer 7 (an HTTP(S) proxy); this is what HAProxy, Nginx and Google's proxy-based load balancers do.

  • DNS-based load balancing, where a hostname resolves to several addresses and each client picks one of them (a short sketch follows this list).
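
As a quick illustration of the DNS-based variant, the snippet below resolves a hostname and prints every address it maps to; "example.com" is just a placeholder, so substitute a name that actually publishes multiple records to see the effect.

```python
import socket

def resolve_all(host: str, port: int) -> list[str]:
    """Return every IP a hostname resolves to (DNS round-robin in a nutshell)."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    # Each client ends up using one of these addresses; that choice is the load balancing.
    for ip in resolve_all("example.com", 80):
        print(ip)
```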

As for the advantages of pass-through over other methods:

  • Some applications won't work, or need to be adapted, if the addresses on the IP packets are changed, for example the SIP protocol. See Wikipedia for more on applications that don't play well with NAT: https://en.wikipedia.org/wiki/Network_address_translation#NAT_and_TCP/UDP.

    Here the advantage of pass-through is that it does not change the source and destination IPs.

    Note that there is a trick that lets a load balancer working at a higher layer keep the IPs: the load balancer spoofs the IP of the client when connecting to the backends. As of this writing, no load balancing product available in Compute Engine uses this method.

  • If you need more control over the TCP connection from the client, for example to tune the TCP parameters, pass-through (or NAT) has an advantage over a TCP (or higher-layer) proxy: the backend terminates the client's connection itself, so its own TCP settings apply end to end (see the sketch after this list).
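
A minimal sketch of what that control looks like, assuming a Linux backend (the ports and option values are illustrative, not prescriptive): because a pass-through load balancer does not terminate the connection, socket options set on the backend apply directly to the client-facing TCP connection. Behind a TCP proxy, the same options would only shape the proxy-to-backend leg.

```python
import socket

# Illustrative TCP tuning on the backend itself. With a pass-through LB the
# accepted socket IS the client's connection, so these options take effect
# end to end; behind a TCP proxy they only affect the proxy<->backend leg.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen()

conn, addr = srv.accept()
conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)        # disable Nagle
conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)        # enable keepalives
if hasattr(socket, "TCP_KEEPIDLE"):                                # Linux-only knob
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)  # idle seconds before probes
print("tuned connection from", addr)
conn.close()
srv.close()
```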
