Firewall – Why Place Load-Balancer Behind Firewall

firewall, load balancing

I'm considering purchasing an F5 load-balancing device that will proxy inbound HTTP connections to one of five web servers on my internal network. My assumption was that the F5's external interface would face the Internet and its internal interface would face the internal network where the web servers live. Yet several of the illustrations I'm seeing online place the F5 device behind the firewall. Doesn't this arrangement push extra traffic through the firewall and also make the firewall a single point of failure?

What's the rationale behind this configuration?

Best Answer

I think the classical:

Firewall <-> Load Balancer <-> Web Servers <-> ...

is mostly left over from the era of expensive hardware-based firewalls. I've implemented such schemes, so they do work, but they make the whole setup more complicated. To eliminate single points of failure (and, for example, to allow upgrades of the firewall) you need to mesh traffic between two firewalls and two load balancers, using either layer 2 meshes or proper layer 3 routing.
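For reference, a fully redundant version of that classical chain ends up looking roughly like the sketch below, with each firewall cross-connected to each load balancer (and the pairs keeping state in sync, typically via VRRP/CARP or a vendor HA protocol):

                +-> Firewall 1 -+     +-> Load Balancer 1 -+
    Internet -->|               |  X  |                    |--> Web Servers
                +-> Firewall 2 -+     +-> Load Balancer 2 -+

The "X" is the cross-mesh (or the layer 3 routing) that lets either firewall reach either load balancer, which is exactly the extra complexity mentioned above.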

On public clouds one tends to implement something like:

Load Balancer <-> [ (firewall + web) ] <-layer 2 domain or ipsec/ssl-> [ (firewall + app/db) ]

which is frankly good enough.

  1. If you're using the load balancer to terminate the SSL connection, a firewall placed in front of the load balancer only does very basic layer 3 filtering, since all it sees is encrypted traffic (see the sketch after this list).
  2. Your F5 already comes with a firewall, which is only as good as the filtering rules you put in place.
  3. The defense-in-depth argument is IMHO weak when it comes to layer 3. The attack vectors for web applications are SQL injections, not tripping the firewall to gain root access.
  4. Even the cores of puny web servers are usually good enough to handle filtering from TCP on up.
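To make point 1 concrete, here is a minimal Python sketch of a TLS-terminating proxy, assuming a hypothetical cert.pem/key.pem pair and a backend web server at 10.0.0.10:80. Everything upstream of this socket only ever sees ciphertext, so it can filter on nothing more than addresses and ports; layer 7 decisions (here, blocking /admin) are only possible at or behind the point of termination:

    import socket
    import ssl

    BACKEND = ("10.0.0.10", 80)   # assumed internal web server
    LISTEN = ("0.0.0.0", 443)

    # Terminate TLS here; an upstream firewall only sees the encrypted stream.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("cert.pem", "key.pem")   # assumed certificate files

    with socket.create_server(LISTEN) as listener:
        with context.wrap_socket(listener, server_side=True) as tls_listener:
            while True:
                try:
                    client, addr = tls_listener.accept()   # TLS handshake happens here
                except OSError:
                    continue                               # failed/probing handshake, ignore
                with client:
                    request = client.recv(65536)           # plaintext HTTP after decryption
                    if not request:
                        continue
                    # Layer 7 filtering is only possible from this point on.
                    if b"/admin" in request.split(b"\r\n", 1)[0]:
                        client.sendall(b"HTTP/1.1 403 Forbidden\r\n\r\n")
                        continue
                    # Hand the decrypted request to a backend web server (naive single
                    # read/write, just to illustrate the traffic flow).
                    with socket.create_connection(BACKEND) as upstream:
                        upstream.sendall(request)
                        client.sendall(upstream.recv(65536))

None of this changes if the terminating proxy is an F5 rather than a few lines of Python; a firewall in front of it sits in the same position either way.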

Happy to see some discussion on the topic.
