Linux – HA Active/Active Squid configuration

Tags: high-availability, linux, linux-networking, proxy, squid

Hi all, I would like to ask if anyone has experience configuring Squid within an active/active cluster.

First, here is the current configuration, which I would like to replicate to another host, sharing a common IP address between the two.

To start with, I have two Juniper SG 140s in an NSRP configuration. I have configured a policy route that redirects all HTTP traffic to my Squid box (both firewalls are connected to two stacked switches via a redundant interface, and the policy simply adds a next hop). Squid then intercepts the traffic transparently via TPROXY (configured in both Squid and iptables) and sends the requests on to the real web servers, making a truly transparent caching proxy that neither the clients nor the servers are aware of.
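For anyone setting up the same kind of interception, the sketch below shows the usual TPROXY wiring between Squid and iptables, based on the standard Squid TPROXY documentation. The port number, interface, and mark value are illustrative assumptions, not my exact production settings:

    # squid.conf: listen on an intercept port with TPROXY support
    http_port 3129 tproxy

    # iptables: mark traffic that already belongs to a local socket,
    # and divert new port-80 connections to Squid's TPROXY port
    iptables -t mangle -N DIVERT
    iptables -t mangle -A DIVERT -j MARK --set-mark 1
    iptables -t mangle -A DIVERT -j ACCEPT
    iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
    iptables -t mangle -A PREROUTING -p tcp --dport 80 \
        -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

    # policy routing so marked packets are delivered locally
    ip rule add fwmark 1 lookup 100
    ip route add local 0.0.0.0/0 dev lo table 100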

This works quite well with only one box, but I would like to replicate it onto two boxes and keep a shared IP. Please note that I am currently using a bonded interface (4 Ethernet ports), as this should be an HA solution.
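For reference, the bonded interface on each box looks roughly like the following. This is a minimal sketch assuming Debian-style networking, four slave NICs named eth0-eth3, and 802.3ad (LACP) mode; the address is a placeholder:

    auto bond0
    iface bond0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1 eth2 eth3
        bond-mode 802.3ad
        bond-miimon 100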

So any suggestions on how to achieve this would be appreciated.
Thanks to everyone in advance.

Best Answer

OK, so in the end I did it! It was not an easy task, and I went through a nightmare of documentation and a serious amount of caffeine and nicotine :)

Keepalived with VRRP was the ultimate solution. I couldn't achieve full load balancing through a single IP address, so instead I used two virtual IPs shared between the two boxes. On the Juniper side, I created two routing policies, split the web servers' IP addresses in half, and used one of the VIPs as the next hop for each half. I then configured Squid and iptables to intercept the traffic on both VIPs through the TPROXY directive, and everything seems to work fine now.
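Until I post the real files, here is a minimal keepalived.conf sketch of the idea: each box is MASTER for one VIP and BACKUP for the other, so both VIPs stay up and either box takes over both if its peer dies. The interface name, router IDs, password, and VIP addresses are placeholders, not my actual values:

    # box A (box B uses the same config with the states/priorities swapped)
    vrrp_instance VIP_1 {
        state MASTER
        interface bond0
        virtual_router_id 51
        priority 150
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme
        }
        virtual_ipaddress {
            192.0.2.21/24
        }
    }

    vrrp_instance VIP_2 {
        state BACKUP
        interface bond0
        virtual_router_id 52
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass changeme
        }
        virtual_ipaddress {
            192.0.2.22/24
        }
    }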

I will keep this question updated and post the configuration files, as others may want to achieve the same thing; no need to re-invent the wheel.
