Getting Squid and TPROXY with IPv6 working on CentOS 7

Tags: centos-7, ipv6, routing, squid, transparent-proxy

I'm having trouble getting TPROXY working with Squid and IPv6 on a CentOS 7 server. I was previously using a generic intercept setup with NAT, but it was limited to IPv4 only. I'm now expanding the setup to include IPv6 with TPROXY.

I've been using the official Squid wiki article on the subject to configure everything:

http://wiki.squid-cache.org/Features/Tproxy4

Thus far the TPROXY config appears to be working for IPv4 with no issues. With IPv6, however, connections are timing out and not working properly. I'll break down the setup for clarity.

Note: all firewall and routing rules are exactly the same as for IPv4; the only differences are the use of inet6 and ip6tables when configuring the IPv6-based rules in the examples below.

  • OS and Kernel: CentOS 7 (3.10.0-229.14.1.el7.x86_64)
  • All packages are up to date according to yum
  • Squid Version: 3.3.8 (Also tried 3.5.9)
  • Firewall: iptables/ip6tables 1.4.21
  • libcap-2.22-8.el7.x86_64

IPv6 connectivity is currently provided by a 6in4 tunnel via Hurricane Electric; this is configured on the DD-WRT router, and the assigned prefix is delegated to clients via radvd. The Squid box has several static IPv6 addresses configured.

The Squid box sits within the main LAN that it serves. Clients whose port 80 traffic is intercepted (mainly wireless clients) are pushed to the Squid box by my DD-WRT router using firewall and routing rules adapted from the Policy Routing wiki article and the DD-WRT wiki.

This appears to be working OK in terms of passing the traffic to the Squid box. One additional rule I had to add on the DD-WRT router was an exception for the configured outgoing IPv4 and IPv6 addresses of the Squid box; otherwise I get a crazy loop and traffic breaks for all clients, including the main LAN clients that use Squid on port 3128.

ip6tables -t mangle -I PREROUTING -p tcp --dport 80 -s "$OUTGOING_PROXY_IPV6" -j ACCEPT
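The question doesn't reproduce the DD-WRT rules themselves, but following the Policy Routing approach the router side typically looks something like the sketch below. The interface name br0, the mark value 3, and the $PROXY_IPV6 variable are assumptions, not taken from the original setup:

```shell
# Hypothetical DD-WRT side, per the Policy Routing approach.
# br0, mark 3, and $PROXY_IPV6 are placeholders.

# Mark port-80 traffic from LAN clients (the exception rule for the
# proxy's own outgoing address must come before this one)
ip6tables -t mangle -A PREROUTING -i br0 -p tcp --dport 80 \
    -j MARK --set-mark 3

# Send marked traffic to the Squid box via a dedicated routing table
ip -6 rule add fwmark 3 table 2
ip -6 route add default via "$PROXY_IPV6" dev br0 table 2
```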

On the Squid box I then use the following routing rules and a DIVERT chain to handle the traffic. I needed the extra flush/delete rules to avoid errors about the chain already existing during testing. My firewall is CSF; I added the following to csfpre.sh:

ip -f inet6 route flush table 100
ip -f inet6 rule del fwmark 1 lookup 100

ip -f inet6 rule add fwmark 1 lookup 100
ip -f inet6 route add local default dev eno1 table 100

ip6tables -t mangle -F
ip6tables -t mangle -X
ip6tables -t mangle -N DIVERT

ip6tables -t mangle -A DIVERT -j MARK --set-mark 1
ip6tables -t mangle -A DIVERT -j ACCEPT
ip6tables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
ip6tables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129

squid.conf is configured for two ports:

http_port 3128
http_port 3129 tproxy

In addition, I am also using Privoxy and had to add no-tproxy to my cache_peer line; otherwise no traffic could be forwarded at all, for either protocol.

cache_peer localhost parent 8118 7 no-tproxy no-query no-digest

I am not using any tcp_outgoing_address directives because of Privoxy; instead I control the outbound addresses through CentOS and the address bind order.
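For reference, the directive-based alternative (which the Squid documentation supports via ACL-selected outgoing addresses) would look roughly like this. The addresses are placeholders, and this fragment is a sketch of the general pattern rather than a drop-in config:

```
# Hypothetical squid.conf fragment; 192.0.2.10 and 2001:db8::10
# are placeholder addresses.
acl to_ipv6 dst ipv6
tcp_outgoing_address 2001:db8::10 to_ipv6
tcp_outgoing_address 192.0.2.10 !to_ipv6
```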

sysctl values:

net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eno1.rp_filter = 0

I am not sure whether the rp_filter modifications are needed: the setup works on IPv4 with or without them, and IPv6 behaves the same either way.
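One thing worth noting is that the sysctl list above only covers IPv4. net.ipv4.ip_forward has no effect on IPv6, which has its own forwarding toggle; if it isn't already enabled elsewhere, that would be worth verifying on a box routing intercepted IPv6 traffic:

```shell
# IPv6 has a separate forwarding sysctl; net.ipv4.ip_forward does
# not cover it. Check the current value first:
sysctl net.ipv6.conf.all.forwarding

# Enable it if it is 0 (persist via /etc/sysctl.conf or sysctl.d)
sysctl -w net.ipv6.conf.all.forwarding=1
```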

SELinux

SELinux is enabled on the Squid box, but policies have been configured to allow the TPROXY setup, so it's not being blocked (the fact that IPv4 works shows this anyway). I have checked with grep squid /var/log/audit/audit.log | audit2allow -a and get <no matches>

#============= squid_t ==============

#!!!! This avc is allowed in the current policy
allow squid_t self:capability net_admin;

#!!!! This avc is allowed in the current policy
allow squid_t self:capability2 block_suspend;

#!!!! This avc is allowed in the current policy
allow squid_t unreserved_port_t:tcp_socket name_connect;

I have also set the following booleans:

setsebool squid_connect_any 1
setsebool squid_use_tproxy 1
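As a sanity check, the booleans can be confirmed (and made persistent, since setsebool without -P does not survive a reboot):

```shell
# Confirm both booleans are currently on
getsebool squid_connect_any squid_use_tproxy

# Persist them across reboots (plain setsebool is temporary)
setsebool -P squid_connect_any 1
setsebool -P squid_use_tproxy 1
```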

Broken IPv6 connectivity

Ultimately, IPv6 connectivity is completely broken for TPROXY clients (LAN clients on port 3128, which use a WPAD/PAC file, have fully working IPv6). While the traffic appears to be routed to the Squid box in some way, no IPv6 requests via TPROXY appear in the access.log. All IPv6 requests, whether to literal IPv6 addresses or to DNS hostnames, time out. I can access internal IPv6 clients, but that traffic isn't logged either.

I did some testing using test-ipv6.com and found that it detected my outgoing Squid IPv6 address, but the IPv6 tests either showed as bad/slow or timed out. I temporarily enabled the Via header and found the Squid HTTP header was visible, so the traffic at least reaches the Squid box but is not being routed properly once it's there.

I've been trying to get this working for some time and cannot find the problem; I've even asked on the Squid mailing list, but have been unable to diagnose or solve the actual issue. Based on my testing, I'm fairly sure the problem lies on the Squid box, in one of the following areas:

  • Routing
  • Kernel
  • Firewall

Any ideas and additional steps that I can take to get TPROXY and IPv6 working would be greatly appreciated!

Additional information

ip6tables rules:

Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DIVERT     tcp      ::/0                 ::/0                 socket
TPROXY     tcp      ::/0                 ::/0                 tcp dpt:80 TPROXY redirect :::3129 mark 0x1/0x1

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination

Chain DIVERT (1 references)
target     prot opt source               destination
MARK       all      ::/0                 ::/0                 MARK set 0x1
ACCEPT     all      ::/0                 ::/0

IPv6 routing table (prefix obscured)

unreachable ::/96 dev lo  metric 1024  error -101
unreachable ::ffff:0.0.0.0/96 dev lo  metric 1024  error -101
2001:470:xxxx:xxx::5 dev eno1  metric 0
    cache  mtu 1480
2001:470:xxxx:xxx:b451:9577:fb7d:6f2d dev eno1  metric 0
    cache
2001:470:xxxx:xxx::/64 dev eno1  proto kernel  metric 256
unreachable 2002:a00::/24 dev lo  metric 1024  error -101
unreachable 2002:7f00::/24 dev lo  metric 1024  error -101
unreachable 2002:a9fe::/32 dev lo  metric 1024  error -101
unreachable 2002:ac10::/28 dev lo  metric 1024  error -101
unreachable 2002:c0a8::/32 dev lo  metric 1024  error -101
unreachable 2002:e000::/19 dev lo  metric 1024  error -101
unreachable 3ffe:ffff::/32 dev lo  metric 1024  error -101
fe80::/64 dev eno1  proto kernel  metric 256
default via 2001:470:xxxx:xxxx::1 dev eno1  metric 1

Best Answer

I realize this is old, and I don't have a full answer myself, but I'm doing something very similar to you and have nearly identical symptoms.

First: test-ipv6.com appears to have been updated fairly recently to handle a new type of error (it was broken earlier this year). Give it another test.

In my case, it sent me to a URL describing a problem I seem to have: the Path MTU Detection FAQ. They provide a URL you can use with cURL to run a PMTUD test, after which you can inspect your traffic with tcpdump or Wireshark.

When traffic is TPROXY'd through Squid, IPv6 Path MTU Discovery does not fully work on the host. (I'm still working out why it fails on my host, so I have no definitive solution.)
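One way to see whether "Packet Too Big" messages are arriving at all is to watch ICMPv6 on the Squid box while an intercepted client fetches a large page. This is a generic capture sketch; eno1 matches the interface used above, but the URL is a placeholder:

```shell
# On the Squid box: watch all ICMPv6 traffic; look for
# "packet too big" lines in the verbose output
tcpdump -i eno1 -n -v icmp6

# Meanwhile, from an intercepted client (placeholder URL),
# fetch something large enough to exceed the tunnel MTU:
curl -6 -v -o /dev/null http://example.com/
```

If the Packet Too Big messages show up on the router but never reach the Squid box (or reach it and are ignored for the TPROXY'd sockets), that narrows the problem down considerably.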

A quick description:

  • ICMP is extremely important in IPv6. A lot of people want to block ICMP, and end up causing more harm than good.
  • If a packet is too large for your connection, it is dropped, and an ICMPv6 type 2 ("Packet Too Big") message is supposed to be sent back to the originating server, asking it to reduce the packet size and resend.
  • If the ICMP message doesn't make it to the server, the server keeps resending the large packet -- which is immediately dropped because it's too large.
  • This has been described as a "black hole" because the packets never reach their destination.

So you may want to make sure your firewall rules accept ICMPv6 messages (see RFC 4890 for the list of essential ICMPv6 types).
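In ip6tables terms, the messages RFC 4890 classifies as "must not be dropped" can be allowed along these lines (a minimal sketch for the INPUT chain; adapt to your own chain layout and default policies):

```shell
# Allow the ICMPv6 messages RFC 4890 says must not be dropped;
# type 2 (Packet Too Big) in particular is what PMTUD depends on.
ip6tables -A INPUT -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type packet-too-big          -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type time-exceeded           -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type parameter-problem       -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-request            -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-reply              -j ACCEPT
```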

In my case, I'm already allowing ICMP messages and still have the problem. I'm not quite ready to throw in the towel and simply reduce my network's MTU (the nuclear option).
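Short of lowering the interface MTU, clamping TCP MSS is a less drastic middle ground: it prevents oversized TCP segments from being generated in the first place, so TCP no longer depends on PMTUD working. This only helps TCP (which is what Squid handles anyway), and where exactly to hook it depends on your chain layout:

```shell
# Clamp the TCP MSS on SYN packets to the discovered path MTU, so
# peers never send segments larger than the 6in4 tunnel can carry
# (a 6in4 tunnel MTU is typically 1480: 1500 minus the 20-byte
# IPv4 header, matching the "mtu 1480" route cache entry above).
ip6tables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --clamp-mss-to-pmtu
```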
