Iptables port forwarding from load balancer to internal web server

iptables port-forwarding

I'm having trouble forwarding port 8000 from my load balancer (the only entry point with an external IP address) to a web server (also on port 8000) that has an internal IP.

So I need XX.XX.XX.XX:8000 -> YY.YY.YY.YY:8000, where XX.XX.XX.XX is the external IP and YY.YY.YY.YY is the internal one.

When logged in to XX.XX.XX.XX via SSH, telnet YY.YY.YY.YY 8000 connects successfully. Here are the iptables commands I ran (on XX.XX.XX.XX):

iptables -F
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

iptables -A PREROUTING -t nat -p tcp --dport 8000 -j DNAT --to YY.YY.YY.YY:8000
iptables -A FORWARD -p tcp -d YY.YY.YY.YY --dport 8000 -j ACCEPT

and here is the iptables -L output (irdmi is the /etc/services name for port 8000):

Chain INPUT (policy ACCEPT 13486 packets, 6361K bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination
1      122  6968 ACCEPT     tcp  --  any    any     anywhere             YY.YY.YY.YY         tcp dpt:irdmi

Chain OUTPUT (policy ACCEPT 14248 packets, 8532K bytes)
num   pkts bytes target     prot opt in     out     source               destination

Chain acctboth (0 references)
num   pkts bytes target     prot opt in     out     source               destination

So the rule has been added and some packets have matched it, but I still get a connection timeout in the browser when connecting to XX.XX.XX.XX:8000.

Can someone point out the error? Thanks in advance!

PS. route -n output from the load balancer, where I put these rules:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
173.199.160.147 0.0.0.0         255.255.255.255 UH    0      0        0 eth1
173.199.160.146 0.0.0.0         255.255.255.255 UH    0      0        0 eth1
173.199.160.145 0.0.0.0         255.255.255.255 UH    0      0        0 eth1
173.199.160.144 0.0.0.0         255.255.255.255 UH    0      0        0 eth1
96.30.32.0      0.0.0.0         255.255.255.192 U     0      0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
172.16.0.0      0.0.0.0         255.252.0.0     U     0      0        0 eth0
0.0.0.0         96.30.32.1      0.0.0.0         UG    0      0        0 eth1

YY.YY.YY.YY is in reality 172.17.4.10.

ifconfig output for the only two interfaces that are probably relevant here:

eth0      Link encap:Ethernet  HWaddr 00:25:90:53:1F:A8
          inet addr:172.17.4.163  Bcast:172.19.255.255  Mask:255.252.0.0
          inet6 addr: fe80::225:90ff:fe53:1fa8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:111646 errors:0 dropped:0 overruns:0 frame:0
          TX packets:79057 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:63144265 (60.2 MiB)  TX bytes:11600837 (11.0 MiB)
          Interrupt:217 Memory:fb900000-fb920000

eth1      Link encap:Ethernet  HWaddr 00:25:90:53:1F:A9
          inet addr:96.30.32.10  Bcast:96.30.32.63  Mask:255.255.255.192
          inet6 addr: fe80::225:90ff:fe53:1fa9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:116136 errors:0 dropped:0 overruns:0 frame:0
          TX packets:101365 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:16607792 (15.8 MiB)  TX bytes:89471131 (85.3 MiB)
          Interrupt:233 Memory:fba00000-fba20000

IP forwarding is enabled:

root@load1 [~]# cat /proc/sys/net/ipv4/conf/eth0/forwarding
1
root@load1 [~]# cat /proc/sys/net/ipv4/conf/eth1/forwarding
1
root@load1 [~]# cat /proc/sys/net/ipv4/ip_forward
1

Best Answer

Since routing is apparently enabled, another possible (and quite common) misconfiguration is that the default route on 172.17.4.10 does not go through 172.17.4.163. In that case an incoming packet is correctly DNATed to 172.17.4.10, but the response is routed out through a different gateway and therefore reaches the client with the "wrong" source IP address, so the TCP handshake never completes and the browser times out.
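If that is indeed the problem, there are two common ways to address it. The commands below are only a sketch, assuming the topology shown above (172.17.4.163 on the load balancer's eth0, 172.17.4.10 on the backend); adjust addresses to your setup.

```shell
# Option 1: on the backend (172.17.4.10), route replies back through
# the load balancer's internal address. This preserves real client IPs
# in the web server's logs.
ip route replace default via 172.17.4.163

# Option 2: on the load balancer, additionally SNAT the forwarded
# traffic so the backend always replies to the load balancer itself,
# regardless of its default route.
iptables -t nat -A POSTROUTING -p tcp -d 172.17.4.10 --dport 8000 \
         -j SNAT --to-source 172.17.4.163
```

The downside of the SNAT approach is that the web server then sees every connection as coming from 172.17.4.163 instead of the real client address.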

In general, a good way to see what is actually going on is to run tcpdump on the load balancer:

tcpdump -i eth0 -v -n host 172.17.4.10

and then try to connect. The output should be insightful.
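If the asymmetric-routing theory above is right, capturing on both interfaces makes the symptom easy to spot. A sketch of what to run and what to look for:

```shell
# Internal side: you should see the DNATed SYN go to 172.17.4.10 and,
# if the backend answers at all, its SYN-ACK coming back in.
tcpdump -i eth0 -n 'host 172.17.4.10 and tcp port 8000'

# External side: if the backend's SYN-ACK never reappears here with
# the load balancer's external address as its source, the reply is
# leaving via another route and the client's handshake times out.
tcpdump -i eth1 -n 'tcp port 8000'
```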