Thank you all for taking the time to answer. What I'm trying to do is proxy the outgoing/originated traffic of the 2nd container (NOTE: I'm NOT trying to proxy incoming traffic, so I cannot use Apache's mod_proxy or Nginx's proxy_pass; those modules work on incoming traffic). The 1st container runs a proxy service on port 8080.
As Thierno suggested, I can use the http_proxy and https_proxy environment variables to proxy outgoing traffic, but unfortunately not every application/service running in the operating system respects these variables; some applications deliberately skip the proxy settings. That is why I wanted to use iptables to enforce the traffic rules, so that no application/service can bypass the proxy.
The mistake in my previous setup in the question was that I was trying to route incoming traffic on port 80 to port 8080 of the proxy server. Since the 1st container doesn't receive any incoming traffic, that can't work, and PREROUTING/POSTROUTING is logically the wrong place for what I was trying to achieve. To redirect locally originated/outgoing traffic, we need the OUTPUT chain of iptables.
My Solution:
I used RedSocks in combination with iptables to force all outgoing traffic from the server through the proxy. Here is the iptables configuration I used:
# Create new chain for RedSocks
root# iptables -t nat -N REDSOCKS
# Ignore LANs and some other reserved addresses
root# iptables -t nat -A REDSOCKS -d 0.0.0.0/8 -j RETURN
root# iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
root# iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
root# iptables -t nat -A REDSOCKS -d 169.254.0.0/16 -j RETURN
root# iptables -t nat -A REDSOCKS -d 172.16.0.0/12 -j RETURN
root# iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
root# iptables -t nat -A REDSOCKS -d 224.0.0.0/4 -j RETURN
root# iptables -t nat -A REDSOCKS -d 240.0.0.0/4 -j RETURN
# Redirect all HTTP traffic to the local redsocks port
root# iptables -t nat -A REDSOCKS -p tcp --dport 80 -j REDIRECT --to-ports 12345
# For HTTPS traffic, add the same rule with --dport 443
# Send all outgoing TCP traffic on eth0 through the REDSOCKS chain
root# iptables -t nat -A OUTPUT -p tcp -o eth0 -j REDSOCKS
Now configure redsocks to listen on local port 12345 and forward the traffic it receives to the proxy server's IP and port. To do this, edit redsocks.conf like this:
redsocks {
    local_ip = 127.0.0.1;
    local_port = 12345;
    ip = 172.17.0.4;
    port = 8080;
    type = http-relay;
}
Just save the conf and restart the redsocks service. Now all outgoing traffic originating from the container is forced through the proxy. (NOTE: I've used iptables-persistent to persist the rules across server reboots.) I actually implemented this for both HTTP and HTTPS traffic by adding another line to the iptables configuration. Although it's not a transparent proxy, it does the job for me.
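One caveat worth noting: redsocks' http-relay type only rewrites plain HTTP requests. For HTTPS, the usual approach (shown in the redsocks sample config) is a second redsocks section of type http-connect, which tunnels through the proxy's CONNECT method and listens on a separate local port. A sketch, assuming the same proxy at 172.17.0.4:8080; the local port 12346 is my own choice, not from the answer above:

```
redsocks {
    local_ip = 127.0.0.1;
    local_port = 12346;
    ip = 172.17.0.4;
    port = 8080;
    type = http-connect;
}
```

The matching iptables rule for HTTPS would then redirect --dport 443 to --to-ports 12346 rather than 12345.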
If anyone has any other alternative solutions, please suggest them.
Two things to bear in mind when working with Docker's firewall rules:
- To avoid your rules being clobbered by Docker, use the DOCKER-USER chain.
- Docker does the port-mapping in the PREROUTING chain of the nat table. This happens before the filter rules, so --dest and --dport will see the internal IP and port of the container. To match against the original destination, you can use -m conntrack --ctorigdstport.
For example:
iptables -A DOCKER-USER -i eth0 -s 8.8.8.8 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j ACCEPT
iptables -A DOCKER-USER -i eth0 -s 4.4.4.4 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j DROP
NOTE: Without --ctdir ORIGINAL, this would also match the reply packets coming back for a connection from the container to port 3306 on some other server, which is almost certainly not what you want! You don't strictly need it if, like me, your first rule is -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT, as that already handles all the reply packets, but it is safer to use --ctdir ORIGINAL anyway.
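The ordering above, with the ESTABLISHED,RELATED rule first, can also be persisted in iptables-save format (e.g. /etc/iptables/rules.v4 when using iptables-persistent). A sketch using the example addresses from above; note that restoring this file replaces the chain's contents, so Docker's default RETURN is re-added at the end:

```
*filter
:DOCKER-USER - [0:0]
# Let reply packets through first, then filter new connections
# by their original (pre-DNAT) destination port
-A DOCKER-USER -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A DOCKER-USER -i eth0 -s 8.8.8.8 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j ACCEPT
-A DOCKER-USER -i eth0 -s 4.4.4.4 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j ACCEPT
-A DOCKER-USER -i eth0 -p tcp -m conntrack --ctorigdstport 3306 --ctdir ORIGINAL -j DROP
# Docker's default rule for this chain
-A DOCKER-USER -j RETURN
COMMIT
```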
Best Answer
When you expose a port using the ports section of the Docker Compose file, you are specifying a host:container mapping, so it is expected that port 9000 on the container will be reachable only through port 80 on the host. You can expose a port directly to other containers (https://docs.docker.com/compose/yml/#expose), but you cannot specify the external port number, only the 9000 (not the 80).
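As a sketch of the two styles in a Compose file (service and image names here are hypothetical, not from the question):

```yaml
services:
  web:
    image: example/web        # hypothetical image listening on 9000
    ports:
      - "80:9000"             # host:container; reachable from outside on 80
  worker:
    image: example/worker     # hypothetical image
    expose:
      - "9000"                # reachable by other containers on 9000 only;
                              # no host port is published
```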
Another option is to use the ambassador pattern, where an "ambassador" acts as the go-between from a consumer to a provider: https://docs.docker.com/articles/ambassador_pattern_linking/
So container B -> container A ambassador -> container A
You could expose port 80 on the ambassador, and then the ambassador could connect to container A's port 9000.
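A minimal ambassador can be sketched with socat relaying TCP; everything here (image and service names) is an assumption for illustration, not taken from the linked article:

```yaml
services:
  service-a:
    image: example/service-a    # hypothetical provider listening on 9000
  ambassador:
    image: alpine/socat         # assumption: any image providing socat works
    command: TCP-LISTEN:80,fork,reuseaddr TCP:service-a:9000
  service-b:
    image: example/service-b    # hypothetical consumer
    # connects to ambassador:80, which relays to service-a:9000
```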
As you build out a more sophisticated infrastructure, you can get more creative with service registries, so containers are locating each other through a service registry rather than simple container links.
As a matter of good practice, though, you generally shouldn't pin the external port directly. If you do and you try to run multiple copies of the container on the same Docker host, you will get port conflicts; likewise if another container tries to expose the same external port.