Nginx reverse proxy for Docker 1.13 “Swarm Mode” cluster

docker, docker-swarm, nginx, reverse-proxy

I have an existing Docker Swarm cluster running on two nodes, and I want to add nginx for reverse proxying. I am asking this question because I am facing a couple of problems that I don't know how to solve.

My first question is about running nginx inside the swarm and being able to reach my app containers by their service names. Firstly, here is the output of docker network ls:

6897486e798b        bridge              bridge              local
3c5b72414821        docker_gwbridge     bridge              local
6f762b23ff12        host                host                local
uwy3qfuu4oos        ingress             overlay             swarm
0e867cd5a3bf        none                null                local

Do I need to create another overlay network and set up nginx to be in that overlay network? I am creating the nginx service in the following way:

docker service create --name rproxy -p 80:80 --mount type=volume,source=rproxy,target=/etc/nginx --mode=global nginx:alpine

With this command, nginx is created on all my nodes and I am able to access the default nginx "hello world" page in a browser.

Since I mounted a volume for the nginx configuration, I can access it on the host at /var/lib/docker/volumes/rproxy/_data. So I went into /etc/nginx/conf.d, removed the default config, and created a simple vhost:

server {
    listen 80;
    location / {
        proxy_pass http://myapp:80;
    }
}

When I restarted nginx, it did not start due to an nginx error saying that host "myapp" does not exist. I know that if I publish a port from the myapp service, I will be able to proxy to it with something like:

proxy_pass http://0.0.0.0:SOME_PORT;

I do not want to publish any ports; I want to access my containers by their respective service names instead. Is this possible? If so, how should I do it?

My second question is about storage of the config files. Is there a way to create a single volume in the swarm and access that volume from all nodes? I wouldn't even mind if the volume were stored on the swarm manager, since nginx loads its configuration into memory, so this would not affect performance.

EDIT: I didn't check the Docker version when installing and assumed 1.13 was the latest. It is actually: Docker version 17.06.1-ce, build 874a737

Best Answer

Do I need to create another overlay network and set up nginx to be in that overlay network?

The nginx container and your target applications need to be on the same Docker network to communicate container to container. You can either attach the nginx container to multiple application-specific networks, or create a single proxy network and attach every application to it. With the docker run command you can only connect to one network at creation time; the hard way to attach multiple networks is a docker create, then a docker network connect per network, and then a docker start. The easy way is a docker-compose.yml file, which automates these steps and connects your container to multiple networks. In swarm mode, docker service create takes a --network flag for the same purpose, as sketched below.
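As a minimal sketch of the shared-network approach (the network name proxy is arbitrary, and myapp-image is a placeholder for whatever image your myapp service runs):

# create an overlay network shared by the proxy and the apps
$ docker network create --driver overlay proxy

# attach the application service to it; no ports need to be published
$ docker service create --name myapp --network proxy myapp-image

# attach nginx to the same network; swarm DNS will then resolve "myapp"
$ docker service create --name rproxy --network proxy \
    -p 80:80 \
    --mount type=volume,source=rproxy,target=/etc/nginx \
    --mode=global nginx:alpine

With both services on that network, the proxy_pass http://myapp:80; line from the vhost above resolves through swarm's built-in DNS to the myapp service's virtual IP.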

Is there a way to create a single volume in the swarm and access that volume from all nodes? I wouldn't even mind if the volume were stored on the swarm manager, since nginx loads its configuration into memory, so this would not affect performance.

You can create a volume that connects to a remote nfs server. Here are some examples of the docker commands to use a remote nfs share:

# create a reusable volume
$ docker volume create --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.1,rw \
    --opt device=:/path/to/dir \
    foo

# or from the docker run command
$ docker run -it --rm \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
  foo

# or to create a service
$ docker service create \
  --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
  foo
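
Applied to this question, the same mount options can back the nginx configuration itself. Here is a rough sketch, assuming an NFS export at /srv/nginx-conf on 192.168.1.1 (both are placeholders for your own server and path) and the proxy network from the earlier sketch:

# each node creates the NFS-backed volume on first use,
# so every task of the proxy reads the same shared /etc/nginx
$ docker service create --name rproxy \
    -p 80:80 --mode=global --network proxy \
    --mount type=volume,source=rproxy,dst=/etc/nginx,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/srv/nginx-conf \
    nginx:alpine

Keep in mind that nginx only reads its configuration at startup, so after editing files on the NFS share you still need to reload or restart the tasks (for example with docker service update --force rproxy).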