Do I need to create another overlay network and set up Nginx on that overlay network?
The nginx container and your target applications need to be on the same docker network to communicate container to container. You may add the nginx container to multiple application-specific networks, or you may create one proxy network and attach all applications to that network. From the `docker run` command you can connect to a single network. For multiple networks the hard way, you can do a `docker create` and then `docker network connect` before running `docker start`. The easy way is a docker-compose.yml file that automates these steps to connect your container to multiple networks.
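The compose route can be sketched like this; the network and image names are illustrative, not taken from your setup:

```yaml
# hypothetical docker-compose.yml: one nginx container attached to two networks,
# replacing the create / network connect / start sequence above
services:
  nginx:
    image: nginx:alpine
    networks:
      - app1-net    # application-specific network (hypothetical name)
      - proxy-net   # shared proxy network (hypothetical name)
networks:
  app1-net:
  proxy-net:
```

Running `docker compose up -d` creates both networks (if needed) and attaches the container to each in one step.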
Is there a way to create a single volume in the swarm and access that volume from all nodes? I wouldn't even mind if the volume were stored on the swarm manager server, since nginx loads its config into memory, so this would not affect performance.
You can create a volume that connects to a remote nfs server. Here are some examples of the docker commands to use a remote nfs share:
# create a reusable volume
$ docker volume create --driver local \
--opt type=nfs \
--opt o=addr=192.168.1.1,rw \
--opt device=:/path/to/dir \
foo
# or from the docker run command
$ docker run -it --rm \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
# or to create a service
$ docker service create \
--mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.1,volume-opt=device=:/host/path \
foo
Related to your first problem: while Swarm is indeed easy to set up and lets you create replicas of your containers and more, volume sharing is not among the batteries included with Swarm. You're right, volumes are not mounted on the manager. Each container mounts its volume on the worker host where it runs, and those volumes are not shared across the Swarm.
You should have a look at the Docker docs about volume plugins. From the docs, "a volume plugin might enable Docker volumes to persist across multiple Docker hosts". So if in your case you want to share the same volumes between your Swarm hosts, you'll have to pick the volume plugin from that list that best fits your environment.
The alternative to volume plugins, as you mentioned, is of course data sharing with NFS, GlusterFS or Ceph, where the worker nodes in the Swarm share the mount point of the volume. I'd recommend reading this article about volume persistence and volume sharing; while it's dated and not directly related to Swarm, it has valuable info and covers the two strategies mentioned: volume plugins and data sharing. Note that the article mentions Flocker, but Flocker was discontinued (although it was forked at https://github.com/ScatterHQ/flocker). As ServerFault is not opinion-based I won't include my preferences; I just mention the existing strategies for your problem.
About your second problem: Swarm indeed lets you interconnect containers located on different worker hosts thanks to the overlay network. I use load balancers and reverse proxies that connect with other containers flawlessly. You create the network on one of your Swarm managers, and the worker hosts are modified so that the same network is created and firewall rules are applied. If you're having problems, I recommend following the Swarm tutorial so you can see it working or spot the problem in your setup; I use it for troubleshooting.
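As a sketch, the overlay setup described above could be expressed as a stack file; the stack, service and image names here are hypothetical:

```yaml
# hypothetical stack.yml, deployed from a manager node with:
#   docker stack deploy -c stack.yml demo
# Swarm then creates the overlay network on every node that runs a task
version: "3.8"
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
    networks:
      - appnet
  app:
    image: my-app:latest    # hypothetical application image
    networks:
      - appnet
networks:
  appnet:
    driver: overlay
```

On the shared overlay network, the proxy can reach the application by its service name (e.g. `proxy_pass http://app;` in the nginx config), regardless of which worker the task lands on.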
Best Answer
To do this, all you need is to make them use the same network; then they will be visible to each other. In my case I defined a network called `public` that is referenced externally by all my stacks. From there, in my docker-compose.yml file I have:
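A minimal sketch of such a file, assuming a hypothetical service and image name and the pre-created `public` network:

```yaml
# hypothetical docker-compose.yml joining a service to the
# externally created "public" network
services:
  app:
    image: my-app:latest   # hypothetical application image
    networks:
      - public
networks:
  public:
    external: true   # the network was created outside this stack
```

Other stacks attached to `public` can then reach this container at `app`, its service name.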
To access it, just use the service name.