Docker Nginx Proxy Configuration with Poste.io Mailserver

docker · email-server · reverse-proxy

I was wondering whether it is possible, and if so, how I would go about running both nginx-proxy and the Poste.io mailserver on one dedicated server?

I can run each of them separately, but when I try to run both at the same time, the second container fails to start because port 443 is already in use by the other.

When I use only my nginx reverse proxy, I run multiple websites on my server, all of which expose ports 80 and 443 along with the proxy itself. This confused me: why can't another container do the same? (Yes, I know that normally two processes shouldn't be able to use the same port without some fiddling.)

I use the following proxy: https://github.com/jwilder/nginx-proxy

I use https://poste.io for my mailserver

And this is an example of one of my websites' docker-compose files that I run on my server:

application:
    build: code
    volumes:
        - /websites/domain:/var/www/laravel
        - /docker/webs/domain/logs:/var/www/laravel/storage/logs
    tty: true
redis:
    image: redis:alpine
db:
    image: mariadb:10.2
    environment:
        MYSQL_ROOT_PASSWORD: toor
        MYSQL_DATABASE: laravel
        TEST_DB_NAME: laravel_test
        MYSQL_USER: laravel
        MYSQL_PASSWORD: laravel
php:
    build: php7-fpm
    volumes_from:
        - application
    links:
        - db
        - redis
nginx:
    build: nginx
    links:
        - php
    volumes_from:
        - application
        - nginx-proxy
    volumes:
        - ./logs/nginx/:/var/log/nginx
    environment:
        - VIRTUAL_HOST=www.domain.com

Inside my nginx Dockerfile I expose ports 80 and 443:

FROM debian:jessie

MAINTAINER Purinda Gunasekara <purinda@gmail.com>

RUN apt-get update && apt-get install -y \
    nginx

ADD nginx.conf /etc/nginx/

ADD *.conf /etc/nginx/sites-enabled/

RUN rm /etc/nginx/sites-enabled/default
RUN rm /etc/nginx/sites-enabled/nginx.conf

# remove the https for local development
#RUN rm /etc/nginx/sites-enabled/*.ssl.conf

RUN echo "upstream php-upstream { server php:9000; }" > /etc/nginx/conf.d/upstream.conf

RUN usermod -u 1000 www-data

# note: this assumes nginx.conf sets "daemon off;" so the container stays in the foreground
CMD ["nginx"]

EXPOSE 80
EXPOSE 443

So this is what confuses me: why does Docker allow my websites to run without problems (even though nginx-proxy is already running on ports 80 and 443), but when I try to run my mailserver it complains about port 443 already being in use?

Here's the actual error posted by docker

docker: Error response from daemon: 
  driver failed programming external connectivity on endpoint 
  nginx_proxy <containerID>: Bind for 0.0.0.0:443 failed: port is already allocated.

Ideally I would be able to run both this mailserver and my websites on one server, because only a small handful of websites will be hosted and none of them are expected to grow much in a short period of time.

UPDATED

The websites use volumes from the nginx-proxy, which is why they can run next to it while exposing ports 80 and 443 themselves. But when I tried to share the same nginx-proxy volume with the mailserver, I kept getting the same error about the ports being in use.

Best Answer

If one Docker container already binds to port 443 on one of your interfaces' IPs (or on 0.0.0.0, meaning all interfaces), other Docker containers can't bind to the same IP and port. Check with netstat (or ss -tlnp on newer systems) while the container is up:

sudo netstat -nalp64 | grep 443
tcp     0    0    0.0.0.0:443     0.0.0.0:*    LISTEN     26547/docker-proxy

Because Port 443 on 0.0.0.0 is already used by a docker container, new containers can't bind to that IP+Port.
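This is also why the websites coexist happily: EXPOSE in a Dockerfile only opens a port on the container's own private IP, while a host-port conflict occurs only when a port is published to the host with -p (or ports: in docker-compose). A sketch of the difference (image names are illustrative; the error is the same one quoted above):

```shell
# EXPOSE alone never conflicts - each container listens on its own IP:
docker run -d website-a        # 80/443 open on e.g. 172.17.0.2 only
docker run -d website-b        # fine, different container IP

# Publishing (-p) binds a HOST port, which only one container can hold:
docker run -d -p 443:443 jwilder/nginx-proxy
docker run -d -p 443:443 analogic/poste.io
# -> Bind for 0.0.0.0:443 failed: port is already allocated.
```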

Visualization

  0.0.0.0:443   (Error: Port 443 already in use)
        |               \
+--------------+    +--------------+
|  CONTAINER   |    |  CONTAINER   |
|   172.0.0.2  |    |   172.0.0.3  |
+--------------+    +--------------+

Instead of binding multiple containers to the same port, you need some software that binds to the port and redirects connections to the appropriate container.

This is most easily done by running a dedicated reverse proxy, which is then the only program binding to the port (443). The reverse proxy's job is to forward incoming connections based on the requested HTTP Host.

The reverse proxy can run on the physical host running docker, or inside a docker container.

The reverse proxy can also terminate SSL connections, meaning this nginx instance handles all encryption and decryption to and from clients, while connections to the backends (containers) stay unencrypted.

I don't think this is strictly needed; modern browsers support SNI, so nginx could still forward requests to the appropriate backend without decrypting the traffic. But with central SSL termination you only need the certificates in one place, and SSL only has to be configured once for most use cases.
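For completeness, the no-termination variant would use nginx's stream module with ssl_preread, routing on the SNI name without decrypting anything. A sketch, assuming nginx was built with ngx_stream_ssl_preread_module (the hostnames and IPs here are illustrative):

```nginx
# goes at the top level of nginx.conf, outside the http {} block
stream {
    map $ssl_preread_server_name $backend {
        dev.mycompany.org       172.20.0.2:443;
        registry.mycompany.org  172.20.0.3:443;
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}
```

Note that you can't combine this with http-level listen 443 server blocks on the same IP; it's one approach or the other.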

To set up such a reverse proxy with SSL termination:

  • Install nginx (the reverse proxy) on the Docker host
  • Define static IPs or hostnames for the containers
  • Make the containers' SSL certificate and private key files available to the reverse proxy
  • Define nginx upstreams pointing at your Docker containers in the reverse proxy config
  • Define nginx servers ("vhosts") that serve the domain names given by server_name
  • Forward requests to the upstreams with proxy_pass inside a location block

Example:

My /etc/nginx/sites-enabled/dockerproxy looks like this:

# gitlab
upstream gitlab
{
    server 172.20.0.2;
}

# docker registry
upstream registry
{
    server 172.20.0.3:5050;
}

# dev.mycompany.org
server
{
    listen 10.10.10.40:80 default;
    listen 10.10.10.40:443 ssl default;
    server_name dev.mycompany.org;

    ssl_certificate         /data/run/certbot/data/live/dev.mycompany.org/fullchain.pem;
    ssl_certificate_key     /data/run/certbot/data/live/dev.mycompany.org/privkey.pem;

    location /
    {
        proxy_pass http://gitlab/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# registry.mycompany.org
server
{
    listen 10.10.10.40:443 ssl;
    server_name registry.mycompany.org;

    ssl_certificate         /data/run/certbot/data/live/registry.mycompany.org/fullchain.pem;
    ssl_certificate_key     /data/run/certbot/data/live/registry.mycompany.org/privkey.pem;
    ssl_session_cache       builtin:1000 shared:SSL:60m;
    ssl_session_timeout     60m;

    client_max_body_size 0;
    chunked_transfer_encoding on;

    location /
    {
        proxy_pass http://registry/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Note that the proxy_set_header directives aren't strictly needed; whether you want them depends on what the individual backend applications expect.

As you can see, this config tells nginx to:

  • Bind to 10.10.10.40:443
  • Proxy requests for dev.mycompany.org to 172.20.0.2[:80] (IP of gitlab container)
  • Proxy requests for registry.mycompany.org to 172.20.0.3:5050 (IP of registry container)
  • Terminate SSL using the given certificate files (straight from a certbot container, in my case)

Visualization

           0.0.0.0:443
                |
    +-----------------------+
    |       nginx           |
    +-----------------------+
        |               |
+--------------+    +--------------+
|   VHOST      |    |   VHOST      |
| web.app1.com |    | web.app2.com |
+--------------+    +--------------+
        |               |
+--------------+    +--------------+
|  CONTAINER   |    |  CONTAINER   |
|   172.0.0.2  |    |   172.0.0.3  |
+--------------+    +--------------+

By defining further upstream and server blocks with different server_name directives, you can make other HTTP(S) services available on the same interface IP and port.

Note that the listen 10.10.10.40:443 directive appears multiple times in the nginx config. This works because nginx binds to that IP and port only once, then inspects the Host header (or SNI name) of each incoming request to determine which server (vhost) should handle it.

My configuration uses static IPs in the upstream definitions, but you can also use container hostnames; just make sure they are known beforehand (defined in docker-compose, see https://docs.docker.com/compose/compose-file/#domainname-hostname-ipc-mac_address-privileged-read_only-shm_size-stdin_open-tty-user-working_dir) and resolvable by nginx.
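For example, with docker-compose (version 2+ file format) you can pin a container to a fixed IP on a user-defined network, so the upstream addresses stay stable across restarts. A sketch; the subnet, network name, and image are assumptions:

```yaml
version: "2"

services:
    gitlab:
        image: gitlab/gitlab-ce
        networks:
            backend:
                ipv4_address: 172.20.0.2

networks:
    backend:
        driver: bridge
        ipam:
            config:
                - subnet: 172.20.0.0/24
```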

And lastly, don't map the containers'/services' ports to host ports! They don't need to be reachable from the outside world; only nginx needs to access them.
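In docker-compose terms: give only the reverse proxy a ports: section; the backends just sit on the internal Docker network. A sketch (image and service names are illustrative):

```yaml
proxy:
    image: jwilder/nginx-proxy
    ports:
        - "80:80"      # the proxy alone binds the host ports
        - "443:443"
    volumes:
        - /var/run/docker.sock:/tmp/docker.sock:ro

webapp:
    build: code
    # no ports: section - 80/443 stay private to the Docker network,
    # so any number of backends can run side by side
```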