Amazon ECS (Docker): binding container to specific IP address

amazon-ecs, amazon-web-services, docker

I'm playing with Amazon ECS (Amazon's Docker-based container service) and I'm finding there's one Docker capability that ECS does not seem to provide. Namely, I would like to have multiple containers running on an instance, and have requests coming in to IP address 1 map to container 1, requests coming in to IP address 2 map to container 2, and so on.

In Docker, binding a container to a specific IP address is done via:

docker run -p myHostIPAddr:80:8080 imageName command

However, in Amazon ECS, there doesn't seem to be a way to do this.

I have set up an EC2 instance with multiple Elastic IP addresses. When configuring a container as part of a task definition, it is possible to map host ports to container ports. However, unlike Docker, ECS does not provide a way to specify the host IP address as part of the mapping.
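
For illustration, this is roughly what a port mapping looks like in a task definition (the port values here are placeholders); note that there is no field for a host IP:

"portMappings": [
  {
    "containerPort": 8080,
    "hostPort": 80,
    "protocol": "tcp"
  }
]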

An additional twist is that I would like outbound requests from container N to originate from container N's external IP address.

Is there a way to do all of the above?

I've looked through the AWS CLI documentation, as well as the AWS SDK for Java. I can see that the CLI can return a networkBindings array containing elements like this:

{
  "bindIP": "0.0.0.0", 
  "containerPort": 8021, 
  "hostPort": 8021
},

and the Java SDK has a class named NetworkBinding that represents the same information. However, this info appears to be output-only, returned in response to a describe request; I can't find a way to provide this binding information to ECS as input.
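
For example, the bindings show up when describing a running task (the cluster name and task ARN below are placeholders), but there is no corresponding input when registering a task definition:

aws ecs describe-tasks --cluster my-cluster --tasks <task-arn>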

The reason I want to do this is that I want to present completely separate virtual servers to different constituencies, using different containers potentially running on the same EC2 instance. Each virtual server would have its own web server (including distinct SSL certificates), as well as its own FTP and SSH services.

Thanks.

Best Answer

Here's a practical, logical way to do it. It sounds complicated, but you can implement it in a matter of minutes, and it works; I'm implementing it as we speak.

You create a task definition for each container and a service for each task, with a target group for each service. Then you create just one Application Load Balancer (ALB).
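
As a rough sketch of the service side with the AWS CLI (the cluster name, task definition, container name, and target group ARN below are all hypothetical):

# one ECS service per task definition, each attached to its own target group
# (assumes the target group and the ECS service-linked role already exist)
aws ecs create-service \
  --cluster my-cluster \
  --service-name service-1 \
  --task-definition task-1 \
  --desired-count 1 \
  --load-balancers targetGroupArn=<target-group-1-arn>,containerName=web,containerPort=8080

Repeat with service-2/task-2 and its own target group for the second container, and so on.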

Application Load Balancers can route requests based on the request path. Using listener rules and the target groups, you can route requests coming to elb-domain.com/1 to container 1, elb-domain.com/2 to container 2, and so on.
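
The listener rules for that routing look roughly like this (the listener ARN, target group ARN, and path pattern are placeholders):

# forward requests under /1/ to container 1's target group; repeat for /2/, /3/, ...
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 1 \
  --conditions Field=path-pattern,Values='/1/*' \
  --actions Type=forward,TargetGroupArn=<target-group-1-arn>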

Now you are only one step away. Create a reverse proxy server.

In my case we're using nginx: you can create an nginx server with as many IPs as you'd like, and use nginx's reverse-proxying capability to route each IP to the corresponding ELB path, which in turn routes it to the correct container(s). Here's an example if you're routing by domain.

server {
    server_name domain1.com;
    listen 80;
    # "vhost" refers to a log_format defined elsewhere in the nginx configuration
    access_log /var/log/nginx/access.log vhost;
    location / {
        # forward requests for domain1.com to the ALB path for container 1;
        # with the trailing slash, a request for /foo is proxied to /1/foo
        proxy_pass http://elb-domain.com/1/;
    }
}

Of course, if you're routing by IP address rather than by hostname, you can omit the server_name line and simply listen on the corresponding interface or IP.
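
A minimal sketch, assuming 203.0.113.10 is one of the Elastic IPs (or its corresponding private address) attached to the nginx host:

server {
    # bind this virtual server to one specific local IP address
    listen 203.0.113.10:80;
    location / {
        proxy_pass http://elb-domain.com/2/;
    }
}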

This is actually better than assigning a static IP per container, because it allows you to have clusters of Docker hosts where requests for each of your "IPs" are balanced across the cluster. Recreating a machine doesn't affect the static IP, and you don't have to redo much configuration.

Although this doesn't fully answer your question, because it won't cover FTP and SSH, I'd argue that you should never use Docker for those; use cloud servers instead. If you're using Docker, then instead of updating the server over FTP or SSH, you should update the container itself. For HTTP and HTTPS, however, this method works perfectly.