Here's an actual, logical way to do it. It sounds complicated, but you can implement it in a matter of minutes, and it works; I'm implementing it as we speak.
You create an ECS task for each container, a service for each task, and a target group for each service. Then you create just one Elastic Load Balancer.
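Here's a rough boto3 sketch of that per-container wiring; the cluster name, role, image, memory, ports and target group ARN are placeholders I've assumed, not values from any real setup.

    import boto3

    ecs = boto3.client("ecs")

    def create_container_service(name, image, port, target_group_arn):
        # One task definition per container.
        ecs.register_task_definition(
            family=name,
            containerDefinitions=[{
                "name": name,
                "image": image,
                "memory": 256,
                "portMappings": [{"containerPort": port, "hostPort": 0}],  # dynamic host port
            }],
        )
        # One service per task, attached to that service's own target group.
        ecs.create_service(
            cluster="my-cluster",
            serviceName=name,
            taskDefinition=name,
            desiredCount=1,
            role="ecsServiceRole",  # IAM role that lets ECS register targets with the ELB
            loadBalancers=[{
                "targetGroupArn": target_group_arn,
                "containerName": name,
                "containerPort": port,
            }],
        )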
Application Load Balancers can route requests based on the requested path. Using the target groups, you can route requests to elb-domain.com/1 to container 1, requests to elb-domain.com/2 to container 2, and so on.
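A sketch of those path rules with boto3, assuming the load balancer, its HTTP listener and the VPC already exist; the names and the /1, /2 paths simply mirror the example above.

    import boto3

    elbv2 = boto3.client("elbv2")

    def add_path_rule(listener_arn, vpc_id, name, path, priority, port=80):
        # One target group per service.
        tg = elbv2.create_target_group(
            Name=name, Protocol="HTTP", Port=port, VpcId=vpc_id,
        )
        tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
        # Forward requests matching <path>* on the ELB to that target group.
        elbv2.create_rule(
            ListenerArn=listener_arn,
            Priority=priority,
            Conditions=[{"Field": "path-pattern", "Values": [path + "*"]}],
            Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
        )
        return tg_arn

    # add_path_rule(listener_arn, vpc_id, "container-1", "/1", priority=1)
    # add_path_rule(listener_arn, vpc_id, "container-2", "/2", priority=2)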
Now you are only one step away. Create a reverse proxy server.
In my case we're using nginx, so you can create an nginx server with as many IPs as you'd like, and use nginx's reverse-proxying capability to route each IP to the corresponding path on your ELB, which in turn routes it to the correct container(s). Here's an example if you're using domains.
    server {
        server_name domain1.com;
        listen 80;
        access_log /var/log/nginx/access.log vhost;

        location / {
            # Forward this domain's traffic to path /1 on the ELB.
            proxy_pass http://elb-domain.com/1;
        }
    }
Of course, if you're listening on IPs rather than domains, you can omit the server_name line and just listen on the corresponding interfaces.
This is actually better than assigning a static IP per container, because it allows you to have clusters of Docker machines where requests are balanced over the cluster for each of your "IPs". Recreating a machine doesn't affect the static IP, and you don't have to redo much configuration.
Although this doesn't fully answer your question because it won't allow you to use FTP and SSH, I'd argue that you should never use Docker to do that, and you should use cloud servers instead. If you're using Docker, then instead of updating the server using FTP or SSH, you should update the container itself. However, for HTTP and HTTPS, this method works perfectly.
After a discussion with the support team, it turns out that ECS cannot support our current use case.
There is a workaround that solves one of the issues we are facing. The workaround is to create a separate, essential health-check container in the same ECS task as the actual application container. The purpose of the health-check container is to monitor the application container and determine when the application has started completely. If it detects that the application has failed to start, it exits, causing the ECS service to cycle the task. The ELB is then configured to perform its health checks against the health-check container, which always reports that it is up on the relevant port. This workaround prevents the ECS service from cycling the ECS task due to failed health checks.
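To make the idea concrete, here is a minimal sketch of such a health-check container, assuming the application container's port is reachable inside the task and the ELB health check targets port 8080 on this container; the environment variable names and the timeout are made up for the example.

    import os
    import socket
    import threading
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    APP_HOST = os.environ.get("APP_HOST", "localhost")
    APP_PORT = int(os.environ.get("APP_PORT", "8000"))
    STARTUP_TIMEOUT = int(os.environ.get("STARTUP_TIMEOUT", "300"))  # seconds

    class AlwaysHealthy(BaseHTTPRequestHandler):
        # Answers the ELB health check with 200 regardless of application state.
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK")
        def log_message(self, *args):
            pass

    def app_has_started():
        # The application counts as started once it accepts TCP connections.
        try:
            with socket.create_connection((APP_HOST, APP_PORT), timeout=2):
                return True
        except OSError:
            return False

    def watchdog():
        # If the application never comes up, exit; since this container is
        # marked essential, ECS will cycle the whole task.
        deadline = time.time() + STARTUP_TIMEOUT
        while time.time() < deadline:
            if app_has_started():
                return
            time.sleep(5)
        os._exit(1)

    if __name__ == "__main__":
        threading.Thread(target=watchdog, daemon=True).start()
        HTTPServer(("", 8080), AlwaysHealthy).serve_forever()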
However, the ELB will begin routing traffic to the application container immediately, even if the application container is not yet ready to receive it (for example, because it is still waiting for a cache to load). Currently there is no way to delay the ELB from sending traffic to the application container, as the ECS service provides no support for a grace period. We have managed to work around this issue by delivering messages to our application containers via SQS and only having them pull from the queue once their caches are fully loaded. However, we have future use cases (such as serving web requests) where this is not a feasible option. To this end, I intend to raise a feature request for the grace period.
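For completeness, a sketch of that SQS pattern; the queue URL and the load_caches()/handle() functions are placeholders for whatever the application actually does.

    import boto3

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # placeholder

    def load_caches():
        ...  # however long this takes, no work is pulled before it finishes

    def handle(body):
        ...  # real message processing

    def main():
        load_caches()  # only start polling once the container is genuinely ready
        sqs = boto3.client("sqs")
        while True:
            resp = sqs.receive_message(
                QueueUrl=QUEUE_URL,
                MaxNumberOfMessages=10,
                WaitTimeSeconds=20,  # long polling
            )
            for msg in resp.get("Messages", []):
                handle(msg["Body"])
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

    if __name__ == "__main__":
        main()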
As an aside, both Kubernetes (http://kubernetes.io/v1.0/docs/user-guide/walkthrough/k8s201.html#application-health-checking) and Marathon (https://mesosphere.github.io/marathon/docs/health-checks.html) already support this option for health checking, if someone reading this is happy not to use a managed service.
Best Answer
Don't dockerize databases. Docker containers are meant to be ephemeral and not hold data. That's not appropriate for databases.
In particular, do not attempt to dockerize a MongoDB cluster. The setup for replica sets is extremely stateful and complex: you have to run many commands in order, and only once. If you miss a command or a node goes down, the entire cluster might deadlock and you'll be forced to start over.
Docker has its own ideas about how applications should behave, and a stateful, order-sensitive setup like that is really difficult to make work inside Docker.
If you want a cluster with 3 replicas, you should get 3 AWS instances and set up MongoDB directly on them. Search Google for

    mongodb setting up a replica set on <OS>

and you'll find an up-to-date tutorial.
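For a rough idea of what that involves (outside Docker), here is a sketch of initiating a 3-member replica set with pymongo; the hostnames and replica-set name are placeholders, and it assumes each instance is already running mongod with --replSet rs0.

    from pymongo import MongoClient

    MEMBERS = ["mongo-1.internal:27017", "mongo-2.internal:27017", "mongo-3.internal:27017"]

    # Connect directly to a single member; the replica set does not exist yet.
    client = MongoClient("mongodb://" + MEMBERS[0], directConnection=True)

    # replSetInitiate must be run exactly once, against one member only.
    client.admin.command("replSetInitiate", {
        "_id": "rs0",
        "members": [{"_id": i, "host": host} for i, host in enumerate(MEMBERS)],
    })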