Here are two ways to solve this:
The first option is to add another health check on the host that returns HTTP 200 to the ELB only while your logic says the host should stay online. The logic there is, of course, up to you. The disadvantage is that if App 2 deployed successfully on only some hosts, all hosts would still be 'healthy' and receiving traffic.
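As a minimal sketch of such a health check endpoint (the port, the /health path, and the app_is_healthy logic are all placeholders you would replace with your own):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def app_is_healthy():
    # Placeholder: substitute your own logic here, e.g. probe both apps
    # locally or check a status file your deployment tooling writes.
    return True

class HealthHandler(BaseHTTPRequestHandler):
    """Answers the ELB health check: 200 while we want traffic, 503 otherwise."""
    def do_GET(self):
        status = 200 if (self.path == "/health" and app_is_healthy()) else 503
        self.send_response(status)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the ELB's frequent probes out of the access log

def serve(port=8080):
    # Point the ELB health check at HTTP:<port>/health on each instance.
    HTTPServer(("0.0.0.0", port), HealthHandler).serve_forever()
```

As soon as app_is_healthy() starts returning False, the ELB sees 503s and takes the host out of rotation.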
Another option is to use an additional ELB for each application. You can point several ELBs at the same backend EC2 instances, and the cost of doing so is pretty minor. That way you can health-check per application and drop hosts with issues at a per-application level rather than taking an all-or-nothing approach.
Edit: Please note this is an older answer and is specific to Classic ELB, not ALB. ALB natively supports separate target groups on one host.
Here's an actual, logical way to do it. It sounds complicated, but you can implement it in a matter of minutes, and it works. I'm implementing it as we speak.
You create a task for each container, a service for each task, and a target group for each service. Then you create just one Elastic Load Balancer.
Application Load Balancers can route requests based on the requested path. Using the target groups, you can route requests coming to elb-domain.com/1 to container 1, elb-domain.com/2 to container 2, and so on.
Now you are only one step away: create a reverse proxy server.
In my case we're using nginx: you can create an nginx server with as many IPs as you'd like and, using nginx's reverse-proxy capability, route each IP to the corresponding ELB path, which in turn routes it to the correct container(s). Here's an example if you're using domains.
server {
    server_name domain1.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://elb-domain.com/1;
    }
}
Of course, if you're actually listening on IPs, you can omit the server_name line and just listen on the corresponding interfaces.
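For instance, a sketch of the same server block bound to one specific interface (203.0.113.10 is a placeholder address):

```nginx
server {
    listen 203.0.113.10:80;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://elb-domain.com/2;
    }
}
```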
This is actually better than assigning a static IP per container, because it allows you to have clusters of Docker machines where requests are balanced over that cluster for each of your "IPs". Recreating a machine doesn't affect the static IP, and you don't have to redo much configuration.
Although this doesn't fully answer your question because it won't allow you to use FTP and SSH, I'd argue that you should never use Docker to do that, and you should use cloud servers instead. If you're using Docker, then instead of updating the server using FTP or SSH, you should update the container itself. However, for HTTP and HTTPS, this method works perfectly.
Best Answer
Instead of ELB (Elastic Load Balancer), consider ALB (Application Load Balancer): it is generally cheaper and more flexible.
Yes, you can have a certificate from AWS Certificate Manager and terminate SSL on the ALB. The ALB can then talk to your Docker container over plain HTTP (non-SSL). If you use ECS (and you should!), it can register the containers with the ALB automatically.
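That automatic registration is driven by the loadBalancers parameter of ECS's create_service call; a sketch of the entry it expects (the ARN, container name, and port below are hypothetical):

```python
def alb_attachment(target_group_arn, container_name, container_port):
    # loadBalancers entry for boto3's ecs create_service: ECS registers the
    # container's host and port in this target group as tasks start and stop.
    return {
        "targetGroupArn": target_group_arn,
        "containerName": container_name,
        "containerPort": container_port,
    }

attachment = alb_attachment(
    "arn:aws:elasticloadbalancing:...:targetgroup/api1",  # placeholder ARN
    "api1",  # hypothetical container name from the task definition
    80,      # port the container listens on (plain HTTP behind the ALB)
)
```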
ALB has a concept of Target Groups, where you can have different content providers, e.g. different API containers, behind a single load balancer. They will differ by path, e.g. /api1/... and /api2/..., but will share the same host name. That also means you'll get away with a single ACM certificate.

Hope that helps :)