Amazon-web-services – How to understand an Amazon ECS cluster

amazon-ec2, amazon-ecs, amazon-web-services, deployment, docker

I recently tried to deploy Docker containers on AWS using an ECS task definition. Along the way, I came across the following questions.

  1. How do I add an instance to a cluster? When creating a new cluster using the Amazon ECS console, how do I add a new EC2 instance to it? In other words, when launching a new EC2 instance, what configuration is needed to allocate it to a user-created cluster under Amazon ECS?

  2. How many ECS instances are needed in a cluster, and what factors determine the number?

  3. If I have two instances (ins1, ins2) in a cluster, and my webapp and db containers are running on ins1: after I update the running service (per http://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html), I can see the newly created tasks running on ins2 before the old ones on ins1 are drained. My question is: once my webapp container is allocated to another instance, the access IP address becomes that instance's IP. How can I keep the same address for accessing the webapp? And beyond the IP, what happens to the data after the move to a new instance?

Best Answer

These are really three fairly different questions, so it might be best to split them into different questions here accordingly - I'll try to provide an answer regardless:

  1. Amazon ECS Container Instances are added indirectly: it is the job of the Amazon ECS Container Agent on each instance to register itself with the cluster you created and named, see concepts and lifecycle for details. For this to work, you need to follow the steps outlined in Launching an Amazon ECS Container Instance, be it manually or via automation. Be aware of step 10.:

By default, your container instance launches into your default cluster. If you want to launch into your own cluster instead of the default, choose the Advanced Details list and paste the following script into the User data field, replacing your_cluster_name with the name of your cluster.

#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
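
As an illustration, here is a minimal sketch of the same launch step via the AWS CLI; the AMI ID is a placeholder for an ECS-optimized AMI, ecsInstanceRole is the instance profile assumed here, and user-data.sh is assumed to contain the two-line script above:

# Launch a container instance from an ECS-optimized AMI (placeholder AMI ID),
# passing the user data script above so the agent joins your cluster:
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --iam-instance-profile Name=ecsInstanceRole \
    --user-data file://user-data.sh

# Verify that the instance has registered with the cluster:
aws ecs list-container-instances --cluster your_cluster_name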
  2. You only need a single instance for ECS to work as such, because the cluster itself is managed by AWS on your behalf. This wouldn't be sufficient for high availability scenarios though:

    • Because the container hosts are just regular Amazon EC2 instances, you should follow AWS best practices and spread them over two or three Availability Zones (AZs), so that a (rare) outage of one AZ doesn't impact your cluster; ECS can then migrate your containers to a different host instance, provided your cluster has sufficient spare capacity (a sketch of this setup follows this list).
    • Many advanced clustering technologies that facilitate containers have their own service orchestration layers and usually require an odd number (>= 3) of (service) instances for a high availability setup. You can read more about this in the section Optimal Cluster Size within Administration, for example (see also Running CoreOS with AWS EC2 Container Service).
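
As a hedged illustration of the multi-AZ point above (the launch configuration name, Auto Scaling group name, AMI ID and subnet IDs are placeholders, not from the original answer), an Auto Scaling group can keep a fixed number of container instances spread across zones:

# Launch configuration reusing the cluster-joining user data script from above:
aws autoscaling create-launch-configuration \
    --launch-configuration-name ecs-cluster-lc \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --iam-instance-profile ecsInstanceRole \
    --user-data file://user-data.sh

# Keep three instances spread over three subnets in different AZs:
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name ecs-cluster-asg \
    --launch-configuration-name ecs-cluster-lc \
    --min-size 3 --max-size 3 --desired-capacity 3 \
    --vpc-zone-identifier "subnet-aaaa,subnet-bbbb,subnet-cccc"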
  3. This refers back to the high availability and service orchestration topics already mentioned in 2.; more precisely, you are facing the problem of service discovery, which becomes even more prevalent when using container technologies in general and micro-services in particular. ECS addresses the stable endpoint part with its service load balancing feature: you associate the service with an Elastic Load Balancer, and clients use the load balancer's stable DNS name rather than the IP of whichever instance currently runs the container. As for the data, state that must survive a container moving between instances should live in external storage (for example a database on Amazon RDS) rather than on the container instance itself.
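
A hedged sketch of that wiring via the AWS CLI (the load balancer, service, task definition and role names are placeholders, not from the original answer):

# Create the service attached to an existing classic Elastic Load Balancer,
# so clients always hit the ELB's DNS name, never an instance IP
# (my-elb, webapp and ecsServiceRole are placeholder names):
aws ecs create-service \
    --cluster your_cluster_name \
    --service-name webapp \
    --task-definition webapp:1 \
    --desired-count 2 \
    --role ecsServiceRole \
    --load-balancers loadBalancerName=my-elb,containerName=webapp,containerPort=80

# The stable endpoint clients should use:
aws elb describe-load-balancers --load-balancer-names my-elb \
    --query 'LoadBalancerDescriptions[0].DNSName'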