Docker – How to scale up one container using Amazon EC2 Container Service

amazon-ec2, amazon-ecs, amazon-web-services, docker

I am new to Amazon ECS and I would like to know how to set up services so that I can easily scale one container up or down.

Here is my project architecture:

  • website: container with the website, only serving html pages and javascript/css/images. Listens on 80.
  • api: container with the API developed in NodeJS serving json. Listens on 443.
  • rabbitmq: container with rabbitmq. The api container is linked to it.
  • worker: container that waits for orders from rabbitmq (it is also linked to it), processes them, and then sends answers back to rabbitmq.

For now, I just created one task definition with all of my containers, and in my cluster I only have one service.
I also have a load balancer on the API (so I can access it from the website via a DNS name).

It works fine, but I want to be able to launch more workers without launching everything else, and I don't seem to be able to do that right now (correct me if I'm wrong). So I have a few questions:

  • Do I need to create separate task definitions?
  • Do I need to create separate services?
  • If I create a task definition for each container (thus front with website, back with api, broker with rabbitmq and worker with worker), am I still able to link containers together, even though they are not in the same task definition?

Here is my current task definition:

{
  "taskDefinitionArn": "arn:aws:ecs:ap-southeast-2:347930943102:task-definition/Flipendo:4",
  "revision": 4,
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "portMappings": [],
      "command": [],
      "environment": [
      ],
      "essential": true,
      "entryPoint": [],
      "links": [
        "rabbitmq"
      ],
      "mountPoints": [],
      "memory": 2048,
      "name": "worker",
      "cpu": 4096,
      "image": "flipendo/worker"
    },
    {
      "volumesFrom": [],
      "portMappings": [],
      "command": [],
      "environment": [],
      "essential": true,
      "entryPoint": [],
      "links": [],
      "mountPoints": [],
      "memory": 2048,
      "name": "rabbitmq",
      "cpu": 2048,
      "image": "rabbitmq"
    },
    {
      "volumesFrom": [],
      "portMappings": [
        {
          "hostPort": 443,
          "containerPort": 3000
        }
      ],
      "command": [],
      "environment": [
      ],
      "essential": true,
      "entryPoint": [],
      "links": [
        "rabbitmq"
      ],
      "mountPoints": [],
      "memory": 2048,
      "name": "api",
      "cpu": 2048,
      "image": "flipendo/api"
    },
    {
      "volumesFrom": [],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 3000
        }
      ],
      "command": [],
      "environment": [
        {
          "name": "API_PORT",
          "value": "443"
        },
        {
          "name": "API_ADDR",
          "value": "load balancer dns server"
        }
      ],
      "essential": true,
      "entryPoint": [],
      "links": [
        "api"
      ],
      "mountPoints": [],
      "memory": 1024,
      "name": "website",
      "cpu": 1024,
      "image": "flipendo/website"
    }
  ],
  "volumes": [],
  "family": "Flipendo"
}

Thank you very much.

Best Answer

Do I need to create separate task definitions?

Yes.

Do I need to create separate services?

Not necessarily. You can simply run tasks on their own, without a service. But a service gives you load balancer association, Application Auto Scaling, and zero-downtime deployments.
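As a rough sketch of the difference (the cluster name, service name, and task definition revision below are placeholders for your own):

```shell
# Run one-off tasks: no load balancer, no autoscaling, no rolling deploys.
aws ecs run-task \
  --cluster my-cluster \
  --task-definition worker:1 \
  --count 3

# Or wrap the same task definition in a service, which keeps the desired
# count running and supports load balancer association and autoscaling.
aws ecs create-service \
  --cluster my-cluster \
  --service-name worker \
  --task-definition worker:1 \
  --desired-count 3
```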

The only way to use Docker links between your containers is to define them in one task definition, as you are doing now. That way ECS places all of the containers on the same instance. Splitting them into separate task definitions means no linking, because the containers might be started on different instances.

So if you decide to split them, each container will have to reach the others via their services' URLs instead.
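For example, a standalone worker task definition would drop the links array and read the broker location from the environment instead. This is only a sketch; the RABBITMQ_URL variable name and the ELB hostname are placeholders your worker code would have to agree on:

```json
{
  "family": "worker",
  "containerDefinitions": [
    {
      "name": "worker",
      "image": "flipendo/worker",
      "memory": 2048,
      "cpu": 4096,
      "essential": true,
      "environment": [
        {
          "name": "RABBITMQ_URL",
          "value": "amqp://rabbitmq-elb.example.com:5672"
        }
      ]
    }
  ]
}
```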

My advice is to:

  1. Create ALB/ELB
  2. Split all containers into individual tasks.
  3. Create "services" for all tasks
  4. Associate each service container with ALB/ELB
  5. Update each service config to use DNS:PORT of the ALB/ELB used by each service
  6. Stop using RabbitMQ and migrate to SQS.
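Steps 1–4 might look roughly like this with the AWS CLI (the target group name, VPC ID, cluster name, and IAM role are placeholders, and `<target-group-arn>` is whatever the first command returns):

```shell
# 1. Create a target group to attach to an existing ALB.
aws elbv2 create-target-group \
  --name api-tg \
  --protocol HTTP \
  --port 3000 \
  --vpc-id vpc-12345678

# 3-4. Create the api service and associate it with that target group.
aws ecs create-service \
  --cluster my-cluster \
  --service-name api \
  --task-definition api:1 \
  --desired-count 2 \
  --load-balancers "targetGroupArn=<target-group-arn>,containerName=api,containerPort=3000" \
  --role ecsServiceRole
```

You would repeat the service creation for each of the split-out task definitions.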

This way you can scale each "service" individually.
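Scaling is then one command per service, e.g. for the worker (names are placeholders):

```shell
# Scale only the worker service; the other services are untouched.
aws ecs update-service \
  --cluster my-cluster \
  --service worker \
  --desired-count 5
```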

If you decide to stay with RabbitMQ, you will have to put a Classic ELB in front of the rabbitmq container and manually associate the container port used by RabbitMQ with that ELB.

ALBs, by contrast, will automatically discover the host ports used by your service's containers (dynamic port mapping).
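Concretely, with an ALB you can set hostPort to 0 in the task definition, so each task gets an ephemeral host port that the ALB's target group registers automatically:

```json
"portMappings": [
  {
    "containerPort": 3000,
    "hostPort": 0
  }
]
```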

See this for more details on ALB and ECS:

https://aws.amazon.com/blogs/compute/microservice-delivery-with-amazon-ecs-and-application-load-balancers/