Not sure if this counts as an abandoned question - I stumbled upon it while troubleshooting my own issue, and I'm adding my solution now that it's resolved.
To update a service with a new container, you need to:
- push the new image to the repository;
- register an updated task definition;
- update the service to use the new task definition;
- important: make sure the service's deployment settings allow launching the new version of the task.
If the service's task is not updated to the latest version, check the "Events" tab for errors. For example, maybe ECS was not able to start the new version of your service: you only have one EC2 instance in the cluster and the application port is already in use on the host. In this case, set the "minimum healthy percent / maximum percent" deployment limits to 0% / 100% - this way, ECS will kill the old container before deploying the new one. The deployment also happens over the course of a few minutes - don't worry if you don't see immediate feedback.
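Those same limits can also be set from the CLI with update-service (a sketch - the cluster and service names here are placeholders):

aws ecs update-service \
    --cluster my-cluster \
    --service my-service \
    --deployment-configuration minimumHealthyPercent=0,maximumPercent=100 \
    --region us-east-1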
Below is an example deployment script that updates the container in a pre-configured cluster and service. Note there is no need to specify a revision number if you just mean "use the latest from the family".
#!/bin/bash
# fill in your own values below
awsRegion=us-east-1
containerName=..
containerRepository=..
taskDefinitionFile=...
taskDefinitionName=...
serviceName=...

echo 'build docker image...'
docker build -t $containerName .

echo 'upload docker image...'
docker tag $containerName:latest $containerRepository:$containerName
docker push $containerRepository:$containerName

echo 'update task definition...'
aws ecs register-task-definition --cli-input-json file://$taskDefinitionFile --region $awsRegion > /dev/null

echo 'update our service with that last task...'
aws ecs update-service --service $serviceName --task-definition $taskDefinitionName --region $awsRegion > /dev/null
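For reference, the file passed via --cli-input-json is a plain register-task-definition payload. Here's a minimal sketch - the family, container name, image and ports are hypothetical placeholders, adjust them to your setup:

{
  "family": "my-task-family",
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:my-container",
      "memory": 256,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ]
    }
  ]
}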
Here's an actual, logical way to do it. It sounds complicated, but you can implement it in a matter of minutes, and it works. I'm actually implementing it as we speak.
You create a task for each container, and you create a service for each task, coupled with a target group for each service. And then you create just 1 Elastic Load Balancer.
Application Load Balancers can route requests based on the requested path. Using the target groups, you can route requests coming to elb-domain.com/1 to container 1, elb-domain.com/2 to container 2, etc.
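As a sketch, a path-based rule like this can be created on the ALB's listener with the AWS CLI (the listener and target group ARNs below are placeholders for your own):

aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def \
    --priority 1 \
    --conditions Field=path-pattern,Values='/1/*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/container-1/abc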
Now you are only one step away. Create a reverse proxy server.
In my case we're using nginx, so you can create an nginx server with as many IPs as you'd like, and use nginx's reverse-proxying capability to route your IPs to your ELB's paths, which in turn route them to the correct container(s). Here's an example if you're using domains.
server {
    server_name domain1.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        proxy_pass http://elb-domain.com/1;
    }
}
Of course, if you're actually listening to IPs, you can omit the server_name line and just listen on the corresponding interfaces.
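For example, a sketch that binds to a specific address (203.0.113.10 is a placeholder documentation IP, and /2 assumes a matching path rule on the ELB):

server {
    # no server_name - bind directly to one of the host's IPs
    listen 203.0.113.10:80;
    location / {
        proxy_pass http://elb-domain.com/2;
    }
}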
This is actually better than assigning a static IP per container because it allows you to have clusters of docker machines where requests are balanced over that cluster for each of your "IPs". Recreating a machine doesn't affect the static IP and you don't have to redo much configuration.
Although this doesn't fully answer your question because it won't allow you to use FTP and SSH, I'd argue that you should never use Docker to do that, and you should use cloud servers instead. If you're using Docker, then instead of updating the server using FTP or SSH, you should update the container itself. However, for HTTP and HTTPS, this method works perfectly.
Best Answer
The registry url is … blank. Just the same as with the docker command line, if you give ECS an image with no registry url, it will pull it from Docker Hub.
(This surprised me too. I spent half an hour searching the interwebs for a url for Docker Hub before trying this.)
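To illustrate, in a task definition's containerDefinitions the image field decides where the pull happens (the account id and region below are made up). The first form pulls the official image from Docker Hub, the second from a private ECR repository:

"image": "nginx:latest"
"image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/nginx:latest"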