Here's a logical way to do it. It sounds complicated, but you can implement it in a matter of minutes, and it works. I'm implementing it as we speak.
You create a task for each container, a service for each task, and a target group for each service. Then you create just one Elastic Load Balancer.
Application Load Balancers can route requests based on the requested path. Using the target groups, you can route requests coming to elb-domain.com/1 to container 1, elb-domain.com/2 to container 2, and so on.
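As a sketch, a path-based rule looks something like this with the AWS CLI (both ARNs are placeholders for your own listener and target group):

aws elbv2 create-rule \
    --listener-arn <your-listener-arn> \
    --priority 1 \
    --conditions Field=path-pattern,Values='/1*' \
    --actions Type=forward,TargetGroupArn=<target-group-1-arn>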
Now you are only one step away. Create a reverse proxy server.
In my case we're using nginx, so you can create an nginx server with as many IPs as you'd like, and using nginx's reverse-proxying capability you can route those IPs to your ELB's paths, which in turn route requests to the correct container(s). Here's an example if you're using domains:
server {
    server_name domain1.com;
    listen 80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        proxy_pass http://elb-domain.com/1;
    }
}
Of course, if you're actually listening on IPs you can omit the server_name line and just listen on the corresponding interfaces.
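For example, a minimal sketch assuming 10.0.0.1 is one of the machine's addresses:

server {
    listen 10.0.0.1:80;
    access_log /var/log/nginx/access.log vhost;

    location / {
        proxy_pass http://elb-domain.com/1;
    }
}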
This is actually better than assigning a static IP per container because it allows you to have clusters of Docker machines where requests are balanced over that cluster for each of your "IPs". Recreating a machine doesn't affect the static IP, and you don't have to redo much configuration.
Although this doesn't fully answer your question because it won't allow you to use FTP and SSH, I'd argue that you should never use Docker to do that, and you should use cloud servers instead. If you're using Docker, then instead of updating the server using FTP or SSH, you should update the container itself. However, for HTTP and HTTPS, this method works perfectly.
Modify the launch configuration to restart the Docker service right after mounting EFS. Only then will ECS use the mounted EFS as a volume; otherwise it will use the original directory (the mount will be ignored).
#!/bin/bash
# Register this instance with the ECS cluster
echo ECS_CLUSTER=prodcluster >> /etc/ecs/ecs.config
# Install the NFS client needed to mount EFS
sudo yum install -y nfs-utils
# Stop the ECS agent before remounting so tasks don't start against the wrong directory
sudo stop ecs
# Create and open up the mount point, then mount EFS over NFSv4.1
sudo mkdir -p /home/ec2-user/web_file_uploads
sudo chmod 777 /home/ec2-user/web_file_uploads
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-abcdef.efs.ap-southeast-2.amazonaws.com:/ /home/ec2-user/web_file_uploads
# Restart Docker so containers see the mount, then bring the ECS agent back up
sudo service docker restart
sudo start ecs
Note: the ECS service will stop after restarting the Docker service, because the ECS agent runs inside Docker. You need to start the ECS service again afterwards.
Best Answer
EDIT: Turns out this was a lot simpler than I was making it.
When you go to create a volume on ECS, it asks you for a name for the volume, and a "source path". When pressed for explanation it will specify that the source path is "The path on the host container instance that is presented to the container for this volume. If omitted, then the Docker daemon assigns a host path for you."
All very well and good, but it turns out that the difference is more than just "specifying a directory" vs. "Docker picking a directory for you." This is the difference between a Docker volume and a bind mount, and in fact if you docker inspect the container you will see that volumes for which you give ECS a "source path" get "Type": "bind", whereas volumes that don't specify one get "Type": "volume".

One key difference between bind mounts and volumes is that while bind mounts inherit their ownership from the host filesystem, volumes inherit their ownership from the container filesystem. So the incredibly, frustratingly simple solution to my problem is just to make sure the directory exists in the image with the proper ownership, then create the volume in ECS without specifying a source path.
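A minimal sketch of that fix, assuming a hypothetical mount point of /var/www/uploads and an app that runs as www-data:

# In the Dockerfile: create the mount point with the ownership the volume should inherit
RUN mkdir -p /var/www/uploads && chown www-data:www-data /var/www/uploads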
Incidentally, if your application involves multiple containers sharing the same volume, the volume will derive its permissions from the existing directory structure on whichever container gets up and running first. So you need to make sure that either a) the directory exists on all containers where the volume will be mounted, or b) the container that does have the directory in question is always launched first.
I will leave my original solution below in case it's ever useful to anybody.
Original solution 1: tmpfs mounts
Docker volumes accept a driver_opts parameter which works similarly to the mount command on Linux systems. So one option is to use a tmpfs mount, which allows for options that set the owner and group of the resulting files. On ECS, this can be accomplished with a task-definition volume like the sketch below, which will create a volume owned by user and group 1000 within the container.
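A sketch of that volume configuration (the volume name "scratch" is illustrative; type, device, and o under driverOpts are the Docker local driver's tmpfs mount options):

"volumes": [
    {
        "name": "scratch",
        "dockerVolumeConfiguration": {
            "scope": "task",
            "driver": "local",
            "driverOpts": {
                "type": "tmpfs",
                "device": "tmpfs",
                "o": "uid=1000,gid=1000"
            }
        }
    }
]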
The downside of this method is that, being tmpfs, it stores files in the host memory. Depending on your use case, this may or may not be acceptable - for me it's not ideal, because I need to store log files which can grow quite large.

(Note that the type and device parameters under driverOpts here are equivalent to the type and device parameters for the Linux mount command. This took me quite some time and frustration to figure out.)

Original solution 2: Matching UID over NFS
NFS simply stores the owner/group of a file as a numeric id. The reason the group was showing up as xfs for me was that, as part of my redeployment, I'm moving from Ubuntu to Alpine. In both cases I want to use www-data for the group, but www-data is user/group 33 on Ubuntu and 82 on Alpine. On Alpine, 33 already exists as the "X font server" user; hence, xfs.
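One way to make the ids line up is to chown by numeric id in both images, so the files agree over NFS regardless of which name happens to map to that id. A sketch, assuming the same hypothetical /var/www/uploads path:

# Same numeric owner (33) in both the Ubuntu and Alpine images
RUN chown -R 33:33 /var/www/uploads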
I still don't have a perfect solution for a non-persistent, shared "scratch work" directory where I can dump logs while they wait to be sent up to CloudWatch. I may simply end up using the tmpfs solution and then running logrotate with a very aggressive set of parameters, so that the log files never consume more than a few MB of memory.
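For reference, an aggressive logrotate sketch along those lines (the path and size limit are placeholders):

/var/log/myapp/*.log {
    size 1M
    rotate 1
    copytruncate
    missingok
    notifempty
}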