I have a Docker image that downloads a large data set, processes it (creating a ton of temporary files along the way), and then uploads the final result to S3. The temp files don't need to be persistent or survive an instance failure, so I want to use the instance store for them. For some reason, though, Docker refuses to use it.
In my Launch Configuration, I selected an instance type with instance store (c3.8xlarge), then added the two Instance Store volumes as /dev/xvdcz and /dev/sdb.
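For reference, the block device mapping part of the launch configuration looks roughly like this (a sketch; mapping /dev/xvdcz and /dev/sdb to ephemeral0 and ephemeral1 is my assumption about how the two instance store volumes were attached):

[
  { "DeviceName": "/dev/xvdcz", "VirtualName": "ephemeral0" },
  { "DeviceName": "/dev/sdb",   "VirtualName": "ephemeral1" }
]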
In the user data, I mount the sdb volume and make it writable:
sudo mkdir /media/storage
sudo mount /dev/sdb /media/storage
sudo chmod o+rw /media/storage
In the task definition, I created a volume named "InstanceStorage" with the source path /media/storage, and in the container definition I added a mount point for this volume at /storage.
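In JSON form, the relevant pieces of the task definition look roughly like this (a sketch, not the exact JSON; the container name is a placeholder):

{
  "volumes": [
    {
      "name": "InstanceStorage",
      "host": { "sourcePath": "/media/storage" }
    }
  ],
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "my-docker-image:latest",
      "mountPoints": [
        {
          "sourceVolume": "InstanceStorage",
          "containerPath": "/storage"
        }
      ]
    }
  ]
}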
When I ssh into the instance while the task is running and look at the host's /media/storage directory, it's empty. The task logs clearly show that the container can write to its /storage directory, but that directory is much smaller than the instance store: 7.8 GB available instead of roughly 300 GB.
I tried to manually run the Docker container and attach the volume:
docker run -it -v /media/storage:/storage --entrypoint /bin/sh my-docker-image:latest
Same behavior: I can write to /storage, and the files even survive exiting and restarting the container, but the volume is far too small, and any files I create in /storage inside the container don't show up in /media/storage on the host.
Running df -h on both the host and inside the container makes it clear that Docker mounted /dev/xvda1 instead, the root volume that also stores the Docker images.
Host:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7,8G 848M 6,9G 11% /
devtmpfs 30G 88K 30G 1% /dev
tmpfs 30G 0 30G 0% /dev/shm
/dev/xvdb 315G 67M 299G 1% /media/storage
Container:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-(...) 9.8G 1.1G 8.2G 12% /
tmpfs 30G 0 30G 0% /dev
tmpfs 30G 0 30G 0% /sys/fs/cgroup
/dev/xvda1 7.8G 848M 6.9G 11% /storage
shm 64M 0 64M 0% /dev/shm
Why would it do that?
Best Answer
I finally found the solution. This script pointed me in the right direction.
You need to restart the Docker service.
Apparently, the Docker daemon starts before the user data script runs, and it only sees filesystems that were already mounted when the service started. So the bind mount ends up pointing at the plain /media/storage directory on the root volume rather than the instance store that was mounted over it afterwards, which matches the df output above.
So I added this to the end of my userdata script, and that fixed it:
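A minimal sketch of what that addition looks like, assuming the Amazon Linux ECS-optimized AMI where Docker runs as the "docker" service and the ECS agent as the "ecs" upstart job (the exact commands from the original answer aren't shown here, and the agent restart is my assumption, since the agent runs as a container and goes down with the daemon):

# Docker started before the user data mounted /dev/sdb, so restart it
# to make the new mount visible to the daemon
sudo service docker restart

# (assumption) restarting Docker also stops the ECS agent container,
# so start it again
sudo start ecs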