Purpose of the volumes key
It is there to create named volumes.
If you do not use it, you will end up with a bunch of hashed values as your volume names. Example:
$ docker volume ls
DRIVER VOLUME NAME
local f004b95d8a3ae11e9b871074e9415e24d536742abfe86b32ffc867f7b7063e55
local 9a148e167e1c722cbdb67c8edc36f02f39caeb2d276e9316e64de36e7bc2c35d
With named volumes, you get something like the following:
$ docker volume ls
local projectname_someconf
local projectname_otherconf
How to create named volumes
The docker-compose.yml
syntax is:
version: '2'
services:
    app:
        container_name: app
        volumes_from:
            - appconf
    appconf:
        container_name: appconf
        volumes:
            - ./Docker/AppConf:/var/www/conf
volumes:
    appconf:
networks:
    front:
        driver: bridge
This produces named volumes like the ones shown above.
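For illustration, the naming Compose applies can be sketched as a small helper (the function name is hypothetical; Compose prefixes each volume with the project name, which defaults to the directory name):

```shell
# Hypothetical helper mirroring Compose's naming: it creates a volume
# called <project>_<volume>, just like Compose does for the volumes key.
create_project_volume() {
    local project="$1" volume="$2"
    # `docker volume create` prints the name of the created volume.
    docker volume create "${project}_${volume}"
}
```

For instance, `create_project_volume projectname appconf` creates (and prints) `projectname_appconf`.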
How to remove volumes in bulk
When you have a bunch of hashes, it can be quite hard to clean up. Here's a one-liner:
docker volume rm $(docker volume ls | awk '{print $2}')
Edit: As @ArthurTacca pointed out in the comments, there's an easier-to-remember way:
docker volume rm $(docker volume ls -q)
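If you only want to drop volumes that no container references, a gentler sketch (the helper name is mine, not from the answer) uses the `dangling=true` filter:

```shell
# Hypothetical helper: remove only dangling (unreferenced) volumes,
# leaving volumes that are still used by containers untouched.
remove_dangling_volumes() {
    local dangling
    dangling="$(docker volume ls -qf dangling=true)"
    # Nothing to remove.
    [ -z "$dangling" ] && return 0
    for v in $dangling; do
        docker volume rm "$v"
    done
}
```

On recent Docker versions, `docker volume prune` does the same job in one command.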
How to get details about a named volume
Now that you do not have to look up hashes anymore, you can simply call volumes by their … name:
docker volume inspect <volume_name>
# Example:
$ docker volume inspect projectname_appconf
[
    {
        "Name": "projectname_appconf",
        "Driver": "local",
        "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/projectname_appconf/_data"
    }
]
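Since `docker volume inspect` supports Go-template output via `-f`, a small sketch (the function name is hypothetical) can pull out just the mountpoint instead of the whole JSON:

```shell
# Hypothetical helper: print only the Mountpoint of a named volume,
# using the Go-template formatting that `docker volume inspect -f` supports.
volume_mountpoint() {
    docker volume inspect -f '{{ .Mountpoint }}' "$1"
}
```

For example, `volume_mountpoint projectname_appconf` prints only the `.../projectname_appconf/_data` path.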
Sidenote: You might want to docker-compose down
your services to get a fresh start before creating the volumes.
In case you are using Boot2Docker/Docker Machine, you will have to docker-machine ssh
and sudo -i
before doing an ls -la /mnt/…
of that volume – your host machine is the VM provisioned by Docker Machine.
EDIT: Another related answer about named volumes on SO.
docker-compose ps -q <service_name>
will display the container ID whether it is running or not, as long as it has been created.
docker ps
shows only those that are actually running.
Let's combine these two commands:
if [ -z "$(docker ps -q --no-trunc | grep "$(docker-compose ps -q <service_name>)")" ]; then
    echo "No, it's not running."
else
    echo "Yes, it's running."
fi
docker ps
shows a short version of the IDs by default, so we need to specify the --no-trunc
flag.
UPDATE: The command above threw a "grep usage" warning if the service was not running. Thanks to @Dzhuneyt, here's the updated answer.
if [ -z "$(docker-compose ps -q <service_name>)" ] || [ -z "$(docker ps -q --no-trunc | grep "$(docker-compose ps -q <service_name>)")" ]; then
    echo "No, it's not running."
else
    echo "Yes, it's running."
fi
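The check above can be wrapped into a reusable function (the name `is_running` is my own, not part of the answer); it exits 0 only when the service's container exists and is running:

```shell
# Hypothetical wrapper around the check above: succeeds (exit 0) only
# if the compose service's container exists AND is currently running.
is_running() {
    local service="$1" cid
    cid="$(docker-compose ps -q "$service")"
    # No container has been created for this service at all.
    [ -n "$cid" ] || return 1
    # Is that full-length ID among the currently running containers?
    docker ps -q --no-trunc | grep -q "$cid"
}
```

Usage: `if is_running app; then echo up; fi` — and because `docker-compose ps -q` is called only once per branch, the empty-ID "grep usage" warning cannot occur.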
Best Answer
When you use docker kill, this is the expected behavior as Docker does not restart the container: "If you manually stop a container, its restart policy is ignored until the Docker daemon restarts or the container is manually restarted. This is another attempt to prevent a restart loop" (reference)
If you use docker stop or docker kill, you're manually stopping the container. You can do some tests about restart policies: restarting the docker daemon, rebooting your server, using a CMD inside a container and running an exit...
For example, if I kill a container deployed with a restart policy, it exits with code 137 but is not restarted: according to docker ps -a, it remains exited.
But if I restart the daemon...
The container that was set with a restart policy starts again, which is what the documentation says. So docker kill is not the way to test a restart policy: Docker assumes you have deliberately stopped the container and wants to prevent restart loops. If you kill it, you really want it killed.
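That experiment can be sketched as follows (the container name `rp-test`, the `nginx` image, and the helper name are all placeholders of mine):

```shell
# Hypothetical demo of the documented behavior: after `docker kill`,
# a container with --restart=always stays exited until the daemon restarts.
demo_restart_policy() {
    docker run -d --name rp-test --restart=always nginx
    docker kill rp-test
    # Despite --restart=always, the status below still reads "Exited (137) ...".
    docker ps -a --filter name=rp-test --format '{{.Status}}'
}
```

After restarting the daemon (e.g. `sudo systemctl restart docker`) or rebooting, `docker ps` shows the same container running again.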
I found the following links valuable that show the same behavior in different versions (so it's not a bug but the expected behavior):