I would like to make an rsync backup of a Docker volume on a remote host.
This method should have the following advantages:
- rsync files with user and owner permissions that are not granted to the backup host
- does not require additional port(s)
- rsync any kind of Docker volume
This should let me rsync any volume without needing root on either the source or destination machine. Although it is a bit involved, I think I'm close to a robust solution that will be very easy to use.
The idea is to simply use rsync over ssh but instead of letting rsync invoke ssh directly, wrap the ssh command in a script that will:
- ssh into the destination host (just like rsync does)
- start a small alpine container mounting the volume with rsync source data
- pass rsync's command into that new container
The goal is to keep the stdin and stdout streams intact so the [sender] and [receiver] rsync processes can talk.
This is what I have. I think I'm failing to keep stdin and stdout intact.
# prepare the remote host: bake rsync into a reusable image
docker run -it alpine apk add rsync
docker commit "$(docker ps -q -l)" alpine-rsync
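As an aside, the run/commit pair above could be replaced by a small Dockerfile, which makes the helper image reproducible (a sketch; the tag alpine-rsync matches the commands used throughout):

```
FROM alpine
RUN apk add --no-cache rsync
```

Built with docker build -t alpine-rsync . this is functionally equivalent to committing the modified container.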
On the local machine (or backup server), save the following as ./ssh-rdocker:
#!/bin/sh
DOCKER_VOLUME=${1:?docker volume name required}; shift 1
# example rsync parameters
# -l debian ovh rsync --server --sender -nlogDtpre.iLsfxC . /data
if [ "$1" != "-l" ]
then
echo "Call this script from rsync" >&2
exit 1
fi
shift 1; # -l
user=$1; shift 1
host=$1; shift 1
# Run rsync's command in a remote docker container
set -o xtrace # only for debugging
ssh -l "$user" "$host" sh - << SH
set -o xtrace # only for debugging
docker run -i --rm -v $DOCKER_VOLUME:/data:ro alpine-rsync sh - << RS
set -o xtrace # only for debugging
eval "$@"
RS
SH
And the dry run. Even when the set -o xtrace lines are removed, the results are the same; I'll leave them in for documentation purposes.
$ rsync -a --dry-run -e './ssh-rdocker synapse_files' debian@ovh:/data ./dest
+ ssh -l debian ovh sh -
+ docker run -i --rm -v synapse_files:/data:ro alpine-rsync sh -
+ eval 'rsync --server --sender -nlogDtpre.iLsfxC . /data'
+ rsync --server --sender -nlogDtpre.iLsfxC . /data
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.3]
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [Receiver=3.1.2]
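I suspect the failure can be reproduced without ssh or docker at all: when the remote shell's script arrives on stdin via a heredoc, that heredoc replaces the rsync data stream, so the inner process reads EOF immediately. A minimal local demonstration of the effect (my own sketch; cat stands in for the remote rsync):

```shell
# The heredoc becomes sh's stdin, overriding the pipe, so 'outer data'
# never reaches the inner command -- it reads the (empty) rest of the
# heredoc instead. This mirrors `ssh ... sh - << SH` in the wrapper.
printf 'outer data\n' | sh - <<'EOF'
cat    # stands in for the remote rsync; prints nothing
EOF
```

The pipe's data is lost because the heredoc redirection wins, which is exactly what happens to rsync's protocol stream.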
This is an alternative way to wrap everything:
# [...] same as ./ssh-rdocker from above, until:
# Run rsync's command in a remote docker container
ssh -l $user $host "exec \$SHELL -c \"docker run --rm -v $DOCKER_VOLUME:/data:ro alpine-rsync sh -c '$@' \" "
Which appears to suffer from the same issue:
$ rsync -a --dry-run -e './ssh-rdocker synapse_files' debian@ovh:/data ./dest
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.3]
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [Receiver=3.1.2]
What do you think, am I losing the link between stdin/stdout somewhere in the process chain? I must be getting stderr back, because I hear from both rsync processes.
Once this is fixed, the final step is to wrap the local side in an alpine container too so that the files may be received and written to a local volume with any owner or group permission without error (omitted for brevity).
Best Answer
Only one case will work: the single-line docker run command just needed -i (interactive). Unfortunately, it appears that the example with complex escaping is necessary.
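In other words, the missing piece in the single-line variant was the -i flag. A sketch of the corrected line (same variables as the wrapper script above, not verbatim from the answer), followed by a docker-free illustration of why -i matters:

```shell
# Corrected single-line variant (sketch): -i keeps the container's
# stdin attached to the ssh stream:
#   ssh -l "$user" "$host" "exec \$SHELL -c \"docker run -i --rm \
#     -v $DOCKER_VOLUME:/data:ro alpine-rsync sh -c '$@' \" "
#
# Why -i matters, without docker: a child whose stdin is detached
# (redirected from /dev/null, like docker run without -i) loses the
# piped data; with stdin left attached (like -i) the stream flows.
printf 'payload\n' | cat </dev/null   # prints nothing: stdin detached
printf 'payload\n' | cat              # prints payload: stdin attached
```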