Networking Between KVM VM and Docker Container – How to Set Up

debian-stretch · docker · kvm-virtualization · libvirt · linux-networking

On a Debian Stretch host (connected to the physical LAN) I have a fresh docker installation (v18.09) with one database container (its port mapped to the host), and I also run KVM/libvirt with some Debian Stretch VMs. I can access the docker container and the VMs from the LAN (depending on the configuration, through an SSH tunnel or directly), but I am struggling to access the docker container from the VMs.

# brctl show
bridge name         bridge id           STP enabled interfaces
br-f9f3ccd64037     8000.0242b3ebe3a0   no      
docker0             8000.024241f39b89   no      veth35454ac
virbr0              8000.525400566522   yes     virbr0-nic

After reading for days, I found one very compelling solution in the post Docker and KVM with a bridge (original), but I did not get it to work. That solution suggests starting the docker daemon with a one-line daemon.json config that makes it use the KVM "default" bridge. How nice would that be! Is there any hope?

I tried two different configurations for networking between the KVM VMs. In both cases communication between the VMs and out to the LAN, router and cloud is flawless, but I just don't know how to get over the fence to the greener grass… 🙂

Conf 1 – KVM default bridge with NAT: I can SSH into the Debian host and reach the docker container's mapped port, but is there a setup with a direct route?
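
For reference, the indirect path in Conf 1 is a plain SSH local port forward; a minimal sketch, assuming the container publishes MySQL on host port 3306 and the host is reachable as kvmhost.lan (hostname, user name and port are placeholders):

# Forward a local port to the port the container publishes on the Debian host
ssh -N -L 3306:127.0.0.1:3306 user@kvmhost.lan

# In a second shell, talk to the container through the tunnel
mysql -h 127.0.0.1 -P 3306 -u dbuser -p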

Conf 2 – macvtap adapter in bridge mode to the LAN: I cannot ping the host's LAN IP from the VM, although both are connected to the same router. The response on the VM is Destination Host Unreachable. Any thought why?
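
For context, this is a quick way to see what macvtap has set up and what actually answers; a sketch assuming the VM is named vm1 and using example LAN addresses (192.168.1.1 for the router, 192.168.1.10 for the Debian host):

# On the host: show the macvtap (type='direct') interface of the guest
virsh dumpxml vm1 | grep -A3 "interface type='direct'"

# Inside the VM: the router answers, the host's own LAN IP does not
ping -c3 192.168.1.1     # router: replies
ping -c3 192.168.1.10    # Debian host: Destination Host Unreachable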

Would it be better to run the docker daemon in a separate VM rather than directly on the Debian host? That way both the container and the VMs could sit on the KVM default bridge. But it seems kind of strange to run docker in a VM on a KVM host.

Any clear guidance would be appreciated!

Btw, the bridge br-f9f3ccd64037 is a user-defined bridge I created with docker for future inter-container communication. It is not used.

Update:

I just realized that with the first configuration I can simply connect to the docker container by its IP address (172.17.0.2) from the VM guests.
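
A quick check from one of the guests confirms it (a sketch; assumes the database listens on port 3306, adjust to your mapping):

# Run inside a VM on the NATed default network (Conf 1)
ping -c3 172.17.0.2        # container on docker0
nc -zv 172.17.0.2 3306     # database port answers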

My initial setup was the second configuration because I wanted to RDP into the VMs, which is easier since the macvtap driver connects the VMs directly to the LAN and no SSH link is needed. That's when I could not reach the container.

Best Answer

The solution was as simple as stated in the linked article. I am not sure why my configuration did not change the first time I restarted the docker daemon.

After I found evidence in the Docker daemon documentation for the bridge argument in daemon.json, I gave it another try and the docker daemon picked up the KVM default bridge on startup.

First I created the configuration file /etc/docker/daemon.json as suggested in the documentation with the following content (the iptables line may not even be needed):

{
  "bridge": "virbr0",
  "iptables": false
}
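
Since the daemon seemed to ignore the file on my first attempt, it is worth ruling out a JSON syntax error before restarting; a quick check, assuming python3 is available on the host:

# Prints the parsed JSON, or fails loudly on a syntax error
python3 -m json.tool /etc/docker/daemon.json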

Then all that was needed was to restart the daemon and the container:

docker stop mysql
systemctl stop docker
systemctl start docker
docker start mysql

And the existing docker container was running on the KVM bridge. The IP address of the container can be checked with:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql
192.168.122.2
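
From a VM on the same default network the container is now reachable directly, for example (client and credentials are placeholders):

# Run inside one of the KVM guests on virbr0 (192.168.122.0/24)
mysql -h 192.168.122.2 -u dbuser -p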

I am not sure if I can remove the docker0 bridge now (see the cleanup sketch after the output below), but the container is listed under virbr0 together with the three VMs.

brctl show
bridge name bridge id           STP enabled interfaces
docker0     8000.024241f39b89   no      
virbr0      8000.068ff2a4a56e   yes         veth2abcff1
                                            virbr0-nic
                                            vnet0
                                            vnet1
                                            vnet2
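
If docker0 really is unused now that the daemon is attached to virbr0, it should be safe to take it down by hand; a sketch (untested), and only after confirming that no veth interfaces hang off it. As far as I can tell docker should not recreate it while "bridge": "virbr0" stays in daemon.json.

# docker0 shows no attached ports in the output above, so:
ip link set docker0 down
brctl delbr docker0    # or: ip link delete docker0 type bridge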