Switch-based connectivity between two ESXi hosts in vSphere vCenter with a VDS


I have two ESXi hosts in a Hetzner datacenter, and because of Hetzner's subnet allocation policies I have to run a router (a CentOS VM) on each ESXi host to route the subnet's IPs to the VMs.

Now I want to migrate the VMs from host A to host B. I have to transfer the VMs first and then ask the datacenter staff to change my subnet's route from host A to host B. This means the VMs become inaccessible during the migration: after they move to host B, their IP addresses are still routed to host A, and only once the transfer is complete and the staff change the route to host B do the VMs become reachable again.

I think I can solve the problem by creating layer 2 network connectivity between hosts A and B, so that after my VMs move to host B they can still reach the router on host A and don't lose network connectivity.

So I decided to use a "vSphere Distributed Switch" (VDS) to provide switch-based layer 2 connectivity between these ESXi hosts. I created a VDS, but the VMs on host A cannot see the VMs on host B, although all of the VMs are connected to the VDS. The VDS has no physical interfaces connected (maybe that is the problem), because I'm not sure how to move the physical interfaces from the standard vSwitches to the VDS without losing the connection to the host.

Network topology 1
Network topology 2

Best Answer

You don't get connectivity between VMs on different hosts just by connecting them to the same port group. There's no "magic" tunneling the traffic between the hosts over the hypervisor management port or anything like that.

If the VMs are on different hosts, your VDS needs an uplink both on the source and the destination host.

We've migrated both the physical uplink of a vSwitch carrying VMkernel ports and the VMkernel ports themselves to a VDS, and it generally works, IIRC. Maybe KB 1010614 can help you there.
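From the ESXi shell, the uplink part of such a migration can be sketched roughly as below. This is only a sketch under assumptions: the names `vSwitch0`, `vmnic0` and `dvSwitch`, and the uplink dvPort ID `16`, are placeholders — check your real values with the list commands first.

```shell
# Inspect the current layout first (sketch; names below are assumptions).
esxcfg-vswitch -l                  # shows vSwitches, the VDS, port groups, uplinks
esxcli network ip interface list   # shows VMkernel interfaces

# Unlink the physical NIC from the standard vSwitch...
esxcfg-vswitch -U vmnic0 vSwitch0

# ...and attach it to a free uplink dvPort of the distributed switch.
esxcfg-vswitch -P vmnic0 -V 16 dvSwitch
```

With only one physical NIC, the host loses its network between the unlink and the relink, so this is exactly the kind of step you want console (KVM) access for.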

However, you should try this with a host that's not in production. As far as I can see, your "ESXi B" would be ideal for testing this.

edit: If something goes wrong and you have access to your hosts via a KVM switch or similar, KB 1008127 might help you.

edit2: We generally have two uplinks for redundancy: we migrate one uplink from the vSwitch to the VDS, then the VMkernel interface, and finally the second uplink. With only one uplink it's tricky. Can you get a third IP address from Hetzner for a day or two? You could create a new VMkernel interface on the VDS with "Management traffic" enabled, assign your only uplink to the VDS, and change your DNS configuration. Once your vCenter resolves your ESXi host to the new IP address, it should be able to manage the host again. If you want to keep the original IP, you can then delete the old VMkernel interface, change the IP of the new VMkernel interface (the one on the VDS), and update your DNS again.
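A hedged sketch of that single-uplink variant from the ESXi shell, in case the vSphere client loses the host mid-way: `dvSwitch`, the dvPort IDs `8` and `16`, `vmk1`, `vmnic0` and the temporary third IP (`203.0.113.10/24` here) are all assumptions, not your real values.

```shell
# 1. Create a new VMkernel interface on a free dvPort of the VDS
#    and give it the temporary third IP (assumed values).
esxcli network ip interface add --interface-name=vmk1 --dvs-name=dvSwitch --dvport-id=8
esxcli network ip interface ipv4 set -i vmk1 -I 203.0.113.10 -N 255.255.255.0 -t static

# 2. Tag it for management traffic.
esxcli network ip interface tag add -i vmk1 -t Management

# 3. Move the only uplink over (management connectivity drops briefly here).
esxcfg-vswitch -U vmnic0 vSwitch0
esxcfg-vswitch -P vmnic0 -V 16 dvSwitch
```

After step 3, update DNS so vCenter resolves the host to the new IP; once it reconnects you can remove the old `vmk0` and re-address `vmk1` if you want the original IP back.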

However, I'm not sure this would solve your problem. Try creating a VM port group on the standard vSwitch that currently holds your VMkernel interface (the one with an uplink). Do that on both hosts and create a VM connected to that port group on each. If the two VMs can't communicate with each other, a VDS won't help you either.
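That quick layer-2 test could look like this from the ESXi shell; "TestPG" and `vSwitch0` are assumed names, and the VM IPs are whatever you assign inside the test VMs.

```shell
# On each host: add a VM port group (assumed name "TestPG") to the
# standard vSwitch that has the uplink.
esxcli network vswitch standard portgroup add --portgroup-name=TestPG --vswitch-name=vSwitch0

# Then connect one test VM per host to "TestPG", give both VMs addresses
# in the same subnet, and ping across, e.g. inside the VM on host A:
#   ping -c 3 <IP of the test VM on host B>
```

If that ping fails, the hosts' uplinks don't share a layer-2 segment, and moving the port groups to a VDS won't change that — the VDS is only a management construct, not a tunnel.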