Route Google Cloud VPN traffic in a VPC to public internet or internal IPs

google-cloud-platform

We are setting up a connection with a company where they connect their network (A) to our Google Cloud VPC (B), which contains a Kubernetes cluster, using VPN. They will route all traffic to our network through this tunnel.

In order to test this, we have connected two Google VPC networks (one mimicking (A), one (B)) using Google Cloud VPN. The connection works and VMs can ping each other using internal IPs. A's BGP session advertises 0.0.0.0/0 and B advertises its subnet. Both have firewall rules that allow ingress traffic from each other's subnets.

The on-premise network uses a public DNS server, so our service names resolve to public IPs.

I am looking for a way to do two things on our side:

  1. Route the public IP of our cloud services to a local (internal load balancer) IP, so that A can access our k8s cluster in B using the public IP.
  2. Route traffic with destinations on the public internet to the internet, so network A can access the internet through the VPN tunnel.

I have looked into multiple Google services (NAT gateways, internal load balancers, routers, …) but (due to lack of experience) can't find a fitting solution for these issues. Is this even possible with Google Cloud native solutions?

Best Answer

After some contact with Google support, they provided me with a working solution. It is in fact not possible using only Google Cloud native tools, but one can make it work.

So the first part is the connection to the internet through the VPN-tunnel:

One needs to set up a custom NAT gateway in the project responsible for internet traffic. Instructions on this are provided by Google here; it can even be made highly available, and there is a Terraform module and plenty of examples.

After doing that you need to add some routes:

  1. Add a route that sends all traffic destined for 0.0.0.0/0 to the NAT-gateway VM(s).

  2. Add a route with a lower priority value (i.e. higher precedence) that applies only to the NAT-gateway VM (use instance tags) and has the default internet gateway as its next hop (see the gcloud sketch after this list).
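
As a rough illustration, here is what these two routes could look like with the gcloud CLI. All names (my-vpc, nat-gateway-1, the zone, the nat-gateway tag and the priority values) are placeholders you would adapt to your own setup; remember that in Google Cloud a lower priority value means the route takes precedence.

# 1. Send all traffic for 0.0.0.0/0 to the NAT-gateway VM (placeholder names).
gcloud compute routes create route-through-nat \
    --network=my-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-instance=nat-gateway-1 \
    --next-hop-instance-zone=europe-west1-b \
    --priority=800

# 2. Route that applies only to the tagged NAT-gateway VM itself and sends its
#    traffic to the default internet gateway; the lower priority value makes it
#    win over route 1 on that VM and avoids a routing loop.
gcloud compute routes create nat-to-internet \
    --network=my-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-gateway=default-internet-gateway \
    --tags=nat-gateway \
    --priority=700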

If you are advertising 0.0.0.0/0 to the on-premise network, it will now send all of its traffic through your cloud network to the internet. I want to mention here that it is then no longer possible to connect to the on-premise network directly, because all traffic goes through the NAT gateway (this can be bypassed by adding a route towards the on-premise network that targets the originating IP).
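For completeness, advertising 0.0.0.0/0 from the cloud side can be done as a custom advertisement on the Cloud Router BGP peer. This is only a sketch; the router name, peer name and region are placeholders for your own setup:

gcloud compute routers update-bgp-peer my-cloud-router \
    --peer-name=on-prem-peer \
    --region=europe-west1 \
    --advertisement-mode=CUSTOM \
    --set-advertisement-ranges=0.0.0.0/0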

The second part of this is redirecting traffic that is destined for an external IP to an internal IP (e.g. the Kubernetes cluster in my question above):

First of all you need to add an internal load balancer to your cluster.
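On GKE this usually comes down to a Service of type LoadBalancer with the internal load balancer annotation. A minimal sketch, applied via a heredoc; the service name, selector and ports are placeholders, and depending on your GKE version a different annotation may be required:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal
  annotations:
    # Ask GCP for an internal (RFC 1918) load balancer instead of an external one.
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF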

As all traffic is routed through the NAT-gateway VMs, you can manually add a PREROUTING rule to iptables on those VMs:

sudo iptables -t nat -A PREROUTING -d $EXTERNAL_LOAD_BALANCER_IP -j DNAT --to-destination $INTERNAL_LOAD_BALANCER_IP

If you set $EXTERNAL_LOAD_BALANCER_IP to your cluster's external load balancer IP and $INTERNAL_LOAD_BALANCER_IP to your internal load balancer IP, all traffic that should target the external load balancer will now land at your internal load balancer. This also means there is no need to add a custom DNS.
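To sanity-check this on the NAT-gateway VM, you can list the nat table and confirm that IP forwarding is enabled (it should already be on a NAT gateway set up as above). Note that a manually added rule does not survive a reboot unless you persist it, for example in the VM's startup script:

sudo iptables -t nat -L PREROUTING -n --line-numbers
sysctl net.ipv4.ip_forward   # should print net.ipv4.ip_forward = 1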
