Kubernetes Networking – Why a Pod Can’t Connect to a VM Outside the Cluster After Upgrading

google-cloud-platform, google-compute-engine, google-kubernetes-engine

I have two projects in GCP:

  1. Kubernetes nodes on v1.8.8-gke.0 and a database outside of Kubernetes but in the default network. All pods can connect to this server on all ports.
  2. Kubernetes nodes on v1.9.7-gke.3 and a database outside of Kubernetes, also in the default network. No pod can connect to this server; even a traceroute test from a pod fails (see the test below).
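
A quick way to reproduce the test from inside the cluster (a sketch: busybox is just a throwaway image, and 10.128.0.5 is a placeholder for the database VM's internal IP):

# Run traceroute toward the database VM from a temporary pod
kubectl run nettest --rm -it --image=busybox --restart=Never -- traceroute 10.128.0.5

On the v1.8.8 cluster this reaches the VM; on the v1.9.7 cluster it times out.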

Why can't the pods connect? Any ideas?

Thanks.

Best Answer

I reported this issue to Google here: https://issuetracker.google.com/issues/111986281

They confirmed that it is a deliberate change in Kubernetes 1.9:

Beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.
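
You can see the effect by listing the firewall rules on the cluster's network; on a 1.9.x cluster the auto-created gke-* rules no longer cover traffic from pods to VMs outside the cluster. A quick check (assuming the default network from the question):

# List all firewall rules attached to the default network
gcloud compute firewall-rules list --filter="network:default"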

The solution is described at the following link: https://cloud.google.com/kubernetes-engine/docs/troubleshooting#autofirewall

Basically:

First, find your cluster's network:

gcloud container clusters describe [CLUSTER_NAME] --format="get(network)"

Then get the cluster's IPv4 CIDR used for the containers:

gcloud container clusters describe [CLUSTER_NAME] --format=get"(clusterIpv4Cidr)"

Finally, create a firewall rule for the network, with the cluster CIDR as the source range, allowing all protocols:

gcloud compute firewall-rules create "[CLUSTER_NAME]-to-all-vms-on-network" --network="[NETWORK]" --source-ranges="[CLUSTER_IPV4_CIDR]" --allow=tcp,udp,icmp,esp,ah,sctp
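
Putting the three steps together as one sketch (the cluster name and zone here are placeholders; use --region instead of --zone for a regional cluster):

# Placeholders: substitute your own cluster name and zone
CLUSTER_NAME="my-cluster"
ZONE="us-central1-a"

# Look up the cluster's network and the pod IPv4 CIDR
NETWORK=$(gcloud container clusters describe "$CLUSTER_NAME" --zone="$ZONE" --format="get(network)")
CLUSTER_IPV4_CIDR=$(gcloud container clusters describe "$CLUSTER_NAME" --zone="$ZONE" --format="get(clusterIpv4Cidr)")

# Allow all protocols from the pod range to every VM on the network
gcloud compute firewall-rules create "$CLUSTER_NAME-to-all-vms-on-network" \
    --network="$NETWORK" \
    --source-ranges="$CLUSTER_IPV4_CIDR" \
    --allow=tcp,udp,icmp,esp,ah,sctp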
