How to SSH with kubectl to GKE Cluster Without External IPs via VPN

google-compute-engine google-kubernetes-engine

I can (for instance) connect to the cluster compute nodes like this:
gcloud compute ssh gke-test-deploy-default-pool-xxxxx --internal-ip

But if I try to set up my kubectl credentials like this:
gcloud container clusters get-credentials test-deploy --internal-ip
It complains:

ERROR: (gcloud.container.clusters.get-credentials) cluster test-deploy
is not a private cluster.

I am able to run non-SSH commands like kubectl get pods --all-namespaces, but if I run kubectl exec -it rabbitmq-podnumber -n backbone-testdeploy bash I get this error:

Error from server: error dialing backend: No SSH tunnels currently
open. Were the targets able to accept an ssh-key for user
"gke-xxxxxxx"

BTW, the whole point of this is to use Google Cloud NAT on my cluster so that all pods present a consistent external IP when connecting to an external service (Atlas), which uses an IP whitelist. I can see the NAT working for the compute instances, but I cannot connect to the pods to check them.
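For reference, this is roughly how I check which external IP the NAT is handing out (the router name, NAT name and region below are from my setup and would differ in yours):

gcloud compute routers nats list --router=nat-router --region=us-central1
gcloud compute routers nats describe my-nat --router=nat-router --region=us-central1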

Best Answer

The master node and the worker nodes live in different networks: the master sits in a Google-managed network, while the worker nodes are in your VPC. With a standard cluster, the master communicates with the nodes via external IPs. With a private cluster, the master and the worker nodes are connected via VPC network peering and communicate via internal IPs.

This causes problems when connecting to the master over other peered networks or VPN connections, because the peering routes to the master are not propagated across VPNs or transitively over other peerings.
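If you want to see which endpoints your cluster actually exposes, a describe call along these lines should print the public endpoint and, for a private cluster, the internal one as well (the zone is a placeholder for your cluster's location):

gcloud container clusters describe test-deploy --zone us-central1-a --format="value(endpoint, privateClusterConfig.privateEndpoint)"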

For your use case, one option is to disable the external master endpoint. Once this is done, when you run the get-credentials command your kube config will contain the internal master endpoint instead of the external one. You will then need to run kubectl against the master from within the VPC network (for example from a bastion host or a proxy).
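As a rough sketch, a private cluster with only an internal master endpoint is set up along these lines (private-cluster settings are generally chosen at creation time, and the zone and master CIDR below are placeholders), after which get-credentials --internal-ip works from a bastion inside the VPC:

gcloud container clusters create test-deploy --zone us-central1-a --enable-ip-alias --enable-private-nodes --enable-private-endpoint --master-ipv4-cidr 172.16.0.0/28
gcloud container clusters get-credentials test-deploy --zone us-central1-a --internal-ip
kubectl get pods --all-namespaces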

Instead, I recommend leaving the external endpoint active and running get-credentials without --internal-ip, so that your kube config uses the external endpoint and you can connect from anywhere. To make sure that your master stays secure, use Master Authorized Networks to define the external IPs or CIDRs you will be connecting from.
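For example, restricting master access to a single office range might look like this (the zone and CIDR are placeholders for your own values):

gcloud container clusters update test-deploy --zone us-central1-a --enable-master-authorized-networks --master-authorized-networks 203.0.113.0/29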

I am fairly certain the kubectl exec and logs commands are failing because of how you are getting the credentials.
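If that is the case, re-fetching the credentials against the external endpoint and retrying exec should be enough (the zone is a placeholder):

gcloud container clusters get-credentials test-deploy --zone us-central1-a
kubectl exec -it rabbitmq-podnumber -n backbone-testdeploy -- bash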

One last thing worth checking: GKE automatically creates firewall rules and routes (their names start with gke-), and these are required for the SSH tunnels from the master to the nodes to work properly.
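You can list them with a name filter along these lines to verify they are still in place:

gcloud compute firewall-rules list --filter="name~^gke-"
gcloud compute routes list --filter="name~^gke-"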