Broken GKE Backend Health Check Default

google-cloud-platform, google-kubernetes-engine, healthcheck, kubernetes

I have read this and I understand (I think) the differences between Kubernetes livenessProbes and GKE LoadBalancer health checks.

My problem is this: I am exposing most of my Kube services via NodePort, which by default creates a new Backend Service in GKE, along with a corresponding load-balancer health check for that HTTP Backend Service.

All of these automatically created health checks assume I have an HTTP status endpoint on /, which I do not. My HTTP health check is hosted on a different endpoint.
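For reference, a minimal sketch of the kind of Service I mean (the name and ports here are purely illustrative); when the GCE load-balancer integration picks up a Service like this, GKE generates a Backend Service plus a default HTTP health check aimed at /:

```yaml
# Hypothetical Service manifest; the name and ports are illustrative.
# When the GCE load-balancer integration references this Service,
# GKE creates a Backend Service and a default HTTP health check on /.
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: NodePort
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080   # container port that ultimately answers the health check
```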

How do I

A) Somehow hint to GKE to create load-balancer rules based on the rules defined in my Kubernetes resources, rather than blindly creating a bunch that listen on the wrong route, or

B) Get GKE to NOT automatically create an invalid health check for EVERY NodePort service?

Or is this just an inflexible Google Cloud quirk that I will have to make code changes to work around?

Best Answer

Discovered that the answer is

A. No

B. No

by reading the Kubernetes Ingress GitHub README.md:

"..currently we just rely on kubernetes service/pod liveness probes and force pods to have a / endpoint that responds with 200 for GCE."
