Nginx – Kubernetes load balancer with sticky sessions always sends traffic to one pod

google-cloud-platform, google-kubernetes-engine, load-balancing, nginx

I have a problem with my load balancer setup: it always sends most traffic (around 99%) to one pod. The infrastructure is as shown in this diagram. The objective is to have sticky sessions enabled, whether on nginx or on the Google load balancer, while still distributing traffic evenly across the available pods.

Briefly, I have 2 ReplicationControllers and 2 Services in my cluster: one nginx pod served behind a Google load balancer (nginx-lb), and another load balancer (app-lb) that balances traffic across 2 app pods. Here is my thinking behind the config:

  • nginx-lb: I set nginx-lb to sessionAffinity: None and externalTrafficPolicy: Local because I figure I don't need sticky sessions at this layer, but I do need to pass through the user's IP. All incoming traffic is treated the same here; externalTrafficPolicy: Local is what preserves the user's source IP (both Service manifests are sketched after this list).

  • nginx: nginx itself has ngx_http_realip_module enabled to keep the user's IP forwarded, but I did not use ip_hash, as I still don't think I need sticky sessions at this layer. Just like nginx-lb, it passes all incoming traffic through while preserving the user's IP; nginx here mainly handles proxying and SSL termination (the relevant config is sketched below as well).

  • app-lb: Then comes app-lb, where I enabled sessionAffinity: ClientIP for sticky sessions and externalTrafficPolicy: Cluster for load balancing. I believe this is where the actual balancing by ClientIP happens, as this is the only Service that knows about the 2 app pods behind it.
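
For illustration, here is roughly how the two Services are set up. The names match the diagram, but the selectors and ports below are simplified placeholders, not my real manifests:

```yaml
# nginx-lb: the public-facing Service in front of the single nginx pod
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer            # provisions the Google load balancer
  sessionAffinity: None         # no stickiness at this layer
  externalTrafficPolicy: Local  # keeps the client source IP intact
  selector:
    app: nginx                  # placeholder selector
  ports:
  - port: 443
    targetPort: 443             # placeholder ports
---
# app-lb: the internal Service balancing across the 2 app pods
apiVersion: v1
kind: Service
metadata:
  name: app-lb
spec:
  type: ClusterIP               # assumption: app-lb is only reached from nginx
  sessionAffinity: ClientIP     # sticky sessions keyed on the connection's source IP
  # externalTrafficPolicy only applies to NodePort/LoadBalancer Services,
  # and "Cluster" is the default there anyway, so it is omitted here.
  selector:
    app: app                    # placeholder selector
  ports:
  - port: 80
    targetPort: 8080            # placeholder ports
```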
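
And the relevant part of the nginx config, again with placeholders for the certificate paths and the trusted proxy range:

```nginx
events {}

http {
    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/tls.crt;  # placeholder cert paths
        ssl_certificate_key /etc/nginx/tls.key;

        # ngx_http_realip_module: replace the client address nginx sees
        # with the one from the forwarded header, for trusted sources only
        set_real_ip_from 130.211.0.0/22;         # placeholder trusted range
        real_ip_header X-Forwarded-For;
        real_ip_recursive on;

        location / {
            proxy_pass http://app-lb;            # the internal app Service
            # note: no ip_hash / upstream stickiness here
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```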

I tested this configuration with roughly 50 users over a day, and traffic still goes almost entirely to one pod, while the other sits nearly idle with low CPU and memory usage compared to the first.

With this setup, am I on the right track for what I want to achieve? Is there a configuration I am missing? Any input will be highly appreciated.

PS: I rewrote the whole question to add facts from what I have since understood; it is still essentially the original question in different wording.

Best Answer

This happens because you are using sessionAffinity: ClientIP. That affinity is configured on the Service and is IP-based, and the Service only sees the source IP of the connection it receives. Here that is your single nginx proxy rather than the end user (the real client IP lives only in HTTP headers, which kube-proxy never inspects), so every request looks like the same client and lands on the same pod. Set sessionAffinity: None, and if you want sticky sessions, use the nginx ingress controller instead.
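
For example, with the nginx ingress controller, cookie-based stickiness is enabled through annotations, roughly like this (host, names, and ports are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    # the controller sets a cookie on the first response and routes
    # subsequent requests carrying that cookie to the same pod
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com       # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-lb        # keep sessionAffinity: None on this Service
            port:
              number: 80
```

A cookie survives the proxy hop, unlike the connection's source IP, so stickiness stays per user even when all requests arrive from the same upstream proxy.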
