Nginx allows you to specify whether to use proxy_protocol in incoming or outgoing requests, and you're confusing the two.
To use proxy_protocol in incoming connections, you have to add proxy_protocol to the listen directive, like this:
listen 443 ssl proxy_protocol;
To use proxy_protocol in outgoing connections, you have to use the standalone proxy_protocol directive, like this:
proxy_protocol on;
They are not the same. On a load balancer, incoming connections come from browsers, which do not speak the proxy protocol. You want the proxy protocol only on your outgoing connections, to the nginx-ingress in your Kubernetes cluster.
Therefore, remove the proxy_protocol argument from the listen directive, and it should work.
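A minimal sketch of the corrected load-balancer config, assuming a stream (TCP) proxy in front of two ingress nodes (the upstream name and addresses are placeholders; if you terminate TLS on the load balancer instead, keep ssl and the certificate directives on the listen line):

stream {
    upstream nginx_ingress {
        server 10.0.0.11:443;   # placeholder addresses of your nginx-ingress nodes
        server 10.0.0.12:443;
    }
    server {
        listen 443;             # no proxy_protocol here: browsers don't speak it
        proxy_pass nginx_ingress;
        proxy_protocol on;      # send the PROXY header on the outgoing connection
    }
}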
Additionally, you want use-forwarded-headers: "false" in your nginx-ingress config. That setting controls whether to trust the X-Forwarded-For & co. headers on incoming connections (incoming from the point of view of the nginx-ingress, i.e. outgoing from your load balancer), and you're using the proxy protocol instead of those headers. With it enabled, your users may be able to spoof IPs by setting X-Forwarded-For themselves, which can be a security issue (though only if nginx-ingress gives the headers priority over the proxy protocol, which I'm not sure about).
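For reference, this is roughly what the relevant part of the ingress-nginx ConfigMap could look like (the ConfigMap name and namespace depend on how you installed it; these are the upstream defaults):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"       # parse the PROXY header sent by the load balancer
  use-forwarded-headers: "false"   # ignore client-supplied X-Forwarded-* headers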
An aside: nginx-ingress itself already load-balances traffic between all pods. With your architecture, you're running two "layers" of load balancers, which is probably unnecessary. If you want to simplify, force nginx-ingress to run on a single node (with a nodeSelector, for example) and simply send all your traffic to that node. If you want to keep the load balancer on a dedicated machine, you can join the 4th machine to the cluster and make sure it runs only nginx-ingress (with taints and tolerations).
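One way to sketch that (the node name, label, and taint key below are made-up placeholders, not from your setup):

kubectl label node lb-node role=ingress
kubectl taint node lb-node dedicated=ingress:NoSchedule

and in the controller's pod template:

spec:
  nodeSelector:
    role: ingress
  tolerations:
    - key: dedicated
      operator: Equal
      value: ingress
      effect: NoSchedule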
Also, make sure you're running nginx-ingress with hostNetwork: true; otherwise you may have yet another layer of balancing in the path (kube-proxy, the Kubernetes service proxy).
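In the controller's pod template that is a single field (a sketch; dnsPolicy usually has to change along with it):

spec:
  hostNetwork: true                    # share the node's network namespace
  dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS working with hostNetwork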
On host volatile you appear to have Cilium configured in /etc/cni/net.d/*.conf. Cilium is a networking plugin, one of many available for Kubernetes. One of those files probably contains something like:
{
  "name": "cilium",
  "type": "cilium-cni"
}
If this is accidental, remove that file. You appear to already be running a competing networking plugin, Project Calico, which should be sufficient. So, re-create the calico-kube-controllers pod in the kube-system namespace, let it come up successfully, then re-create the other pods.
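A sketch of that cleanup, assuming the stray file follows the usual Cilium naming and that the pod carries the standard k8s-app label (check both before running):

ls /etc/cni/net.d/                  # identify the Cilium entry, e.g. 05-cilium.conf
rm /etc/cni/net.d/05-cilium.conf    # the filename is an assumption; use the one you found
kubectl -n kube-system delete pod -l k8s-app=calico-kube-controllers
# the Deployment re-creates the pod; wait for it to become Ready before recycling the others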
If you intend to use Cilium on that host, go back to the Cilium installation guide. If you redo it, you'll probably see that /var/run/cilium/cilium.sock has been created for you.
Using a self-managed Kubernetes cluster comes with both pros and cons.
As a pro, on a self-managed Kubernetes cluster you have control over the management layer. Fully managed Kubernetes services in the cloud don't let you configure the cluster master, because that component is handled by the managed service. When you deploy your own cluster using kubeadm, kubespray, or even doing it the hard way, you have full access to the cluster master and all the other related management components.
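For example, with kubeadm you can feed extra flags straight to the API server at init time, something a managed control plane won't let you do (a sketch; the audit-log flag is just an illustration):

apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    audit-log-path: /var/log/kubernetes/audit.log

applied with kubeadm init --config cluster-config.yaml.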
This also adds configuration flexibility: you can configure the cluster and the nodes the way you want, instead of wrestling with the configuration options supported by a managed service.
You also have more control over the deployment and management of your cluster. For example, you can deploy multiple node pools or choose to have different instance types for different nodes. These options aren’t available with many managed Kubernetes services.
On the other hand, deploying and maintaining a self-managed cluster is time-consuming and requires deeper knowledge from the maintainer. Cloud providers have dedicated teams taking care of these solutions, which in general makes the managed offerings more reliable.
There are more pros and cons described in this article. It's from NetApp, but much of the post is not specific to their products and is worth reading.