Kubernetes – Modify Kubelet and Control-Plane Configuration with Kubeadm


I've installed a kubernetes (v1.20.0) cluster with 3 masters and 3 nodes using kubeadm init and kubeadm join, all on Ubuntu 20.04. Now I need to update the configuration and

  • Add --cloud-provider=external kubelet startup flag on all nodes as I'm going to use vsphere-csi-driver
  • Change the --service-cidr due to network requirements

However, I'm not entirely sure of the proper way to make these changes.

Kubelet

Looking at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, there is a reference to /etc/default/kubelet, but the file comment describes it as a last resort and recommends updating .NodeRegistration.KubeletExtraArgs instead:

...
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
...

Where is this .NodeRegistration.KubeletExtraArgs and how do I change it for all nodes in the cluster?

Control plane

From what I understand, the apiserver and controller-manager run as static pods on each master and read their configuration from /etc/kubernetes/manifests/kube-<type>.yaml. My first thought was to make the necessary changes to these files, however according to the Kubernetes docs on upgrading a kubeadm cluster, kubeadm will:

  • Fetches the kubeadm ClusterConfiguration from the cluster.
  • Optionally backs up the kube-apiserver certificate.
  • Upgrades the static Pod manifests for the control plane components.

Because I've changed the manifests manually, the changes are not reflected in the ClusterConfiguration (kubectl -n kube-system get cm kubeadm-config -o yaml), so would my changes survive an upgrade this way? I suppose I could also edit the ClusterConfiguration manually with kubectl edit cm ... but this seems error prone, and it's easy to forget to change it every time.

According to the docs there is a way to customize control-plane configuration, but that seems to apply only when installing the cluster for the first time. For example, kubeadm config print init-defaults, as the name suggests, only gives me the default values, not what's currently running in the cluster.
Attempting to extract the ClusterConfiguration from kubectl -n kube-system get cm kubeadm-config -o yaml and run kubeadm init --config <config> fails in all kinds of ways because the cluster is already initialized.

Kubeadm can run init phase control-plane which updates the static pod manifests but leaves the ClusterConfiguration untouched, so I would need to run the upload-config phase as well.

Based on the above, the workflow seems to be (sketched in commands after the list):

  • Extract the ClusterConfiguration from kubectl -n kube-system get cm kubeadm-config and save it to a yaml file
  • Modify the yaml file with whatever changes you need
  • Apply changes on the first master with kubeadm init phase control-plane all --config <yaml>
  • Upload the modified config with kubeadm init phase upload-config all --config <yaml>
  • Distribute the modified yaml file to the remaining masters
  • For each of those masters, apply with kubeadm init phase control-plane all --config <yaml>
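
A minimal sketch of that workflow in commands, assuming the v1.20 layout of the kubeadm-config ConfigMap (clusterconfig.yaml is just a placeholder filename):

# On the first master: extract the current ClusterConfiguration
kubectl -n kube-system get cm kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' > clusterconfig.yaml

# Edit clusterconfig.yaml as needed, then regenerate the static pod manifests
sudo kubeadm init phase control-plane all --config clusterconfig.yaml

# Upload the modified configuration back to the kubeadm-config ConfigMap
sudo kubeadm init phase upload-config all --config clusterconfig.yaml

# Copy clusterconfig.yaml to the remaining masters and re-run the
# control-plane phase on each of them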

What I'm concerned about here is the apparent disconnect between the static pod manifests and the ClusterConfiguration. Changes aren't made particularly often, so it's quite easy to forget that a change in one place also requires a manual change in the other.

Is there no way of updating the kubelet and control-plane settings that ensures consistency between the Kubernetes components and kubeadm? I'm still quite new to Kubernetes, and there is a lot of documentation around it, so I'm sorry if I've missed something obvious here.

Best Answer

I will try to address both of your questions.


1. Add --cloud-provider=external kubelet startup flag on all nodes

Where is this .NodeRegistration.KubeletExtraArgs and how do I change it for all nodes in the cluster?

KubeletExtraArgs are any arguments and parameters supported by kubelet. They are documented here. You need to use the kubelet command with the proper flags in order to modify it. Also, notice that the flag you are about to use is going to be removed in k8s v1.23:

--cloud-provider string The provider for cloud services. Set to empty string for running with no cloud provider. If set, the cloud provider determines the name of the node (consult cloud provider documentation to determine if and how the hostname is used). (DEPRECATED: will be removed in 1.23, in favor of removing cloud provider code from Kubelet.)

EDIT:

To better address your question regarding .NodeRegistration.KubeletExtraArgs:

These are also elements of the kubeadm init configuration file:

It's possible to configure kubeadm init with a configuration file instead of command line flags, and some more advanced features may only be available as configuration file options. This file is passed using the --config flag and it must contain a ClusterConfiguration structure and optionally more structures separated by ---\n. Mixing --config with other flags may not be allowed in some cases.

You can also find more details regarding the NodeRegistrationOptions as well as more information on the fields and usage of the configuration.
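
For illustration, a kubeadm configuration file that sets the flag through .nodeRegistration.kubeletExtraArgs could look like the sketch below (written against the v1beta2 API used by kubeadm v1.20; as noted further down, it only takes effect when passed to kubeadm init or kubeadm join, not on an already joined node):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external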

Also, note that:

KubeletExtraArgs passes through extra arguments to the kubelet. The arguments here are passed to the kubelet command line via the environment file kubeadm writes at runtime for the kubelet to source. This overrides the generic base-level configuration in the kubelet-config-1.X ConfigMap. Flags have higher priority when parsing. These values are local and specific to the node kubeadm is executing on.

EDIT2:

kubeadm init is supposed to be used only once, when creating a cluster, whether you use it with flags or with a config file. You cannot change the configs by executing it again with different values. Here you will find info regarding kubeadm and its usage. Once the cluster is set up, kubeadm should be dropped and changes made directly to the static pod manifests.
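
For nodes that already exist, the /etc/default/kubelet environment file quoted in the question is the practical way in, even though it is labelled a last resort. A sketch:

# /etc/default/kubelet (sourced by the kubeadm systemd drop-in)
KUBELET_EXTRA_ARGS="--cloud-provider=external"

Afterwards, restart the kubelet on each node with sudo systemctl restart kubelet and check that the flag shows up in ps aux | grep kubelet.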


2. Change the --service-cidr due to network requirements

This is more complicated. You could try to do it along the lines described here or here, but that approach is prone to mistakes and rather not recommended.

The more feasible and safer way would be to simply recreate the cluster with kubeadm reset followed by kubeadm init --service-cidr <new-cidr>. Automatically changing the CIDRs of a running cluster was never really anticipated on the Kubernetes side. So in short, kubeadm reset is the way to go here.
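
A rough sketch of that teardown and re-initialization (the CIDR value is only an example; adjust it to your network requirements):

# On every node: wipe the kubeadm-installed state (this destroys the cluster)
sudo kubeadm reset

# On the first master: re-initialize with the new service CIDR
sudo kubeadm init --service-cidr 10.112.0.0/16

# Re-join the remaining masters and workers with kubeadm join as before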
