Kubernetes – How to Set Up CRI-O with Kubeadm and Kubelet on Kubernetes 1.18.2


I am relatively new to Kubernetes, and although I am able to launch the master node (and join worker/master nodes) using the default socket (/var/run/dockershim.sock), I would like to use the CRI-O socket (unix:///var/run/crio/crio.sock) instead.

I have read every piece of documentation I could find, but none of it seems to work for me.

I am running Kubernetes on CentOS 7.

CRI-O:

# crio version
Version:       1.18.2
GitCommit:     754d46b53595cf2db74d2a73a685d573910b814e
GitTreeState:  clean
BuildDate:     2020-06-25T09:23:58Z
GoVersion:     go1.13.6
Compiler:      gc
Platform:      linux/amd64
Linkmode:      dynamic

Docker:

# docker version
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:46:54 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:45:28 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

I followed the official documentation (Container runtimes), but I also found the CRI-O repo on GitHub, which describes the configuration slightly differently.

I tried installing CRI-O both from source and from the RPM. Both times the result is the same:

Jun 25 13:31:19 hostname kubelet[23665]: I0625 13:31:19.700722   23665 server.go:417] Version: v1.18.2
Jun 25 13:31:19 hostname kubelet[23665]: I0625 13:31:19.701175   23665 plugins.go:100] No cloud provider specified.
Jun 25 13:31:19 hostname kubelet[23665]: I0625 13:31:19.701208   23665 server.go:837] Client rotation is on, will bootstrap in background
Jun 25 13:31:19 hostname kubelet[23665]: F0625 13:31:19.701323   23665 server.go:274] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jun 25 13:31:19 hostname systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Jun 25 13:31:19 hostname systemd[1]: Unit kubelet.service entered failed state.
Jun 25 13:31:19 hostname systemd[1]: kubelet.service failed.

From the little that I know, if I remember correctly, the file /etc/kubernetes/bootstrap-kubelet.conf is auto-generated when kubeadm runs.
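This can be checked directly on the host: both kubeconfigs referenced in the kubelet unit are written by kubeadm, not by the kubelet itself, so the kubelet is expected to crash-loop with exactly this error until kubeadm init (or kubeadm join) has completed. A minimal sketch, using the default kubeadm paths:

```shell
# bootstrap-kubelet.conf - temporary, used only for the TLS bootstrap
# kubelet.conf           - permanent, written once init/join completes
# If both are missing, kubeadm has simply not run (successfully) yet.
for f in /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/kubelet.conf; do
  if [ -e "$f" ]; then echo "present: $f"; else echo "missing: $f"; fi
done
```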

The configurations I have applied:

10-kubeadm.conf:

# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably,
# the user should use the .NodeRegistration.KubeletExtraArgs object in the configuration files instead.
# KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
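For reference, the dynamically generated file mentioned in the unit above (/var/lib/kubelet/kubeadm-flags.env) is where kubeadm records the runtime flags after a successful init/join. On a CRI-O node it would look roughly like this (illustrative contents, not from a real run):

```shell
# cat /var/lib/kubelet/kubeadm-flags.env   (illustrative)
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --pod-infra-container-image=my.private.repo/pause:3.2"
```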

01-log-level.conf:

# cat /etc/crio/crio.conf.d/01-log-level.conf
[crio.runtime]
log_level = "info"

01-cgroup-manager.conf:

# cat /etc/crio/crio.conf.d/01-cgroup-manager.conf
[crio.runtime]
cgroup_manager = "systemd"
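The cgroup driver has to match on both sides: CRI-O is set to systemd here, so the kubelet must also be started with --cgroup-driver=systemd (as in /etc/default/kubelet below), otherwise pods fail to start with cgroup errors. A quick sanity check, assuming the crio binary is installed (it may not reflect drop-ins on very old CRI-O versions):

```shell
# Print the effective cgroup manager CRI-O would use
crio config 2>/dev/null | grep cgroup_manager
```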

kubelet:

# cat /etc/default/kubelet
KUBELET_EXTRA_ARGS=--feature-gates="AllAlpha=false,RunAsGroup=true" --container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m
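Independently of kubeadm, the socket can also be exercised directly with crictl (from the cri-tools package; a hypothetical invocation assuming it is installed):

```shell
# Point crictl at the CRI-O socket and query the runtime
crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
crictl --runtime-endpoint unix:///var/run/crio/crio.sock info
```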

I can verify that the cri-o socket is working as I can pull the images from my repo:

# kubeadm config images pull --image-repository=my.private.repo --kubernetes-version=v1.18.2 --cri-socket unix:///var/run/crio/crio.sock
W0625 13:53:17.554897   29936 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled my.private.repo/kube-apiserver:v1.18.2
[config/images] Pulled my.private.repo/kube-controller-manager:v1.18.2
[config/images] Pulled my.private.repo/kube-scheduler:v1.18.2
[config/images] Pulled my.private.repo/kube-proxy:v1.18.2
[config/images] Pulled my.private.repo/pause:3.2
[config/images] Pulled my.private.repo/etcd:3.4.3-0
[config/images] Pulled my.private.repo/coredns:1.6.7

I have spent three days on this and I am not able to figure it out. Can someone with more experience provide more info?

Update: adding the init command (with the default Docker runtime this socket would be /var/run/dockershim.sock):

kubeadm init \
        --upload-certs \
        --cri-socket=unix:///var/run/crio/crio.sock \
        --node-name=master-prime \
        --image-repository=my.private.repo \
        --pod-network-cidr=10.96.0.0/16 \
        --kubernetes-version=v1.18.2 \
        --control-plane-endpoint=IP:PORT \
        --apiserver-cert-extra-sans=IP \
        --apiserver-advertise-address=IP
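For completeness, worker (and additional control-plane) nodes need the same --cri-socket flag on join; otherwise kubeadm may autodetect the Docker socket when both runtimes are installed. An illustrative command, with the token and hash as placeholders:

```shell
kubeadm join IP:PORT \
        --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --cri-socket=unix:///var/run/crio/crio.sock
```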

Best Answer

It has been some time since I asked this question, and I never answered it; I completely forgot.

The problem in my case was that I was launching the cluster in an offline (air-gapped) environment.

I managed to figure it out, and the CRI-O team asked me to document it in case others try to do the same thing.

The full configuration and steps can be found on the official GitHub page: Running kubeadm in an offline network.

Hope this helps someone else in the future.
