Kubernetes Storage – NFS Server on a Kubernetes Node
kubernetes, nfs, storage
We have an in-house Kubernetes cluster running on bare metal. Can I set up an NFS server on one of the nodes (either worker or master) in the cluster? If yes, do I need to change anything in the cluster?
Related Solutions
Make sure you have installed the nfs-utils (rpm-based distros) or nfs-common (deb-based distros) packages on all Kubernetes nodes.
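For example (assuming yum and apt-get as the package managers):
# rpm-based distros (e.g. CentOS/RHEL)
sudo yum install -y nfs-utils
# deb-based distros (e.g. Debian/Ubuntu)
sudo apt-get install -y nfs-common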
Following the GitHub issue mentioned in the comments and the IP address change on the Kubernetes master node:
1. Verify your etcd data directory by looking into the etcd pod in the kube-system namespace
(default values for k8s v1.17.0 created with kubeadm):
volumeMounts:
- mountPath: /var/lib/etcd
  name: etcd-data
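You can check this, for example, with the command below (the pod name etcd-master1 is an assumption; it follows the etcd-<node-name> pattern):
kubectl -n kube-system get pod etcd-master1 -o yaml | grep -A 2 volumeMounts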
2. Preparation:
- copy /etc/kubernetes/pki from Master1 to the new Master2:
# create a backup directory on Master2
mkdir ~/backup
# copy all key/crt files from Master1 into Master2
sudo scp -r /etc/kubernetes/pki master2@x.x.x.x:~/backup
- On Master2, remove the certs and keys that contain the old IP address (apiserver and etcd peer certs):
./etcd/peer.crt
./apiserver.crt
rm ~/backup/pki/{apiserver.*,etcd/peer.*}
- copy the pki directory to /etc/kubernetes:
cp -r ~/backup/pki /etc/kubernetes/
3. On Master1 create etcd snapshot:
Verify your API version:
kubectl exec -it etcd-master1 -n kube-system -- etcdctl version
etcdctl version: 3.4.3
API version: 3.4
- using current etcd pod:
kubectl exec -it etcd-master1 -n kube-system -- etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key snapshot save /var/lib/etcd/snapshot1.db
- or using etcdctl binaries:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key snapshot save /var/lib/etcd/snapshot1.db
4. Copy the created snapshot from Master1 to the Master2 backup directory:
scp ./snapshot1.db master2@x.x.x.x:~/backup
5. Prepare a kubeadm config (kubeadm-config.yaml) that reflects the Master1 cluster configuration, with advertiseAddress set to the Master2 IP:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: x.x.x.x
  bindPort: 6443
nodeRegistration:
  name: master2
  taints: [] # Removing all taints from Master2 node.
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.0.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
6. Restore snapshot:
- using the etcd:3.4.3-0 docker image (run it from the ~/backup directory on Master2, so the snapshot is visible under /backup inside the container):
docker run --rm \
-v $(pwd):/backup \
-v /var/lib/etcd:/var/lib/etcd \
--env ETCDCTL_API=3 \
k8s.gcr.io/etcd:3.4.3-0 \
/bin/sh -c "etcdctl snapshot restore './snapshot1.db' ; mv /default.etcd/member/ /var/lib/etcd/"
- or using etcdctl binaries:
ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 snapshot restore './snapshot1.db' ; mv ./default.etcd/member/ /var/lib/etcd/
7. Initialize Master2:
sudo kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd --config kubeadm-config.yaml
# kubeadm-config.yaml was prepared in step 5.
- notice:
[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 master2_IP]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master2 localhost] and IPs [master2_ip 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master2 localhost] and IPs [master2_ip 127.0.0.1 ::1]
.
.
.
Your Kubernetes control-plane has initialized successfully!
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Verify the k8s objects after initialization (short example):
kubectl get nodes
kubectl get pods -o wide
kubectl get pods -n kube-system -o wide
systemctl status kubelet
- If all deployed k8s objects (pods, deployments, etc.) were moved to your new Master2 node, you can drain and delete Master1:
kubectl drain Master1
kubectl delete node Master1
Note:
In addition, please consider Creating Highly Available clusters. In that setup you can have more than one control plane node, and you can add or remove additional control plane nodes in a safer way.
Related Topic
- Pods stuck with containerCreating status in self-managed Kubernetes cluster in Google Compute Engine (GCE) with an external kube node
- Kubernetes – Fix NFS Volumes Not Mounting After NFS Server Update and Reboot
- Kubernetes Bare Metal – Access Services from Outside Local Network
- Kubernetes Pod OutOfMemory – Immediate Failure After Scheduling
Best Answer
You can set up a pod that will act as an NFS server. There is a ready-made image on Docker Hub: cpuguy83/nfs-server.
To use it you need to create a service to expose the NFS server to pods inside the cluster:
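A minimal sketch of such a service (the nfs-server name and the role: nfs-server label are assumptions; 2049, 20048 and 111 are the standard NFS, mountd and rpcbind ports):
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  selector:
    role: nfs-server        # must match the labels of the NFS server pod below
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111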
And a pod which will run the image:
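A minimal pod sketch, assuming (per the image's README) that cpuguy83/nfs-server exports the directories passed as arguments and has to run privileged:
kind: Pod
apiVersion: v1
metadata:
  name: nfs-server
  labels:
    role: nfs-server        # matched by the service selector above
spec:
  containers:
    - name: nfs-server
      image: cpuguy83/nfs-server
      args:
        - /exports          # directory inside the container to export
      securityContext:
        privileged: true    # required to run the NFS server inside the container
      ports:
        - containerPort: 2049
        - containerPort: 20048
        - containerPort: 111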
An example of a pod using the NFS volume:
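A hypothetical consumer pod mounting that export through the nfs volume type (replace the server field with the ClusterIP of the nfs-server service):
kind: Pod
apiVersion: v1
metadata:
  name: nfs-client
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: nfs-volume
          mountPath: /mnt/nfs
  volumes:
    - name: nfs-volume
      nfs:
        server: x.x.x.x     # ClusterIP of the nfs-server service
        path: /exports      # directory exported by the NFS server pod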