Kube-dns fails open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

google-kubernetes-engine, kubernetes

I've managed to set up a 5-node cluster (2 masters, 3 workers) using the "roll your own" instructions here:
https://kubernetes.io/docs/getting-started-guides/scratch/#preparing-certs

I can run pods without problems, but DNS is not functional. As per the documentation, one method is to set up a cluster DNS service:

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/

All fine and dandy. While I realise it's the job of the add-on manager to kick-start the pod that project provides, I have been firing it up manually in the interests of debugging:

kubectl create -f kube-dns.yaml

Everything is created successfully, and eventually the deployment spawns a pod, the pod its containers, etc. However, the kube-dns container ALWAYS fails with this error:

"Failed to create a kubernetes client: open /var/run/secrets/kubernetes.io/serviceaccount/token"

I understand that this is Kubernetes providing a token to the container, but what I don't get is why it cannot be found.

Especially since the required service accounts and secrets appear to exist:

# kubectl get serviceaccounts -n kube-system
NAME       SECRETS   AGE
default    2         13d
kube-dns   2         29m
bddcbpkbn1:~ # kubectl get secrets -n kube-system
NAME                   TYPE                                  DATA      AGE
default-token-6wnx5    kubernetes.io/service-account-token   2         44m
default-token-94kww    kubernetes.io/service-account-token   2         46m
kube-dns-token-mnbg2   kubernetes.io/service-account-token   2         28m
kube-dns-token-wrs8h   kubernetes.io/service-account-token   2         26m
#
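For completeness, the token secrets listed above can also be inspected directly to confirm they actually carry data (the secret name here is taken from the output above; use `kubectl describe` rather than `get -o yaml` if you don't want the token printed):

```shell
# Confirm the token secret has a populated token and ca.crt.
kubectl -n kube-system describe secret kube-dns-token-mnbg2
```

In my case the secrets look fine, so the problem is not with their creation.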

Can anyone offer any suggestions as to why this is failing, or how I can go about diagnosing the issue?

BTW, I have disabled the liveness probe parameters to make sure they are not the cause of the problem.
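For anyone wanting to reproduce my debugging so far, these are the sorts of checks I've been running (the pod name is a placeholder; substitute whatever `kubectl get pods -n kube-system` shows for your kube-dns pod):

```shell
# Look at the pod spec as the apiserver stored it: a healthy pod should
# list a serviceaccount token volume and a matching volumeMount.
kubectl -n kube-system get pod <kube-dns-pod-name> -o yaml | grep -B2 -A4 serviceaccount

# Events and mount details for the pod:
kubectl -n kube-system describe pod <kube-dns-pod-name>

# Logs from the failing container:
kubectl -n kube-system logs <kube-dns-pod-name> -c kubedns
```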

Edit: A bit more on this. I can docker inspect the failed container, and having done so I find that /var/run/secrets/kubernetes.io/serviceaccount/token is not mentioned anywhere, which leads me to believe that kubelet is not telling Docker to mount that volume, as the image clearly expects.

Further, it stands to reason that the token must exist in some form on the worker node in question, which it appears not to. Specifically, I'm looking in the /var/lib/kubernetes/kubelet/pods/07ac2f2b-3969-11e8-906e-caef73f3b003/volumes/kubernetes.io~configmap/ directory, where I can see a kube-dns-config directory, which IS mounted in the container and IS mentioned in the docker inspect output for the relevant container.

So from that at least it seems the problem is with Kubernetes (and potentially kubelet) and not Docker.
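To make that concrete, this is the check on the worker node (the pod UID directory is the one from my setup; yours will differ, and the kubelet root dir may not be /var/lib/kubernetes/kubelet on other clusters):

```shell
# A pod with a mounted service account token should have a
# kubernetes.io~secret volume directory alongside the configmap one.
ls /var/lib/kubernetes/kubelet/pods/<pod-uid>/volumes/
# Here only kubernetes.io~configmap exists; kubernetes.io~secret is
# missing, so kubelet was never asked to materialise the token.
```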

Thanks in advance

Best Answer

I found the problem.

I had not included

--admission-control=ServiceAccount

in the apiserver command line.
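For anyone hitting the same thing: the ServiceAccount admission plugin is what injects the token secret volume into a pod spec at creation time, so without it the apiserver never tells kubelet to mount the secret, even though the token secrets themselves exist. A rough sketch of the relevant fragment (the other plugins shown are a typical set, not necessarily yours; on Kubernetes 1.10+ the flag is spelled `--enable-admission-plugins`):

```shell
# Relevant fragment of the kube-apiserver command line; all other flags omitted.
kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
# Admission runs at pod creation, so after restarting the apiserver you
# must delete the kube-dns pod and let the deployment recreate it.
```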
