MicroK8S Helm – Error: Could Not Find a Ready Tiller Pod

Tags: helm, kubernetes, ubuntu

I need to learn about Kubernetes, Helm, and conjure-up, and I also need to install Eclipse Che. To do that:
On a fresh install of Ubuntu 18.04.2 Server x64, running as a virtual machine inside VMware Workstation, I am installing MicroK8s and Helm.

The only script block I am pasting into the terminal is:

sudo apt-get update
sudo apt-get upgrade
sudo snap install microk8s --classic
microk8s.kubectl version
alias kubectl='microk8s.kubectl'
alias docker='microk8s.docker'
kubectl describe nodes | egrep 'Name:|Roles:|Taints:'
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get nodes
sudo snap install helm --classic
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
            --clusterrole=cluster-admin \
            --serviceaccount=kube-system:tiller
helm init --service-account=tiller
helm version
helm ls
kubectl get po -n kube-system 

The script block above, with the terminal output of each command, is:

myUser@myServer:~$ sudo snap install microk8s --classic
microk8s v1.13.4 from Canonical✓ installed
[1]+  Done                    sleep 10

myUser@myServer:~$ microk8s.kubectl version
Client Version: version.Info { 
    Major:"1", Minor:"13", GitVersion:"v1.13.4", 
    GitCommit:"c27b913frrr1a6c480c287433a087698aa92f0b1", 
    GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z", 
    GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
    The connection to the server 127.0.0.1:8080 was 
      refused - did you specify the right host or port?

myUser@myServer:~$ alias kubectl='microk8s.kubectl'

myUser@myServer:~$ alias docker='microk8s.docker'

myUser@myServer:~$ kubectl describe nodes | egrep 'Name:|Roles:|Taints:'
The connection to the server 127.0.0.1:8080 was 
     refused - did you specify the right host or port?

myUser@myServer:~$ kubectl taint nodes --all \
         node-role.kubernetes.io/master-
The connection to the server 127.0.0.1:8080 was 
     refused - did you specify the right host or port?

myUser@myServer:~$ kubectl get nodes
The connection to the server 127.0.0.1:8080 was 
        refused - did you specify the right host or port?

myUser@myServer:~$ sudo snap install helm --classic
helm 2.13.0 from Snapcrafters installed

myUser@myServer:~$ kubectl create serviceaccount tiller \
              --namespace kube-system
Error from server (NotFound): namespaces "kube-system" not found

myUser@myServer:~$ kubectl create clusterrolebinding \
             tiller-cluster-rule \
             --clusterrole=cluster-admin \
             --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created

myUser@myServer:~$ helm init --service-account=tiller
Creating /home/myUser/.helm 
Creating /home/myUser/.helm/repository 
Creating /home/myUser/.helm/repository/cache 
Creating /home/myUser/.helm/repository/local 
Creating /home/myUser/.helm/plugins 
Creating /home/myUser/.helm/starters 
Creating /home/myUser/.helm/cache/archive 
Creating /home/myUser/.helm/repository/repositories.yaml 
Adding stable repo with URL: 
   https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/myUser/.helm.
Tiller (the Helm server-side component) has been 
        installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an 
        insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with 
        the --tiller-tls-verify flag.
For more information on 
   securing your installation see: 
   https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

myUser@myServer:~$ helm version
Client: &version.Version { 
   SemVer:"v2.13.0",
   GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", 
   GitTreeState:"clean"}
Error: could not find tiller

myUser@myServer:~$ helm ls
Error: could not find tiller

myUser@myServer:~$ kubectl get po -n kube-system 
No resources found.

As you can see, it is also refusing the connection on 127.0.0.1:8080. With the help of @aurelius I improved the script above, but as you can see it still gives the same error:

Error: could not find a ready tiller pod

And I applied the fix described on Stack Overflow, as you can see above.

There is an issue open on GitHub pointing to the fix above, closed as solved, but it does not solve the problem.

There is also an answer saying the problem is the snap version of LXD, which does not integrate with conjure-up; it says to install LXD from the apt packages instead, and the full explanation is here: https://askubuntu.com/a/959771.
I will try that to see whether it works too and come back here.
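
For reference, kubectl falls back to 127.0.0.1:8080 when it has no kubeconfig, so the refusals above suggest the MicroK8s API server was not reachable yet rather than a Helm problem. A minimal pair of checks (not part of my script above):

# the snap-bundled kubectl should already carry the cluster credentials
microk8s.kubectl config view
# collect logs and report the state of the MicroK8s services
microk8s.inspect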

Best Answer

All that was needed was:

helm repo update
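
The error "could not find a ready tiller pod" usually means the tiller-deploy pod exists but is not Ready yet. A direct way to watch it, assuming the default app=helm label that helm init puts on the Tiller deployment, is:

kubectl get pods --namespace kube-system -l app=helm      # wait for 1/1 Running
kubectl describe pod --namespace kube-system -l app=helm  # shows why it is not ready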

The full set of commands is here:

# Ensure there is enough disk space to install everything
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo dpkg-reconfigure tzdata
sudo snap remove lxc
sudo snap remove lxd
sudo apt-get remove --purge lxc 
sudo apt-get remove --purge lxd 
sudo apt-get autoremove
# may throw errors; make sure each purge/uninstall above succeeded
sudo apt-add-repository ppa:ubuntu-lxc/stable
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
sudo apt-get install tmux lxc lxd zfsutils-linux 
df -h    # => 84% free, 32G
# { SNAPSHOT - beforeLxdInit }
lxd init
    # answered "none" when asked about IPv6
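# (not in the original) lxd init can also run non-interactively with defaults:
#   lxd init --auto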
ifconfig | grep flags
sudo sysctl -w net.ipv6.conf.ens33.disable_ipv6=1  
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1  
sudo sysctl -w net.ipv6.conf.lxcbr0.disable_ipv6=1  
sudo sysctl -w net.ipv6.conf.lxdbr0.disable_ipv6=1  
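# (not in the original) sysctl -w settings do not survive a reboot; to make
# the IPv6 disables persistent, append the keys to /etc/sysctl.conf, e.g.:
#   echo 'net.ipv6.conf.all.disable_ipv6 = 1' | sudo tee -a /etc/sysctl.conf
#   sudo sysctl -p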
time sudo snap install conjure-up --classic
# { SNAPSHOT - beforeConjureUp }
conjure-up    # at the spell selection, choose microk8s
alias kubectl='microk8s.kubectl'
#------------------------------------
# not necessary to enable all of these, but this is a test
microk8s.enable storage
microk8s.enable registry    
microk8s.enable dns dashboard ingress istio metrics-server prometheus fluentd jaeger
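# (not in the original) enabling this many addons can take a few minutes; one
# way to watch the addon pods come up before continuing (Ctrl-C to stop):
kubectl get pods --all-namespaces --watch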
#------------------------------------
time sudo snap install helm --classic
helm init
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
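# (not in the original) wait for the patched Tiller deployment to roll out,
# so helm can find a ready tiller pod:
kubectl rollout status deployment/tiller-deploy --namespace kube-system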
helm search
# Before updating the repo, it threw an error:
helm version
    Error: could not find a ready tiller pod 
# Then update the repo:
helm repo update
# After updating the repo it was OK:
helm version
    Client: &version.Version { 
            SemVer:"v2.13.0", 
            GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6",
            GitTreeState:"clean"
        }
    Server: &version.Version { 
            SemVer:"v2.13.0", 
            GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", 
            GitTreeState:"clean" 
        }
#------------------------------------
helm install stable/mysql
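# (not in the original) without --name, Helm 2 picks a random release name
# (hence "brown-hyena-mysql" below); a hypothetical pinned name:
#   helm install stable/mysql --name che-mysql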
df -h | grep sda
    /dev/sda2        40G   12G   26G  31% /
# { SNAPSHOT - afterFixErrorBeforeEclipseChe }
#------------------------------------
========================================================================================================================
# Looks like this added a mess of OverlayFS mounts
df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            1.9G     0  1.9G   0% /dev
    tmpfs           393M  2.5M  390M   1% /run
    /dev/sda2        40G   12G   26G  31% /
    tmpfs           2.0G     0  2.0G   0% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    /dev/loop0       91M   91M     0 100% /snap/core/6350
    tmpfs           393M     0  393M   0% /run/user/1000
    tmpfs           100K     0  100K   0% /var/lib/lxd/shmounts
    tmpfs           100K     0  100K   0% /var/lib/lxd/devlxd
    /dev/loop1      110M  110M     0 100% /snap/conjure-up/1045
    /dev/loop2      205M  205M     0 100% /snap/microk8s/492
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M     0   64M   0% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M  4.7M   60M   8% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
    shm              64M  4.7M   60M   8% /var/snap/microk8s/common/run/containerd/io.containerd.grpc.v1.cri/sandboxes$
    overlay          40G   12G   26G  31% /var/snap/microk8s/common/run/containerd/io.containerd.runtime.v1.linux/k8s.$
========================================================================================================================

kubectl run eclipseche --image=eclipse/che-server:nightly
    deployment.apps/eclipseche created
    ------------------------------------
    # Could not find a way to follow the advice below; could not find the equivalent syntax
    kubectl run --generator=deployment/apps.v1 is DEPRECATED
    and will be removed in a future version.
    Use kubectl run --generator=run-pod/v1 or kubectl create instead.
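
# The non-deprecated equivalent appears to be "kubectl create deployment"
# (a sketch, not in the original run; same name and image assumed):
kubectl create deployment eclipseche --image=eclipse/che-server:nightly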

kubectl get pods
    NAME                                      READY   STATUS    RESTARTS   AGE
    brown-hyena-mysql-75f584d69d-rbfv4        1/1     Running   0          72m
    default-http-backend-5769f6bc66-z7jb4     1/1     Running   0          91m
    eclipseche-589954dc99-d4bxm               1/1     Running   0          6m13s
    nginx-ingress-microk8s-controller-p88nm   1/1     Running   0          91m

kubectl get svc
    NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    brown-hyena-mysql      ClusterIP   10.152.184.38   <none>        3306/TCP   74m
    default-http-backend   ClusterIP   10.152.184.99   <none>        80/TCP     93m
    kubernetes             ClusterIP   10.152.184.1    <none>        443/TCP    99m

microk8s.kubectl describe pod eclipseche-589954dc99-d4bxm | grep "IP:"
    IP:  10.1.1.54

sudo apt-get install net-tools nmap

nmap 10.1.1.54 | grep open
    8080/tcp open  http-proxy
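
With the port open, the Che server answers HTTP. Two quick checks (the first assumes the host can reach the pod network, as the nmap scan suggests; the second tunnels through the API server instead):

curl -sI http://10.1.1.54:8080
kubectl port-forward deployment/eclipseche 8080:8080   # then browse http://localhost:8080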