Kubernetes cluster IP not answering

cluster kubernetes service

We've set up a Kubernetes cluster with 3 masters and 3 worker nodes. Then we installed the kubernetes-dashboard, which fails because it can't connect to the Kubernetes API server: it tries localhost:8080, but that address isn't reachable.
When executing env in a busybox pod I get:

KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.2.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.2.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.2.0.1
KUBERNETES_SERVICE_HOST=10.2.0.1
KUBERNETES_SERVICE_PORT=443

So I would expect the Kubernetes API to be reachable at 10.2.0.1:443, but it doesn't answer (connection refused).
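
For example, probing the service IP from inside another pod fails immediately (a sketch of the check; an image with curl is assumed, since busybox's wget can't speak TLS):

# run from inside any pod; -k only skips certificate verification for this test
curl -k https://10.2.0.1:443/version
# curl: (7) Failed to connect to 10.2.0.1 port 443: Connection refused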

The bind-address is 0.0.0.0 (the secure port, protected by TLS and authentication), and the insecure-bind-address is unset, which means it defaults to 127.0.0.1.
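For reference, the corresponding kube-apiserver flags look roughly like this (the exact port values here are my assumption and vary by installation):

kube-apiserver \
  --bind-address=0.0.0.0 \
  --secure-port=6443 \
  --insecure-bind-address=127.0.0.1 \
  --insecure-port=8080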
The documentation says the insecure port (8080) is exposed to the cluster network, but I can't see it. If I execute kubectl get services I see:

NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.2.0.1     <none>        443/TCP   1d
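
The endpoints behind the service can be inspected as below; the addresses shown are placeholders for what I would expect (the three masters on the secure port), not real output from this cluster:

kubectl get endpoints kubernetes

NAME         ENDPOINTS                                                AGE
kubernetes   192.168.0.10:6443,192.168.0.11:6443,192.168.0.12:6443   1d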

Do I have to take further steps to expose port 8080 there and/or make the API server reachable on these ports?

Best Answer

Connecting to localhost:8080 is the default behavior when a Kubernetes client isn't configured with the location of an apiserver. Typically, the dashboard connects to the apiserver using the in-cluster credentials that are added to the pod via a service account.
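
Concretely, "in-cluster credentials" means the service account token and CA certificate mounted into every pod at a well-known path, combined with the KUBERNETES_SERVICE_* environment variables shown above. A minimal sketch of a client using them from inside a pod:

# paths below are the standard service-account mount inside a pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/version"

If that call also fails with connection refused, the problem is with the service IP routing (kube-proxy) rather than with the dashboard's configuration.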