Kubernetes – making NodePort accessible on all nodes

kubernetes

I'm running a Kubernetes bare-metal install and I'm trying to make my test nginx application (created simply with kubectl create deployment nginx --image=nginx) reachable remotely on all nodes. The idea is that I can then use a bare-metal HAProxy installation to route the traffic appropriately.

From everything I've read this configuration should work and allow access via the port on every node. Additionally, netstat does seem to show that the NodePort is listening on all nodes –

user@kube2:~$ netstat -an | grep :30196
tcp6       0      0 :::30196                :::*                    LISTEN

My service.yaml file –

apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    app: nginx
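
(As an aside, since HAProxy will point at fixed node ports, it may be worth pinning them explicitly rather than letting Kubernetes pick random ones. A minimal sketch of the same Service with pinned ports – the 30080/30443 values are just assumptions for illustration:)

apiVersion: v1
kind: Service
metadata:
  name: test-svc
  namespace: default
spec:
  type: NodePort
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # assumed value; must fall in the node port range (default 30000-32767)
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    nodePort: 30443   # assumed value
    protocol: TCP
    name: https
  selector:
    app: nginx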

My node networking configuration –

kube1 - 192.168.1.130 (master)
kube2 - 192.168.1.131
kube3 - 192.168.1.132
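
For reference, the HAProxy side of the plan would look roughly like the sketch below. This is just an assumption of how the frontend/backend could be laid out, using the NodePort 30196 from the netstat output above (a similar pair would be needed for the HTTPS NodePort 32580):

frontend nginx_http
    bind *:80
    mode tcp
    default_backend k8s_nginx_nodeport

backend k8s_nginx_nodeport
    mode tcp
    balance roundrobin
    # every node exposes the same NodePort, so any healthy node can receive traffic
    server kube1 192.168.1.130:30196 check
    server kube2 192.168.1.131:30196 check
    server kube3 192.168.1.132:30196 check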

My service running –

user@kube1:~$ kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP                      18m   <none>
test-svc     NodePort    10.103.126.143   <none>        80:30196/TCP,443:32580/TCP   14m   app=nginx

However, despite all of the above, my service is only accessible on the node the pod is running on (kube3/192.168.1.132). Any ideas why this would be, or am I just misunderstanding Kubernetes?
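
For illustration, tested from a machine outside the cluster it looks roughly like this (the curl flags are just one way to check; only the node hosting the pod answers):

$ curl -I http://192.168.1.130:30196    # kube1 - hangs / no response
$ curl -I http://192.168.1.131:30196    # kube2 - hangs / no response
$ curl -I http://192.168.1.132:30196    # kube3 - returns the nginx welcome page headers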

I've had a look at load balancers and Ingress, but what doesn't make sense is this: if I routed all traffic to my master (kube1) to distribute, what happens if kube1 goes down? Surely I'd then need a load balancer in front of my load balancer?!

Hope someone can help!

Thanks,
Chris.

Best Answer

If you want to expose a service outside the cluster, use a Service of type LoadBalancer or an Ingress. The LoadBalancer approach has its own limitations, however: you cannot configure a LoadBalancer to terminate HTTPS traffic, serve virtual hosts or do path-based routing. In Kubernetes 1.2 a separate resource called Ingress was introduced for this purpose. Here is an example of a LoadBalancer Service.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-app
  name: nginx-svc
  namespace: default
spec:
  type: LoadBalancer  # use LoadBalancer as type here
  ports:
    - port: 80
  selector:
    app: nginx-app

$ kubectl get services -l app=nginx-app -o wide
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP                                                                  PORT(S)        AGE       SELECTOR
nginx-svc   LoadBalancer   <ip>   a54a62300696611e88ba00af02406931-1787163476.myserver.com   80:31196/TCP   9m        app=nginx-app

After that, test the URL:

$ curl a54a62300696611e88ba00af02406931-1787163476.myserver.com
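
For the HTTPS termination, virtual hosts and path-based routing mentioned above, an Ingress is the right tool. A minimal sketch, assuming a recent cluster (networking.k8s.io/v1 API) and that an ingress controller such as ingress-nginx is already installed; the hostname is just a placeholder:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
spec:
  rules:
  - host: nginx.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc        # the Service defined above
            port:
              number: 80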