I've been working on setting up a Kubernetes cluster for a while now. My setup consists of one master and two worker nodes. All three machines run as VMs on a Proxmox cluster and have two virtual network interfaces each. One interface (listed below) is on a bridged network shared with the other machines; the other is exposed to an internal network.
The network set-up looks like this for the bridged interfaces:
Network: 10.10.0.0
Broadcast: 10.10.255.255
Netmask: 255.255.0.0
kubernetes-master IP: 10.10.0.1
kubernetes-worker01 IP: 10.10.0.2
kubernetes-worker02 IP: 10.10.0.3
All servers can talk to each other without any issues. I haven't set up any kind of firewall yet.
root@kubernetes-master:~/manifests# kubectl get nodes
NAME STATUS AGE
10.10.0.2 Ready 5d
10.10.0.3 Ready 5d
I have a hello-world Node.js app that serves HTTP on port 8080 and responds with "Hello world" when queried. It's deployed like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-node-deployment
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: kubernetes-master:5000/hello-node:v1
        ports:
        - containerPort: 8080
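For completeness, a manifest like this can be applied and checked as follows (the filename is an assumption):

```shell
# Apply the deployment manifest (filename is an assumption)
kubectl create -f hello-node-deployment.yaml
# Confirm all four replicas come up
kubectl get deployment hello-node-deployment
kubectl get pods --selector="app=hello-node"
```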
Then I created a new service that should expose the deployment via NodePort.
apiVersion: v1
kind: Service
metadata:
  name: hello-node-service
  labels:
    app: hello-node
spec:
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: hello-node
  type: NodePort
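Note that this spec doesn't set an explicit nodePort, so Kubernetes picks a random port from the default 30000–32767 range. If you want a stable, predictable port you can pin one yourself; 30080 below is just an illustrative value:

```yaml
spec:
  ports:
  - port: 8080
    protocol: TCP
    nodePort: 30080   # must fall inside the NodePort range (default 30000-32767)
```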
After starting both the service and deployment:
root@kubernetes-master:~/manifests# kubectl describe service hello-node-service
Name: hello-node-service
Namespace: default
Labels: app=hello-node
Selector: app=hello-node
Type: NodePort
IP: 10.100.0.88
Port: <unset> 8080/TCP
NodePort: <unset> 30862/TCP
Endpoints: 192.168.0.22:8080,192.168.0.23:8080,192.168.0.89:8080 + 1 more...
Session Affinity: None
No events.
root@kubernetes-master:~/manifests# kubectl get pods --selector="app=hello-node" --output=wide
NAME READY STATUS RESTARTS AGE IP NODE
hello-node-deployment-815057587-0w896 1/1 Running 0 24m 192.168.0.89 10.10.0.2
hello-node-deployment-815057587-62d2b 1/1 Running 0 24m 192.168.0.23 10.10.0.3
hello-node-deployment-815057587-d6t4z 1/1 Running 0 24m 192.168.0.90 10.10.0.2
hello-node-deployment-815057587-k7qcx 1/1 Running 0 24m 192.168.0.22 10.10.0.3
After that, the master can't reach any of the nodes on the advertised node port (10.10.0.2:30862, 10.10.0.3:30862). The connection just hangs and never succeeds.
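To make the hang fail fast rather than block indefinitely, curl can be given a connect timeout (30862 is the NodePort from the describe output above):

```shell
# Fail fast instead of hanging forever: 5-second connect timeout.
# 30862 is the NodePort assigned to hello-node-service.
curl --connect-timeout 5 http://10.10.0.2:30862
curl --connect-timeout 5 http://10.10.0.3:30862
```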
If I SSH into a node, I can successfully query the service by talking to the pod directly:
root@kubernetes-worker02:~# curl http://192.168.0.22:8080
Hello World!
Am I missing something here? Is this the expected behavior or is my setup broken?
Best Answer
Kubernetes requires more than just the nodes being able to reach each other. It also requires a network (or routing table) so the pods can talk to each other. This is essentially a second network just for the pods (often called an overlay/underlay network) that allows a pod on nodeA to talk to pods on nodeB.
From the looks of it, you don't have pod networking set up. Overlay networking can be implemented in a multitude of ways (which is one reason it's so confusing). Read more about the networking requirements here.
With only two nodes I would recommend you actually set up what I like to call "no SDN Kubernetes" and just manually add pod routes to each node. It requires you to do two things.
I have details on how to do it on my blog post I wrote about the subject.
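As a rough sketch of the static-route approach: each node first needs its own, non-overlapping pod subnet (in your pod list, pods on both workers currently share 192.168.0.x addresses, which static routes cannot distinguish), and then every machine gets a route to each node's subnet via that node's bridged IP. The per-node CIDRs below are assumptions:

```shell
# Sketch only: the per-node pod CIDRs (192.168.1.0/24, 192.168.2.0/24)
# are assumptions; configure each node's kubelet/bridge to use its own
# subnet first.

# On kubernetes-master and kubernetes-worker02:
ip route add 192.168.1.0/24 via 10.10.0.2   # pods on worker01

# On kubernetes-master and kubernetes-worker01:
ip route add 192.168.2.0/24 via 10.10.0.3   # pods on worker02
```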
Unfortunately, setting up the pod networking only gets you halfway there. To make NodePort services work automatically, you also need to install kube-proxy. The job of kube-proxy is to watch which port a service is assigned and then route that port to the correct service/pod inside the cluster. It does this via iptables rules and is mostly automatic.
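Once kube-proxy is running, you can verify its work by inspecting the iptables NAT rules on a node (the KUBE-NODEPORTS chain is created by kube-proxy itself, so it only exists once kube-proxy is up):

```shell
# List the NAT rules kube-proxy programs for NodePort services.
# You should see an entry for each NodePort (e.g. 30862 here).
iptables -t nat -L KUBE-NODEPORTS -n
```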
I couldn't find a very good example of deploying kube-proxy manually (usually it's handled by your deployment tool). Here's an example of the DaemonSet the kubeadm tool automatically creates to run kube-proxy on every node in the cluster.
One other resource that might be useful to work through is Kubernetes the Hard Way. It's not directly applicable to running VMs on Proxmox (it assumes GCE or AWS), but it shows the bare-minimum steps and resources needed to run a functioning Kubernetes cluster.