Since you have already connected the OpenVPN node to the Kubernetes cluster using ClusterIP services, which are managed by kube-proxy, it is recommended to route network packets via iptables. Now configure kube-proxy on the OpenVPN node to forward all requests to the internal CNI network:
kube-proxy --kubeconfig=./kube-config/config.yaml --bind-address=xx.xx.xx.xx --cluster-cidr=yy.yy.yy.yy/cc --proxy-mode=iptables --masquerade-all
xx.xx.xx.xx - your OpenVPN node IP address
yy.yy.yy.yy/cc - your cluster CIDR
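Once kube-proxy is running in iptables mode, you can verify that the service rules were actually programmed on the OpenVPN node. This is just a sanity check, not part of the setup:

```
# Run on the OpenVPN node: list the NAT rules kube-proxy creates for services
sudo iptables -t nat -L KUBE-SERVICES -n | head
```

If the KUBE-SERVICES chain is empty or missing, kube-proxy is not running or is not using iptables mode.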
Ensure that the OpenVPN Pod is configured to push a route to the Kubernetes network to its clients:
push "route yy.yy.0.0 255.255.0.0"
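For context, here is a minimal sketch of where that directive fits in the OpenVPN server configuration. The VPN client subnet 10.8.0.0/24 is an assumption; substitute your own values:

```
# server.conf (fragment) - illustrative values
port 1194
proto udp
dev tun
server 10.8.0.0 255.255.255.0        # VPN client subnet (assumption)
push "route yy.yy.0.0 255.255.0.0"   # make the Kubernetes network reachable from clients
```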
To create routes from your Node services to the OpenVPN gateway, consider using site-to-site routing via OpenVPN, as explained in this article.
503 Service Temporarily Unavailable
I'm getting this error in two cases:
- the service mentioned in the Ingress does not exist
- the service does exist, but no pod matches the service selector
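A quick way to tell these two cases apart is to check the service and its endpoints. The service name template-frontend below is just an example; use the name referenced by your Ingress:

```
# Does the service exist?
$ kubectl get svc template-frontend

# Does the service have endpoints? An empty ENDPOINTS column
# means no pod matches the service selector.
$ kubectl get endpoints template-frontend
```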
The Nginx Ingress controller is able to access a service without the need to set type=NodePort on that service. I've tested a configuration quite close to yours and it works fine with a service of type=ClusterIP.
Minikube version is v0.30.0 (ingress addon enabled)
The Ingress service is configured as NodePort because we have to access it from the host machine:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: kube-system
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: kube-system
Here is the log of the experiment:
I've created two deployments, for the frontend and the API, and checked that they are running:
$ kubectl run template-frontend --image=hashicorp/http-echo --labels=app=template,type=frontend -- -listen=:80 -text="Frontend"
$ kubectl run template-api --image=hashicorp/http-echo --labels=app=template,type=api -- -listen=:80 -text="API"
$ kubectl get pods -o wide
I've exposed them via ClusterIP service and checked their addresses:
$ kubectl expose deployment template-frontend --port=80
$ kubectl expose deployment template-api --port=80
$ kubectl get svc -o wide
I've checked accessibility of pods via services using their ClusterIPs:
$ kubectl run ubuntu --rm -it --image ubuntu --restart=Never --command -- bash -c 'apt-get update && apt-get -y install curl less net-tools && bash'
root@ubuntu:/# curl http://10.96.101.51
API
root@ubuntu:/# curl http://10.107.165.156
Frontend
I've applied ingress.yaml file to the cluster:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: template-ingress
  labels:
    app: template
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
    - host: template.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: template-frontend
              servicePort: 80
          - path: /api
            backend:
              serviceName: template-api
              servicePort: 80
Now I need to check the IP address of minikube node:
$ minikube ip
192.168.99.100
and the service node port:
$ kubectl get svc --all-namespaces | grep ingress
Usually the port number is in the default NodePort range, 30000 to 32767:
kube-system nginx-ingress NodePort 10.99.220.242 <none> 80:32462/TCP,443:32318/TCP 1h app.kubernetes.io/name=nginx-ingress-controller,app.kubernetes.io/part-of=kube-system
Finally I check if the pods are able to serve requests via ingress:
$ curl -H "Host:template.example.com" http://192.168.99.100:32462/api/
API
$ curl -H "Host:template.example.com" http://192.168.99.100:32462/
Frontend
Best Answer
You don't need to set up such a separate Pod.
Kubernetes Ingress does not support TCP or UDP services by default. However, the ingress-nginx controller provides a mechanism to support TCP or UDP on different ports, configured through ConfigMaps. The controller uses the flags '--tcp-services-configmap' and '--udp-services-configmap' to point to an existing ConfigMap where the key is the external port to use and the value indicates the service to expose, using the format:
<namespace/service name>:<service port>:[PROXY]:[PROXY]
Check additional info here.
This ConfigMap must already exist before the Ingress controller is deployed.
So, try to:
1. Create a ConfigMap with the TCP service configuration.
2. Point the Ingress controller to this ConfigMap using the --tcp-services-configmap flag.
3. Expose the external port (for example, 22) in the Service defined for the Ingress controller.
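A hedged sketch of the three steps, assuming you want to expose SSH (port 22) of a service named ssh-service in the default namespace and that the controller runs in the ingress-nginx namespace (all names here are illustrative):

```yaml
# Step 1: ConfigMap mapping external port 22 to default/ssh-service:22
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "22": "default/ssh-service:22"
---
# Step 3: expose port 22 in the Service fronting the Ingress controller
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  ports:
    - name: ssh
      port: 22
      targetPort: 22
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
```

Step 2 is an argument on the controller itself, e.g. --tcp-services-configmap=ingress-nginx/tcp-services added to the controller's container args.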
You can define any number of ports that can be exposed using this method.
There is another option for those who are using the ingress-nginx Helm chart. Most of the configuration is already done; you just need to specify your ports in the tcp section of the values, where the key (e.g. 2222) is the exposed port and the value references the target service and its port (e.g. 22).
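For example, a values.yaml fragment for the ingress-nginx chart might look like this (the service name ssh-service and namespace default are assumptions):

```yaml
# values.yaml (fragment) for the ingress-nginx Helm chart
tcp:
  "2222": "default/ssh-service:22"   # external port 2222 -> port 22 of ssh-service
```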