503 Service Temporarily Unavailable
I'm getting this error in two cases:
- the service referenced in the ingress does not exist
- the service exists, but no pod matches its selector.
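A quick way to tell which of the two cases you're hitting is to inspect the service's endpoints (the service name below is a placeholder):

```
# Does the service exist at all?
$ kubectl get svc template-frontend

# If it exists, does any pod match its selector?
# An empty ENDPOINTS column means no pod is backing the service,
# and the ingress controller answers with 503.
$ kubectl get endpoints template-frontend

# Compare the service selector with the pod labels:
$ kubectl get svc template-frontend -o jsonpath='{.spec.selector}'
$ kubectl get pods --show-labels
```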
The Nginx ingress controller is able to access a service without the need to specify type=NodePort for it.
I've tested a configuration quite close to yours, and it works fine with service type=ClusterIP.
Minikube version is v0.30.0 (ingress addon enabled)
The ingress service is configured as NodePort because we have to access it from the host machine:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: kube-system
  labels:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: kube-system
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: nginx-ingress-controller
    app.kubernetes.io/part-of: kube-system
Here is the log of the experiment:
I've created two deployments, for frontend and api, and checked that they are running:
$ kubectl run template-frontend --image=hashicorp/http-echo --labels=app=template,type=frontend -- -listen=:80 -text="Frontend"
$ kubectl run template-api --image=hashicorp/http-echo --labels=app=template,type=api -- -listen=:80 -text="API"
$ kubectl get pods -o wide
I've exposed them via ClusterIP services and checked their addresses:
$ kubectl expose deployment template-frontend --port=80
$ kubectl expose deployment template-api --port=80
$ kubectl get svc -o wide
I've checked that the pods are reachable through the services using their ClusterIPs:
$ kubectl run ubuntu --rm -it --image ubuntu --restart=Never --command -- bash -c 'apt-get update && apt-get -y install curl less net-tools && bash'
root@ubuntu:/# curl http://10.96.101.51
API
root@ubuntu:/# curl http://10.107.165.156
Frontend
I've applied ingress.yaml file to the cluster:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: template-ingress
  labels:
    app: template
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
    - host: template.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: template-frontend
              servicePort: 80
          - path: /api
            backend:
              serviceName: template-api
              servicePort: 80
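After applying the manifest, it's worth confirming that the ingress was admitted and lists the expected rules and backends (the file name here is an assumption):

```
$ kubectl apply -f ingress.yaml
$ kubectl get ingress template-ingress
$ kubectl describe ingress template-ingress   # shows rules and backend endpoints
```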
Now I need to check the IP address of the minikube node:
$ minikube ip
192.168.99.100
and the service node port:
$ kubectl get svc --all-namespaces | grep ingress
Usually the port number is in the range 30000-32767 (the default NodePort range):
kube-system nginx-ingress NodePort 10.99.220.242 <none> 80:32462/TCP,443:32318/TCP 1h app.kubernetes.io/name=nginx-ingress-controller,app.kubernetes.io/part-of=kube-system
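Instead of grepping, the node port can also be read directly with jsonpath (service name and namespace taken from the output above):

```
$ kubectl get svc nginx-ingress -n kube-system \
    -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
```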
Finally I check if the pods are able to serve requests via ingress:
$ curl -H "Host:template.example.com" http://192.168.99.100:32462/api/
API
$ curl -H "Host:template.example.com" http://192.168.99.100:32462/
Frontend
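If you'd rather not pass the Host header by hand, one option is to map the hostname to the minikube IP in /etc/hosts (IP and node port taken from the commands above):

```
$ echo "192.168.99.100 template.example.com" | sudo tee -a /etc/hosts
$ curl http://template.example.com:32462/api/
```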
According to the official gcloud documentation:
VPC networks only support IPv4 unicast traffic. They do not support broadcast, multicast, or IPv6 traffic within the network: VMs in the VPC network can only send to IPv4 destinations and only receive traffic from IPv4 sources.
It is possible to create an IPv6 address for a global load balancer.
Please read this article about IPv6 support and dual-stack configurations.
In Azure:
IPv6 for Azure Virtual Network is currently in public preview. This preview is provided without a service level agreement and is not recommended for production workloads.
You can find more information here. There is also a discussion about IPv6 support on GitHub.
In addition, for the cluster to work with IPv6, it should have a dual-stack implementation supporting IPv4 and IPv6 for both pods and services.
As an example, please take a look at kubeadm-dind-cluster.
At the moment, Amazon probably provides the broadest IPv6 support.
Best Answer
You don't have to use NodePort, and you don't have to use an external load balancer. Just dedicate some of your cluster nodes to be loadbalancer nodes: put them in a separate node group and give them a label such as:
mynodelabel/ingress: nginx
and then host an nginx ingress DaemonSet on that node group. The most important options are:
Optionally, you can taint your loadbalancer nodes so that regular pods don't run on them and slow down nginx.
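A minimal sketch of that setup might look like the following; the node names, taint key, and controller image version are assumptions for illustration, not the exact options from the original answer:

```yaml
# Label and taint the dedicated nodes first (node names are hypothetical):
#   kubectl label node lb-node-1 mynodelabel/ingress=nginx
#   kubectl taint node lb-node-1 dedicated=ingress:NoSchedule
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      nodeSelector:
        mynodelabel/ingress: nginx   # run only on the loadbalancer nodes
      tolerations:
        - key: dedicated
          value: ingress
          effect: NoSchedule         # tolerate the taint applied above
      hostNetwork: true              # bind directly to the node's ports 80/443
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
```

With hostNetwork and the node selector, traffic can be pointed straight at the labeled nodes (e.g. via DNS round robin), with no NodePort or cloud load balancer in between.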