I'm sorry about the complexity! I'm not an expert on Compute Engine firewalls, but I expect you're correct about the limitation that source tags only work for internal traffic.
The Kubernetes team is aware that coordinating multiple clusters is difficult, and we're beginning to work on solutions, but unfortunately we don't have anything particularly solid and usable for you yet.
In the meantime, there is a hacky way to load balance traffic from one cluster to the other without requiring the Google Cloud Load Balancer or something like haproxy. You can specify the internal IP address of one of the nodes in cluster B (or the IP of a GCE route that directs traffic to one of the nodes in cluster B) in the PublicIPs field of the service that you want to talk to. Then, have cluster A send its requests to that IP on the service's port, and they'll be balanced across all the pods that back the service.
It should work because there's something called a kube-proxy running on each node of the kubernetes cluster, which automatically proxies traffic intended for a service's IP and port to the pods backing the service. As long as the PublicIP is in the service definition, the kube-proxy will balance the traffic for you.
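As a concrete illustration of the first approach, here is a minimal service manifest sketch. The service name, selector, port, and the node IP are all placeholders you'd replace with your own values:

```yaml
# Hypothetical service in cluster B. 10.240.0.5 is a placeholder for
# the internal IP of one of cluster B's nodes; the kube-proxy on every
# node will accept traffic on this IP and balance it across the pods
# matching the selector.
kind: Service
apiVersion: v1beta3
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080
  publicIPs:
    - 10.240.0.5
```

Cluster A would then send requests to 10.240.0.5:8080.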
If you stop here, this is only as reliable as the node whose IP you're sending traffic to (but single-node reliability is actually quite high). However, if you want to get really fancy, we can make things a little more reliable, by load balancing from cluster A across all the nodes in cluster B.
To make this work, you would put all of cluster B's nodes' internal IPs (or routes to all the nodes' internal IPs) in your service's PublicIPs field. Then, in cluster A, you could create a separate service with an empty label selector, and populate the endpoints field in it manually when you create it with an (IP, port) pair for each IP in cluster B. The empty label selector prevents the kubernetes infrastructure from overwriting your manually-entered endpoints, and the kube-proxies in cluster A will load balance traffic for the service across cluster B's IPs. This was made possible by PR #2450, if you want more context.
Let me know if you need more help with any of this!
TL;DR Google Container Engine running Kubernetes v1.1 supports loadBalancerIP
just mark the auto-assigned IP as static first.
Kubernetes v1.1 supports loadBalancerIP:
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.10
...
So far there isn't really good, consistent documentation on how to use it on GCE. What is certain is that this IP must first be one of your pre-allocated static IPs.
The cross-region load balancing documentation is mostly for Compute Engine rather than Kubernetes/Container Engine, but it's still useful, especially the part "Configure the load balancing service".
If you just create a Kubernetes LoadBalancer on GCE, it will create a forwarding rule (under Compute Engine > Network > Network load balancing > Forwarding Rules) pointing to a target pool made up of the machines in your cluster (normally only those running the Pods matching the service selector). Note that deleting a namespace doesn't cleanly remove those created rules.
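If you suspect leftover rules after deleting a namespace, a sketch of the cleanup with the gcloud CLI (names and region below are placeholders to adapt):

```shell
# List the forwarding rules and target pools still in the project,
# then delete the ones orphaned by the removed namespace.
gcloud compute forwarding-rules list
gcloud compute forwarding-rules delete RULE_NAME --region us-central1
gcloud compute target-pools delete POOL_NAME --region us-central1
```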
Update
It is actually now supported (even though under-documented):
- Check that you're running Kubernetes 1.1 or later (under GKE edit your cluster and check "Node version")
- Under Networking > External IP addresses you should already have some Ephemeral addresses marked as pointing to your cluster's VM instances (if not, or if unsure, deploy once without loadBalancerIP, wait until you have an external IP allocated when you run kubectl get svc, and look up that IP in the list on that page). Mark one of them as static; let's say its External Address is 10.10.10.10.
- Edit your LoadBalancer to have loadBalancerIP=10.10.10.10 as above (adapt to the IP that was given to you by Google).
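Marking the ephemeral IP as static can also be done from the CLI instead of the console. A sketch, assuming gcloud is configured for your project; the name, IP, and region are placeholders:

```shell
# Promote the in-use ephemeral IP to a static address so it survives
# deletion and re-creation of the LoadBalancer service.
gcloud compute addresses create my-k8s-ip \
    --addresses 10.10.10.10 \
    --region us-central1
```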
Now if you delete your LoadBalancer, or even your namespace, that IP address should be preserved upon re-deploying on the same cluster. If you need to change clusters, some manual fiddling should be possible:
- Under “Network load balancing” section, “Target pools” tab, click “Create target pool” button:
  - Name: cluster-pool (or any other name)
  - Region: select the region of one of your clusters
  - Health Check: optional, if you wish
  - Select existing instance groups: your Kubernetes cluster
- Under “Network load balancing” section, “Forwarding rules” tab, click “Create forwarding rule” button:
  - Name: http-cross-region-gfr (or any other name)
  - Region: select the region of one of your clusters
  - External IP: select the loadbalancer-ip-crossregion you just reserved
  - Target pool: select the cluster-pool you just created
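The same console steps can be sketched with the gcloud CLI. The pool, rule, and IP names are the examples used above, and the instance names, region, zone, and port are placeholders to adapt:

```shell
# Create the target pool in the chosen region.
gcloud compute target-pools create cluster-pool \
    --region us-central1

# Add your cluster's VM instances to the pool.
gcloud compute target-pools add-instances cluster-pool \
    --instances gke-node-1,gke-node-2 \
    --instances-zone us-central1-a

# Create the forwarding rule using the reserved static IP.
gcloud compute forwarding-rules create http-cross-region-gfr \
    --region us-central1 \
    --address loadbalancer-ip-crossregion \
    --target-pool cluster-pool \
    --port-range 80
```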
Getting the client IP when using a Network Load Balancer with Kubernetes is a known limitation. You can refer to this issue for updates and workarounds.