I was able to connect to Cloud SQL Postgres by creating a VPC-native cluster as suggested by @patrick-w.
My subnetwork creation command was modified to include two secondary IP ranges:
gcloud compute networks subnets create stg-vpc-us-central1 \
--network stg-vpc \
--region us-central1 \
--range 10.10.0.0/16 \
--secondary-range stg-vpc-us-central1-pods=10.11.0.0/16,stg-vpc-us-central1-services=10.12.0.0/16
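To confirm that both secondary ranges were registered, the subnet can be inspected (a sketch; the names match the command above):

```shell
# Show the secondary ranges attached to the new subnet.
gcloud compute networks subnets describe stg-vpc-us-central1 \
  --region us-central1 \
  --format="yaml(secondaryIpRanges)"
```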
And my cluster creation command was modified to enable IP aliasing and to specify which secondary ranges to use:
gcloud -q container clusters create cluster-1 \
--zone us-central1-a \
--num-nodes 3 \
--enable-ip-alias \
--network stg-vpc \
--subnetwork stg-vpc-us-central1 \
--cluster-secondary-range-name stg-vpc-us-central1-pods \
--services-secondary-range-name stg-vpc-us-central1-services
After some contact with Google support, I was provided with a working solution. It is in fact not possible using only Google Cloud native tools, but one can make it work.
So the first part is the connection to the internet through the VPN-tunnel:
One needs to set up a custom NAT gateway in the project responsible for internet traffic. Google provides instructions for this; the setup can even be made highly available (HA), and there is a Terraform module along with plenty of examples.
After doing that you need to add some routes:
Add a route that sends all traffic destined for 0.0.0.0/0 to the NAT-gateway VM(s).
Add a route with a LOWER priority value (which takes precedence in GCP) that applies only to the NAT-gateway VMs (use instance tags) and has the default internet gateway as its next hop.
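As a sketch, the two routes above might look like the following. The route names, instance name, zone, and tag are assumptions, not from the original setup; note that in GCP a lower priority value wins:

```shell
# Route 1: send all egress (0.0.0.0/0) to the NAT-gateway VM.
gcloud compute routes create nat-egress \
  --network stg-vpc \
  --destination-range 0.0.0.0/0 \
  --next-hop-instance nat-gateway-vm \
  --next-hop-instance-zone us-central1-a \
  --priority 1000

# Route 2: lower priority value (= higher precedence), applies only to
# instances carrying the NAT-gateway tag, so the gateway itself reaches
# the internet directly instead of routing back to itself.
gcloud compute routes create nat-gateway-direct \
  --network stg-vpc \
  --destination-range 0.0.0.0/0 \
  --next-hop-gateway default-internet-gateway \
  --tags nat-gateway \
  --priority 800
```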
If you are advertising 0.0.0.0/0 to the on-premises network, all of its traffic will now reach the internet through your cloud network. Note that it is then no longer possible to connect to the on-premises network directly, since all traffic goes through the NAT gateway (this can be bypassed by adding a route on the on-premises side that targets the originating IP).
The second part of this is redirecting traffic destined for an external IP to an internal IP (e.g. the Kubernetes cluster in my question above):
First of all you need to add an internal load balancer to your cluster.
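On GKE, an internal load balancer can be created with an annotated Service. This is only a sketch; the Service name, selector, and ports are placeholders (newer clusters use the `networking.gke.io/load-balancer-type` annotation instead of the legacy one shown here):

```shell
# Create an internal TCP load balancer in front of the cluster workload.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: app-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF
```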
As all traffic is routed through the NAT-gateway VMs, you can manually add a PREROUTING rule to the iptables NAT table on those VMs:
sudo iptables -t nat -A PREROUTING -d $EXTERNAL_LOAD_BALANCER_IP -j DNAT --to-destination $INTERNAL_LOAD_BALANCER_IP
If you set $EXTERNAL_LOAD_BALANCER_IP to your cluster's external load balancer IP and $INTERNAL_LOAD_BALANCER_IP to your internal load balancer IP, all traffic that targets the external load balancer will now land at the internal one. This also means there is no need for custom DNS.
Best Answer
As per the official documentation1:
There is a feature request to get this implemented2.
My suggestion is to use the Cloud SQL Proxy3, so the on-premises side communicates with the proxy using the standard protocol of your database, and the proxy then uses a secure tunnel to communicate with its companion process running on the server.
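A minimal sketch of running the proxy on the on-premises side; the instance connection name (`my-project:us-central1:my-instance`) is a placeholder:

```shell
# Download and run the Cloud SQL Proxy, listening locally on 5432.
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432

# Clients then connect to the proxy as if it were the database itself:
psql "host=127.0.0.1 port=5432 user=postgres dbname=postgres"
```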
This official documentation4 may serve you well.