Compute Engine HTTPS load balancing with Google-managed SSL cert

google-cloud-platform google-compute-engine load-balancing

Good morning, I need help understanding how to configure HTTPS load balancing.

At the moment I have a single VM instance (Debian LEMP stack with nginx, no Apache) running in Compute Engine, with a DNS zone configured. Everything works fine, but I have no load balancing set up yet.

The HTTPS front end of the load balancer is more or less clear to me, but I have some doubts about the HTTPS back end.

At the moment the default_ssl.vhost nginx conf file is set up this way:

```
server {
    listen 443 ssl http2;
    #listen [::]:443 ssl;
    server_name _;
    include /jet/etc/nginx/conf.d/document_root.settings;

    ssl_certificate     "/jet/etc/letsencrypt/live/odisseo.io/fullchain.pem";
    ssl_certificate_key "/jet/etc/letsencrypt/live/odisseo.io/privkey.pem";

    # ssl params
    include /jet/etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /jet/etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    # Load configuration files for the default server block.
    include /jet/etc/nginx/conf.d/*.inc;
    include /jet/etc/nginx/sites-enabled/*;
}
```

My questions are: once I have finished configuring the back end of HTTPS load balancing with Google-managed certificates on Compute Engine, how should I modify the default_ssl.vhost nginx conf file?
I read that the existing certificates can be removed once the Google-managed certificate has been provisioned successfully.
Do I have to configure a proxy in the default_ssl.vhost nginx conf file?
If yes, how should the whole file be configured?

Last question: in Cloud DNS I currently have an A record with the static IP address of the VM instance. Once I have finished configuring the front end and back end of HTTPS load balancing, will I have to replace this IP address with the new static front-end IP address?

Thanks in advance for the help.

Best Answer

Once you have SSL working at the Google load balancer level, you can safely remove any SSL settings from your nginx configuration. Similarly, once your load balancer is up, you have to update your DNS A record to point at the load balancer's public IP.
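For example, here is a minimal sketch of what the vhost could look like after TLS termination moves to the load balancer, assuming the backend service sends plain HTTP to nginx on port 80 and reusing the include paths from your current default_ssl.vhost (adjust names and paths to your setup):

```
server {
    # TLS now terminates at the load balancer, so the ssl_certificate,
    # ssl_certificate_key, ssl_dhparam and options-ssl-nginx.conf lines
    # from the old vhost are no longer needed here.
    listen 80;
    #listen [::]:80;
    server_name _;
    include /jet/etc/nginx/conf.d/document_root.settings;

    # The original client IP arrives in the X-Forwarded-For header
    # that the load balancer adds to each request.

    # Load configuration files for the default server block.
    include /jet/etc/nginx/conf.d/*.inc;
    include /jet/etc/nginx/sites-enabled/*;
}
```

No separate proxy block is needed in nginx: the load balancer itself is the proxy, and nginx only has to listen on the port and protocol that the backend service is configured to use.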

The load balancer will effectively act as the point of entry for your users and terminate SSL. From there you can add more GCE instances to your load balancer by making them part of a load-balanced instance group; Google's load balancing documentation explains the concepts pretty clearly.
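One practical detail: the backend service only sends traffic to instances that pass its health check, so nginx has to answer the health check with a 200. A small sketch, assuming a hypothetical /healthz path configured on the health check (it could just as well probe /), placed inside the server block above:

```
# Hypothetical health-check endpoint; the path and port must match whatever
# the load balancer's health check is configured to probe.
location = /healthz {
    access_log off;
    return 200 "ok\n";
}
```

You also need a VPC firewall rule that allows traffic from 130.211.0.0/22 and 35.191.0.0/16 (Google's documented proxy and health-check source ranges) to reach the instance on that port.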

I'd encourage you to use Terraform to manage your GCP resources; it makes it easier to wire things up and manage everything in a declarative way.