Architecture – Blue Green Deployment With Single Database

Tags: architecture, deployment, devops

I am trying to analyze a blue-green deployment strategy to provide zero-downtime upgrades for our customers. We have an enterprise Java-based monolithic web application that uses Oracle as the data store. There are a few challenges on which I would like expert input:

  1. Every database object type (procedures, tables, indexes, constraints, etc.) needs special handling, as most of the material here states. This ultimately requires fine-grained code handling for each and every use case.

  2. With blue-green deployments, how do we handle active user web sessions? Do we detect session-close events?

The implementation effort for such an approach needs time (approximately a year, as the post suggests) and fine care to make sure we get it right. Are there simpler and better alternatives to this?

Thanks!

Best Answer

Load Balancer

For the second point, use a suitable load balancer. You can shift a portion of the traffic across: first to prove the new deployment, then to slowly ramp it up. It also gives you a quick backout strategy.
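The traffic-shifting idea above can be sketched in a few lines. This is a minimal illustration, not a real load balancer: the class name `BlueGreenRouter` and its methods are hypothetical. It routes deterministically by session ID, so a given session keeps landing on the same side for a given weight; as the green percentage only ever increases during a rollout, sessions migrate from blue to green without bouncing back and forth, and setting the weight back to 0 is an instant backout.

```java
import java.util.Objects;

// Hypothetical sketch of weighted blue/green routing with per-session stickiness.
public class BlueGreenRouter {
    private volatile int greenPercent; // 0..100, ramped up over the rollout

    public BlueGreenRouter(int initialGreenPercent) {
        setGreenPercent(initialGreenPercent);
    }

    // Ramp traffic toward green; set back to 0 for an instant backout.
    public void setGreenPercent(int percent) {
        this.greenPercent = Math.max(0, Math.min(100, percent));
    }

    // Deterministic per session: the same session ID always maps to the same
    // bucket, so a session sticks to one side until the ramp crosses its bucket.
    public String route(String sessionId) {
        int bucket = Math.floorMod(Objects.requireNonNull(sessionId).hashCode(), 100);
        return bucket < greenPercent ? "green" : "blue";
    }

    public static void main(String[] args) {
        BlueGreenRouter router = new BlueGreenRouter(10); // start with 10% on green
        System.out.println("session-42 -> " + router.route("session-42"));
        router.setGreenPercent(100);                      // full cutover
        System.out.println("session-42 -> " + router.route("session-42")); // prints "green"
        router.setGreenPercent(0);                        // backout: everything back on blue
        System.out.println("session-42 -> " + router.route("session-42")); // prints "blue"
    }
}
```

A real setup would do this in the load balancer itself (weighted backends plus session affinity) rather than in application code; the sketch just shows the mechanism.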

Database versioning

There are two ways to go about this:

  1. Make the Authoritative Data form (the SQL schema) backward compatible between the Green/Blue versions, so any older version of the schema/procs/etc. can be ditched/deleted/dumped.
  2. Move the Authoritative source of the data elsewhere, and treat the SQL schema as a dynamically updated view. Simply rebuild that view on each deployment, with all procs/schema/etc. carrying a version prefix of some kind.

In the first method, it might make sense to distance your application from the data representation and move all data transfer through stored procedures. This ensures a clean interface that can be ported forward, and allows a migration path like:

  1. Create new types/tables, add only new columns to current tables, and perform any cleanup of the old version.
  2. Deploy stored-procedures that can handle new and old data structures. Read all, write new.
  3. Perform data migration and massaging essential to the new deployment.
  4. Swap traffic to freshly deployed application.
  5. Perform any extra data migration and massaging to port to the new data structure.
  6. Either wait for a newer deployment, or deploy stored-procedures that can handle the new data structures only.
  7. If not waiting, perform cleanup of old data, stored procs, tables, etc.
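Step 2's "read all, write new" rule is the heart of this path, and can be sketched as follows. The scenario is hypothetical: a customer record migrating from a single `NAME` column to `FIRST_NAME`/`LAST_NAME`, with rows modeled as plain maps rather than real JDBC result sets. Reads must cope with rows written by either deployment; writes only ever produce the new shape.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative "read all, write new" data-access logic. Column names and the
// single-table scenario are assumptions made for this sketch.
public class CustomerDao {

    // Reads handle both shapes: rows migrated to the new columns and rows
    // still carrying the old single NAME column.
    public static String displayName(Map<String, String> row) {
        String first = row.get("FIRST_NAME");
        String last  = row.get("LAST_NAME");
        if (first != null && last != null) {
            return first + " " + last;   // new-format row
        }
        return row.get("NAME");          // old-format row, not yet migrated
    }

    // Writes always produce the new structure only, so the data converges
    // toward the new shape as it is touched.
    public static Map<String, String> newRow(String first, String last) {
        Map<String, String> row = new HashMap<>();
        row.put("FIRST_NAME", first);
        row.put("LAST_NAME", last);
        return row;
    }

    public static void main(String[] args) {
        Map<String, String> oldRow = new HashMap<>();
        oldRow.put("NAME", "Ada Lovelace");
        System.out.println(displayName(oldRow));                    // prints "Ada Lovelace"
        System.out.println(displayName(newRow("Ada", "Lovelace"))); // prints "Ada Lovelace"
    }
}
```

In the approach the answer describes, this dual-format logic would live in the stored procedures rather than Java code, so the application never sees which shape a given row is in.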

If this sounds too complex, or manipulating live data is too dangerous, then moving the definition of what is authoritative to another data source may be what you need. With this it no longer matters what is in the SQL data store (unless it is storing the event log itself, in which case that part is the exception). Deployment now looks like:

  1. Create new schema, stored procs, types, etc under new namespace or with a prefix.
  2. Scan the event log from the earliest record, updating the new schema as relevant.
  3. Once mostly caught up, slowly switch traffic to the new deployment.
    • Users on the old deployment can still interact with users on the new deployment; both are reading the event log and receiving updates.
  4. When happy that everyone is on the new deployment, and no rollback is required, stop the old system.
  5. Delete the old schema, procs, etc.

Your application will have to double-enter the data: first into the event log, after which it reads the event log and updates the associated table data. On the plus side, this allows several versions of the product to work together for a time, at the cost of extra work.
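The double-entry pattern above can be sketched in miniature. This is an in-memory toy, not Oracle: the `EventLog` class and `Event` record are names invented for the sketch. Every write is appended to the log first, and each deployment then projects the log into its own table shape; a fresh deployment replays from the earliest record, exactly as in step 2 of the list above.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative event log with a projection step; names are assumptions.
public class EventLog {
    // One recorded fact: "entity X had field F set to value V".
    public record Event(String entityId, String field, String value) {}

    private final List<Event> events = new ArrayList<>();

    // First entry: append the fact to the authoritative log.
    public void append(Event e) { events.add(e); }

    // Second entry: replay the log into this deployment's table shape.
    // Old and new deployments can each run their own projection concurrently.
    public Map<String, Map<String, String>> project(int fromIndex) {
        Map<String, Map<String, String>> tables = new HashMap<>();
        for (Event e : events.subList(fromIndex, events.size())) {
            tables.computeIfAbsent(e.entityId(), k -> new HashMap<>())
                  .put(e.field(), e.value());
        }
        return tables;
    }

    public static void main(String[] args) {
        EventLog log = new EventLog();
        log.append(new Event("cust-1", "name", "Ada"));
        log.append(new Event("cust-1", "city", "London"));
        // A fresh deployment scans from the earliest record.
        System.out.println(log.project(0).get("cust-1"));
    }
}
```

Because each version derives its tables from the same log, the projection is where the per-version schema differences live, which is what lets old and new deployments coexist during the cutover.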
