I'm converting an app to run on AWS Fargate and was going to use Parameter Store to hold configuration values. Most of the information I've seen about it revolves around injecting Parameter Store values into the container as environment variables. That seems simple enough, but it got me wondering why you wouldn't instead use the AWS SDK to pull the values into your application at startup. You could then update values in the store and reload them in the app, provided you had a mechanism for restarting the app or refreshing values on the fly. That seems to be an advantage in that you don't need to re-deploy to change configuration values. Aside from needing some extra code to handle this, is there a major downside to this approach compared to using task definitions to inject environment variables into the container?
AWS Parameter Store – Approaches for Getting Variables into App
aws, configuration-management, devops
Related Solutions
Possibly there is no one good answer to this. It seems that you need to store this data somewhere safe, as it will be needed for disaster recovery purposes one day. This applies equally to properties files and scripts that set environment variables.
- Storing it with the source code (in SVN/Git, etc.) is a really bad idea, as this data will contain production database passwords and the like.
- Your corporate nightly backup may be sufficient, but it is unlikely to keep a readily accessible history of changes.
- The data needs to be versioned separately from the consuming software. In our current system, a change of configuration leads to a new application build, and this is just plain wrong.
We are currently looking at solutions to this problem, and are leaning towards a code repository with restricted access. This repository would contain configuration data only. Do others have experiences to share?
> The configuration values are fetched at runtime from environment variables or configuration files directly.
It sounds like you already have the possibility of decoupled deployments, because you say that you pass in configuration at runtime; your process is the thing that's getting in the way of decoupled config deployments. Configuration is only coupled to the software if it's part of the artifact at build time and cannot be changed at runtime. I think you're conflating repositories with deployments. Not everything that's in the same repository must be deployed together, and things that are in separate repositories can be deployed together.
> This would be for patches, for instance, where the application version x.y will fetch the latest configuration for x.y.*.
What do you mean when you say application version x.y "will fetch configuration for x.y.*"? The application shouldn't be fetching config. The config should be passed in by whatever starts the application (through environment variables or command line arguments).
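As a sketch of the "config is passed in" pattern described above: the application declares an immutable config object, and whatever starts the application builds it from environment variables. The variable names (`APP_DB_HOST`, etc.) are hypothetical.

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Config:
    """Immutable configuration, built by whatever starts the application."""
    db_host: str
    db_port: int
    feature_flag: bool


def config_from_env(environ=os.environ) -> Config:
    """Build a Config from environment variables (names are hypothetical)."""
    return Config(
        db_host=environ["APP_DB_HOST"],
        db_port=int(environ.get("APP_DB_PORT", "5432")),
        feature_flag=environ.get("APP_FEATURE_FLAG", "false").lower() == "true",
    )


def main(config: Config):
    # The application only ever receives a Config; it never fetches one itself.
    print(f"connecting to {config.db_host}:{config.db_port}")
```

Because the config source is injected, the same code runs unchanged whether the values came from a task definition, a `.env` file, or a developer's shell.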
I think it's pretty important to keep the "developer's" config with the code. Your application code should have a config that is appropriate for a developer to clone the repository, build, and run the software on their local system. I think the dev, test, and prod configs should not be in the same repository, for a few reasons:
- Security - Even though you've encrypted sensitive information in the config, it's still not a great idea to keep this in VCS with the code. E.g. you wouldn't want to keep it in an open-source repository. Such a practice also encourages the key to be shared among developers so that anybody on the team can update the config when it changes.
- Flexibility - Your codebase should be designed to run with any configuration; it shouldn't try to predict how a specific production deployment will be configured. There can be infinite variations of the config; why should your code repository contain specific ones? If you want to create more environments, you'll end up with a proliferation of config files.
- Separation of Responsibility - In many companies the personnel who write the software are not the same people who set up the infrastructure it will run on. I think it's easier for the team that manages the infrastructure and deployment of software to manage the configuration of the software. Instead I think the software engineers should create and maintain the description of the configuration and the operations team should manage which hosts/databases/passwords etc. get filled in.
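One way to read "engineers maintain the description of the configuration, operations fills in the values" is a declared set of required keys, kept in the code repository, that any environment-specific config must satisfy. This is a minimal sketch with hypothetical key names, not a specific library's API:

```python
# Lives in the code repository: the keys the software declares it needs,
# plus defaults for optional ones. The actual values live elsewhere.
REQUIRED_KEYS = {"db_host", "db_user", "db_password"}
OPTIONAL_DEFAULTS = {"pool_size": 10}


def validate_config(provided: dict) -> dict:
    """Check an ops-supplied config against the declared description
    and fill in defaults for any optional keys not supplied."""
    missing = REQUIRED_KEYS - provided.keys()
    if missing:
        raise ValueError(f"missing config keys: {sorted(missing)}")
    return {**OPTIONAL_DEFAULTS, **provided}
```

The description can be code-reviewed with the change that introduces it, while the production values never enter the repository.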
> The natural way to do this is to move the configuration values into a dedicated service that the applications will fetch them from: a REST API backed by a database, for instance.
I don't see this as much of an improvement over your current setup; indeed, it is much more complex. Now you have to configure your original system to talk to the configuration service, and running the application locally would require running an additional service. In the spirit of dependency injection, software should not configure itself; configuration should be passed in. If you were to develop a service like you're describing, it would be better to fetch the configuration from the service and then start the application with the fetched config, instead of the application fetching its own config.
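The launcher pattern suggested here, fetch the config first, then start the application with it injected, might look like this sketch. `fetch_config` is a stand-in for a call to the hypothetical configuration service, and the variable name is made up.

```python
import os
import subprocess
import sys


def fetch_config() -> dict:
    """Stand-in for a call to the configuration service (hypothetical)."""
    return {"APP_GREETING": "hello"}


def launch(argv):
    """Fetch the config, then start the application with it injected as
    environment variables. The app itself stays config-source-agnostic."""
    env = {**os.environ, **fetch_config()}
    return subprocess.run(argv, env=env, capture_output=True, text=True)


# Demonstrate with a child process that just echoes the injected variable.
result = launch([sys.executable, "-c",
                 "import os; print(os.environ['APP_GREETING'])"])
print(result.stdout.strip())
```

The application never knows the config service exists; swapping the service for Parameter Store, a file, or task-definition injection only changes the launcher.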
I am a big proponent of keeping the code and configuration together in one repository. I just can't really quantify the upsides. Here are a few I can think of.
> - Keeping each application's configuration isolated from the others', so that one application's configuration changes can't affect another's. (Changing a seemingly unshared configuration)
There is nothing stopping you from keeping applications' configurations separate from each other and certainly deploying one configuration shouldn't necessitate redeploying all configurations.
> - Review of code and configuration at the same time in the same merge request.
I think this is odd. Does this mean that before the code is reviewed, you must decide the production configuration? If my change requires a new database, do I have to go to the infrastructure team, create the databases for all environments, and fill in the hostnames, usernames, and passwords? Again, if the description of the configuration is in the source repository, then that can be reviewed without dealing with the production configuration.
Best Answer
I found a couple of articles that, while focused on serverless, have some good answers IMO, and they seem to lean towards allowing a way to change variables at runtime.
https://hackernoon.com/you-should-use-ssm-parameter-store-over-lambda-env-variables-5197fc6ea45b
https://dev.to/hoangleitvn/07-best-practices-when-using-aws-ssm-parameter-store-6m2
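A sketch of the in-app approach from the question, fetching everything under a Parameter Store path at startup, assuming boto3 is available and using a hypothetical `/myapp/prod/` path. The boto3 import is deferred into the function so the rest of the module works without AWS access:

```python
def strip_path(name: str, path: str) -> str:
    """Turn a full parameter name like /myapp/prod/db_host into db_host."""
    return name[len(path):].lstrip("/")


def load_parameters(path: str = "/myapp/prod/") -> dict:
    """Fetch all parameters under a path (requires AWS credentials and the
    ssm:GetParametersByPath permission; boto3 imported here so the pure
    helper above is usable without it)."""
    import boto3
    ssm = boto3.client("ssm")
    params = {}
    paginator = ssm.get_paginator("get_parameters_by_path")
    for page in paginator.paginate(Path=path, Recursive=True,
                                   WithDecryption=True):
        for p in page["Parameters"]:
            params[strip_path(p["Name"], path)] = p["Value"]
    return params
```

Calling `load_parameters` again on a timer or a signal gives the runtime reload the question asks about; the trade-off is that the app now needs AWS permissions, SDK error handling, and retry logic that the task-definition injection route gets for free.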