Intuitively it seems appropriate that the development environment (and all test environments) be as close to the production build as possible. Are there any documented arguments in support of this intuition?
Development Environment – Arguments for Matching Development and Production Environments
Related Solutions
The answer: Money
I don't care what the actual reason is. Money MUST be at the root of all of your reasoning, especially when dealing with management.
If we both sat in a room for 2 hours, we could come up with dozens of reasons why it is better to have multiple environments.
Here's the problem: If the reasons are not based on money, then none of them matter.
Programmers are not hired to be smart. They're not hired to be creative. They're hired to increase revenue -- either by earning money or saving money. If you're not doing either one of those, you'd better get your resume together.
When looking at it from that standpoint, the answer is simple:
Having only one environment increases our downtime and results in lost revenue. Multiple environments allows us to protect our profits by giving our users a front-end that is just as reliable and dependable as our company.
Repeat it every day.
There are some great comments below that add some real value to this answer, so I'll mention them:
Karl Bielefeldt had a great point when he mentioned that Cost/Benefit analysis is an important factor. An economist might refer to it as the opportunity cost of pursuing multiple environments. While it may be surprising to hear, there are scenarios where multiple environments may not be the answer! If the website of your company is a very minor addition, then unexpected downtime may actually be the more cost effective way of doing business. This doesn't sound like the position you are in, but it is worth mentioning.
BlairHippo had a good point in that you should feel free to make it seem like a catastrophe (and if you lose your data, it is!). Liability is a great tool for persuading managers, but still for the same reason--lawsuits are expensive. Avoiding them saves money.
As an addendum, I found this article to be quite good. It doesn't directly answer your question, but enables you to recognize how programmers are viewed to management, which in turn, leads to this answer. Good read.
Not only is there no fixed standard, there really isn't even a fixed pattern. The dependencies between what you are building and the scale at which you can afford to replicate it will dictate what this has to look like from one project to another.
I have worked with as few as one environment and as many as 13.
In the sequence you describe, I would usually see them named something like
- local or dev if you don't use dev in the next step
- dev or integration if this is the first deploy after merges
- test or QA
- uat or acceptance or QA if you didn't use QA in step 3
- pre-prod, staging, or performance if it is a performance step for final sign-off
- prod
My advice would be to agree on the names, purposes, and criteria to enter and leave each environment, per product or per project. Then, when you later realize you need a 7th environment (or only need 5 in some case), discuss it again with the team.
If you have team members getting hung up on the semantics of the names, you can always drop the names entirely and refer to the environments as "prod minus six" through "prod minus one." I resorted to this with one manager who simply refused to let his QA staff test on any environment that was not named "QA."
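The promotion sequence above can be sketched as an ordered list with a helper that answers "where does a build go next?" The stage names and count below are illustrative, taken from the example sequence; agree on your own per product or project.

```python
# A sketch of the six-stage sequence named above; the names and count
# are illustrative, not a standard.
PIPELINE = ["dev", "integration", "qa", "uat", "staging", "prod"]

def next_stage(current):
    """Return the stage a build promotes to next, or None after prod."""
    i = PIPELINE.index(current)  # raises ValueError for an unknown stage
    return PIPELINE[i + 1] if i + 1 < len(PIPELINE) else None
```

Writing the sequence down in one place like this also gives the team a single artifact to argue about when the naming discussion resurfaces.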
If you are looking to name the servers themselves, I usually suggest naming them by whose authority they are under. Usually this goes something like:
- dev machines can be manipulated by developers
- QA machines cannot be manipulated by developers but are also not monitored by production support
- prod machines are prod support's business
Most people end up using those sorts of names as prefixes or suffixes, so that you have a chain like "devsqlweb", "qasqlweb", "prodsqlweb", or something like that.
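That prefix convention can be captured in a one-line helper, so names stay consistent as machines are added. This is a sketch of one possible scheme (the `host_name` helper and the service/role parts are invented for illustration):

```python
# Hypothetical convention: compose host names as <authority><service><role>,
# so the prefix tells you at a glance who may touch the machine.
def host_name(authority, service, role):
    return f"{authority}{service}{role}"

names = [host_name(env, "sql", "web") for env in ("dev", "qa", "prod")]
print(names)  # ['devsqlweb', 'qasqlweb', 'prodsqlweb']
```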
Best Answer
It's one of the founding principles of continuous delivery that your integration tests and manual tests need to run in a "production-like" environment in order to have any assurance of a stable release. The more production-like your testing and staging environments are, the more confident you can be, up to and including the point of daytime fire-and-forget releases.
That being said, your development environment does not need to be the same as production, and it definitely should not have production data - privacy leaks, ad-hoc updates, all sorts of problems there. Integration happens after your code leaves the development environment (specifically, in your CI environment, assuming you have one), and most teams don't run integration tests locally, so mirroring production in dev won't be that helpful since your code and unit tests are generally going to abstract away any environmental dependencies (assuming that you've designed them correctly).
It is, however, useful to use the same deployment scripts for both local/dev and test/staging/prod, because it adds another layer of testing to the deployment itself and helps you refine your process. But it doesn't need to be the same. It's not really cost-effective to buy an Oracle license for every single dev box, for example, so don't count on perfect consistency.
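The "same deployment scripts everywhere" idea can be sketched as a single deploy routine parameterized by environment, so every release to prod has already exercised the exact same code path in dev and staging. The environment names, hosts, and replica counts below are invented assumptions, not a real tool:

```python
# Minimal sketch of one deploy routine shared by every environment; only
# the configuration values differ per environment. All values are invented.
ENVIRONMENTS = {
    "dev":     {"host": "dev.example.com",   "replicas": 1},
    "staging": {"host": "stage.example.com", "replicas": 2},
    "prod":    {"host": "www.example.com",   "replicas": 4},
}

def deploy(env):
    cfg = ENVIRONMENTS[env]  # fail fast on an unknown environment name
    # Real steps (build, upload, restart) would be identical for every
    # environment, so each lower-environment deploy tests the script itself.
    return f"deploy {cfg['replicas']} replica(s) to {cfg['host']}"
```

The design point is that the script takes the environment as input rather than hard-coding prod, so a dev deploy is a rehearsal of the prod deploy.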