You should generally upgrade dependencies when:
- It's required
- There's an advantage to doing so
- Not doing so is disadvantageous
(These are not mutually exclusive.)
Motivation 1 ("when you have to") is the most urgent driver. Some component or platform on which you depend (e.g. Heroku) demands it, and you have to fall in line. Required upgrades often cascade out of other choices: you decide to upgrade to PostgreSQL version such-and-so, and now you have to update your drivers, your ORM version, and so on.
Upgrading because you or your team perceives an advantage in doing so is softer and more optional. It's more of a judgment call: "Is the new feature, ability, performance, ... worth the effort and dislocation bringing it in will cause?" In Olden Times, there was a strong bias against optional upgrades. They were manual and hard; there weren't good ways to try them out in a sandbox or virtual environment, or to roll the update back if it didn't work out; and there weren't fast automated tests to confirm that updates hadn't "upset the apple cart."

Nowadays the bias is toward much faster, more aggressive update cycles. Agile methods love trying things; automated installers, dependency managers, and repositories make the install process fast and often almost invisible; virtual environments and ubiquitous version control make branches, forks, and rollbacks easy; and automated testing lets us try an update, then quickly and substantively evaluate "Did it work? Did it screw anything up?" The bias has shifted wholesale, from "if it ain't broke, don't fix it" to the "update early, update often" mode of continuous integration and even continuous delivery.
Motivation 3 is the softest. User stories don't concern themselves with "the plumbing" and never mention "and keep the infrastructure no more than N releases behind the current one." The disadvantages of version drift (roughly, the technical debt associated with falling behind the curve) encroach silently, then often announce themselves via breakage. "Sorry, that API is no longer supported!" Even within Agile teams it can be hard to motivate incrementalism and "staying on top of" the freshness of components when it's not seen as pivotal to completing a given sprint or release. If no one advocates for updates, they can go untended. That wheel may not squeak until it's ready to break, or even until it has broken.
From a practical perspective, your team needs to pay more attention to the version drift problem. Two years is too long. There is no magic; it's just a matter of "pay me now or pay me later." Either address version drift incrementally, or suffer through (and then recover from) bigger jolts every few years. I prefer incrementalism, because some of the platform jolts are enormous. A key API or platform you depend on no longer working can really ruin your day, week, or month. I like to evaluate component freshness at least once or twice a year. You can schedule reviews explicitly, or let them be triggered organically by the relatively metronomic, usually annual update cycles of major components like Python, PostgreSQL, and Node.js. If those release cycles don't prompt your team strongly enough, freshness checks on major releases, at natural project plateaus, or every k releases can also work. Whatever keeps attention on correcting version drift at a regular cadence.
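One lightweight way to make those periodic freshness reviews concrete is to snapshot your installed package versions at each review and diff against the previous snapshot. A minimal Python sketch (the helper names and the example package data are mine, not from any particular tool; a real review would persist the snapshot to disk):

```python
from importlib import metadata


def installed_versions() -> dict[str, str]:
    """Snapshot the currently installed packages and their versions."""
    return {dist.metadata["Name"]: dist.version for dist in metadata.distributions()}


def drift_report(previous: dict[str, str], current: dict[str, str]) -> dict[str, tuple[str, str]]:
    """Packages whose version changed between two review snapshots."""
    return {
        name: (previous[name], version)
        for name, version in current.items()
        if name in previous and previous[name] != version
    }


# At each scheduled review, compare against the snapshot from the last one.
# (Illustrative data only.)
last_review = {"psycopg2": "2.8.6", "django": "3.2"}
this_review = {"psycopg2": "2.9.9", "django": "3.2"}
print(drift_report(last_review, this_review))  # → {'psycopg2': ('2.8.6', '2.9.9')}
```

Checking the drift report against upstream release notes is then a bounded, schedulable task rather than an open-ended chore.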
Your life will be easier if you only build from persistent branches, i.e. develop and master.
You are already doing this with your dev builds. If you follow gitflow, you should build from the master branch after merging the release branch, not from the release branch pre-merge.
Subsequent changes to the release are done on hotfix branches and, again, built from master once the hotfix has been merged in.
I would further advise not pushing your dev build any further than the dev environment. All proper testing should be done against the same binaries that will be released, not merely the same source code.
So I would go:
- finish feature, merge to develop
- build develop and deploy to the DEV environment
- run dev testing of new features in DEV until you are happy they are complete and all automated tests pass
- finish more features...
- create a release branch, if required
- do final changes for the release: set the version, add docs, etc.
- merge release into master
- build master and push to TEST
- test the release
- fix bugs in hotfix branches, building from master
- push the same build to UAT once it has passed TEST
- perform UAT; fix bugs in hotfix branches
- push the same build to live
Now obviously the downside of this is that if you normally find lots of bugs during testing, your master branch accumulates lots of 'bad' commits.
The upside is you test the exact version of the software you deploy.
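That "build once, promote the same binary" idea can be sketched as a toy model (not a real CI pipeline; the build step is simulated by hashing the exact source revision, and the branch/commit values are made up):

```python
import hashlib


def build(branch: str, commit: str) -> bytes:
    """Stand-in for a real build: the artifact is a function of the exact source."""
    return hashlib.sha256(f"{branch}@{commit}".encode()).digest()


# Build exactly once, from master, after the release branch has been merged.
artifact = build("master", "3f9c2ab")

# Promotion moves the SAME artifact through the environments; nothing is rebuilt.
deployments = {env: artifact for env in ("TEST", "UAT", "LIVE")}

# Every environment tested, and then shipped, the identical binary.
assert len(set(deployments.values())) == 1
```

The point of the sketch: rebuilding per environment (even from the same commit) reintroduces the risk that what you tested is not what you shipped; promoting one artifact removes it.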
If you have lots of automated tests and they all pass in DEV, then you shouldn't get many bugs in master.
If you are doing manual testing, then you are probably better off pushing the DEV build to TEST and doing your bug fixes on feature branches.
Best Answer
It's hard to give an exact answer without more knowledge of the system's architecture. However, I'd suggest first building and releasing the artifact that, more or less, contains the interface of service B.
The interface should not have any logic in it that needs testing. It also shouldn't have any dependencies on other artifacts of your system that are yet to be released. So it should be easy to change the interface and to release a new version of it.
After the interface has been released, the next version of the client can be developed and built against the already-released version of the service interface. There's no need for the service implementation to be released yet. In fact, even the service implementation itself could be developed only after releasing the interface. Since there are no direct dependencies between the services themselves, promoting services A and B can be done in any order, or at the same time, after exploratory testing or so.
This is the dependency inversion principle applied at the packaging and configuration level. Service A does not depend directly on the implementation of service B; both depend on the interface of service B.
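A minimal sketch of that arrangement (all class and method names here are made up for illustration; the question doesn't say what services A and B actually do):

```python
from abc import ABC, abstractmethod


# Artifact 1, released first: the interface of service B.
# No logic that needs testing, no dependencies on unreleased artifacts.
class ServiceB(ABC):
    @abstractmethod
    def greeting(self, name: str) -> str: ...


# Artifact 2: service A (the client) depends only on the released interface.
class ServiceA:
    def __init__(self, b: ServiceB) -> None:
        self._b = b

    def welcome(self, name: str) -> str:
        return self._b.greeting(name).upper()


# Artifact 3: service B's implementation, releasable independently of A.
class ServiceBImpl(ServiceB):
    def greeting(self, name: str) -> str:
        return f"hello, {name}"


print(ServiceA(ServiceBImpl()).welcome("world"))  # → HELLO, WORLD
```

Because A and the implementation of B each depend only on the interface artifact, either can be built, tested, and promoted without waiting for the other.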