First I'd like to give a little summary of the different workflows you've looked into and consider unsuitable for the kind of development you're working on:
Centralized (Source): Pretty much the SVN workflow, but in a distributed environment. Every developer works on a personal copy of `master` and pushes changes to `origin/master` directly or via pull request.
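For illustration, a minimal command sequence in this centralized style might look like the following (the repository URL is a placeholder, and the branch and remote names are just the conventional defaults):

```sh
# Centralized workflow sketch: everyone commits to master and shares via origin
git clone https://example.com/project.git   # hypothetical repository URL
cd project
# ... edit files ...
git add .
git commit -m "Describe the change"
git pull --rebase origin master             # integrate colleagues' work first
git push origin master                      # publish directly to the shared branch
```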
Feature branch (Source): Well, that. Every developer working on a particular feature works on a branch dedicated to that feature only. The feature branch is created from `master` or from another feature branch, and eventually everything gets merged back into `master`.
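A rough sketch of the corresponding git commands (the feature name is made up for the example):

```sh
# Feature-branch workflow sketch
git checkout -b feature/login master   # hypothetical feature name
# ... commit work on the feature ...
git checkout master
git pull origin master                 # make sure master is current
git merge --no-ff feature/login        # merge back, keeping the feature's history
git push origin master
git branch -d feature/login            # clean up the local branch
```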
Gitflow (Source): Two main branches track the project history, `develop` and `master`. Three further branch types, `hotfix`, `release` and `feature`, hold, respectively, changes made directly to `master` to fix critical production bugs, changes to the version number and other details prior to a release, and work on a particular feature, just like Feature branch.
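As a sketch of how those branches interact in practice, using the branch names Gitflow conventionally prescribes (version numbers here are invented):

```sh
# Gitflow sketch: hotfix and release branches
git checkout -b hotfix/1.2.1 master     # critical production fix branches off master
# ... fix, commit ...
git checkout master && git merge --no-ff hotfix/1.2.1
git checkout develop && git merge --no-ff hotfix/1.2.1   # fix flows back into develop too

git checkout -b release/1.3.0 develop   # release stabilization branches off develop
# ... bump version number, last-minute fixes ...
git checkout master && git merge --no-ff release/1.3.0
git tag 1.3.0
git checkout develop && git merge --no-ff release/1.3.0
```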
GitHub flow (Source): Developers create a `feature` branch off of `master`. Changes are submitted via pull request, and changes accepted into `master` get deployed immediately by GitHub's bot, Hubot.
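The command side of GitHub flow is deliberately thin; the review and the deployment happen on the hosting side. A sketch (the branch name is invented, and the `gh` CLI call is just one possible way to open the pull request):

```sh
# GitHub flow sketch
git checkout -b improve-error-messages master   # hypothetical branch name
# ... commit small changes ...
git push -u origin improve-error-messages
gh pr create --base master --fill               # open the PR; review happens there
# once the PR is accepted into master, deployment follows automatically
```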
For the development part of your question, the answer depends on the background of your team. Do they come from an SVN environment? Then you should go with the centralized approach, since it's the one that most closely resembles SVN. Do they feel comfortable working with Git? Then perhaps you shouldn't try to adapt your team's workflow to any of those, but implement your own, crafted to suit your needs, which, if I understood correctly, are development flexibility and fast deployment.
I also think you should focus on improving the latter. What does your deployment pipeline consist of? In "Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation" the authors identify possible causes for infrequent deployments, some of which are:
- The deployment process is not automated.
- Testing takes a long time.
- People don't understand the build/test/deploy process.
- Developers are not being disciplined about keeping the application working by making small, incremental changes, and so frequently break existing functionality.
Do any of those sound like something you could improve? Perhaps you could tell us a little bit more about how you and your team handle this part of the project.
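On the first of those causes, even a very small script is a step up from a manual checklist. A deliberately naive sketch, in which the test entry point and the deploy-on-push remote are both placeholders for whatever your project actually uses:

```sh
#!/bin/sh
# Naive one-step deploy sketch: test, then ship. All names are placeholders.
set -e                         # abort on the first failure
./run_tests.sh                 # assumed test entry point
git push production master     # assumed remote with a deploy hook on the server
```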
When a code change A is rejected, don't necessarily think in terms of "rollback feature A by undoing the merge using the SCCS". Think in terms of "adding a new change to the code which fixes the defects found by QA".
Depending on the issue, this can mean:
- fixing a bug immediately, but leaving the code of A in master
- disabling the functionality added by A (by deactivating a feature entry point, for example via a "feature toggle"; see the sketch after this list), until the issues can be fixed, but still leaving the related code in master
- adding changes to the code which effectively remove the code changes of A from master, but in a manner that does not interfere with later changes like B
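To make the second option concrete: a feature toggle can be as simple as a flag checked at the feature's entry point. A minimal sketch, where the flag name and both function names are invented for the example:

```sh
# Feature toggle sketch: FEATURE_A_ENABLED is an invented flag name.
# The code of A stays in master; only its entry point is gated.
if [ "${FEATURE_A_ENABLED:-false}" = "true" ]; then
    start_feature_a      # assumed entry point of the new functionality A
else
    start_legacy_path    # previous behaviour remains in effect
fi
```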
The third bullet point can often, though not necessarily, mean letting the developer (and not the QA team by some magic automatism) use the SCCS and revert the code changes of A. If this rollback shows no collision with B, fine; if there is a collision, there might be some manual work involved (comparable to the manual work when A and B were integrated one after another).
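In git terms, assuming A arrived on master as a merge commit, that developer-driven rollback is typically a `git revert`:

```sh
# Roll back feature A by adding an inverse commit, not by rewriting history
git log --oneline --merges           # find the merge commit that brought in A
git revert -m 1 <merge-commit-of-A>  # -m 1 keeps the mainline side of the merge
# if B touched the same lines, git stops here with a conflict to resolve by hand
git push origin master
```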
And yes, picking the right strategy is a manual decision, and it will involve some time and manual work. This is not a fully "automated way", but it also does not need "complex manual intervention" every time it happens. In reality, the effort for an issue fix or rollback ranges from "trivial" to "complex". When aiming for continuous delivery, however, one should strive to have the trivial rollbacks happen much more frequently than the complex ones.
The best approach for this is probably to pick features for parallel development in a way that makes it unlikely they need to touch the same part of the code base. Furthermore, there should be as many automated tests as possible, so the devs can run these tests during feature development, before their code reaches "master". That should prevent frequent show-stoppers during QA.
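A cheap way to enforce that is to run the test suite against a trial merge before anything lands on "master". A sketch, where the branch name and test command are placeholders:

```sh
# Trial-merge sketch: test the combination of master and the feature branch
git checkout master
git pull origin master
git merge --no-commit --no-ff feature/B   # hypothetical branch; merge staged, not committed
./run_tests.sh                            # assumed test entry point
git merge --abort                         # or commit the merge if the tests pass
```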
Now, let us apply these recommendations to your scenario:
As long as feature A is not ready, feature B cannot be tested because it also contains the rejected code from feature A.
But why not? When A and B were eligible for parallel development, they should be mostly independent of each other and thus eligible for independent testing, which means B can in most cases be tested even if the issues with A are not fixed. Let's say QA reports a serious bug in A which was missed by the automated tests. The fact that the automated tests did not fail should at least mean the application is not completely unusable, so QA can start with the tests of B. They know master cannot be delivered to production as long as the issue with A is not fixed, but during the test of B, there should be enough time to fix or roll back A in one of the three ways I mentioned above.
Of course, one has to decide what the quickest way is to remove the issue with A and make the code "production-ready" again. So when the test of B is done, a fix for the issue with A should already be available (or at least soon after), and QA can approve that the issue is gone (either with the functionality from A included or without it).
Looking at the timeline, after each test cycle your "master" can switch between two main states: either it has "known defects", or there are "no known defects any more". Whenever it reaches the state "no known defects", you can deploy to production. To make continuous deployment work, one has to plan the feature slices in a way that the state "no known defects any more" is reached as frequently as possible. The key here is to
- make the individual feature slices like "A" and "B" from above as small as possible
- pick feature slices for parallel development that are as independent as possible
- plan for reverting or disabling a feature slice in a very quick and smooth way in case it contains a bug which prevents the delivery to production
In a bigger team (maybe a dozen or more developers), if you notice you reach the state "no known defects" too seldom because of the constant check-ins to master, you can also consider using an additional "pre-production" (or "staging") branch where only features are integrated which were approved by QA on the master branch. However, this does not come for free: you need another person doing the integrations on "pre-production", another QA step on "pre-production" to check that the integrations themselves did not introduce a bug, and additional administrative overhead for managing the pre-production environment.
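Mechanically, that extra branch just receives merges of whatever QA has approved; the branch name and the placeholder for the approved state are illustrative only:

```sh
# Pre-production sketch: only QA-approved work is promoted from master
git checkout pre-production                  # assumed staging branch name
git merge --no-ff <approved-commit-or-tag>   # promote a reviewed state of master
git push origin pre-production
# a second QA pass then runs against the pre-production environment
```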
A simple workflow. There are countless ways to do this; do whatever fits your workflow best.