Well, the direct answer to your question would be Mu, I'm afraid - there just aren't enough details to make an informed guess about whether or not you should quit trying.
The only thing I am fairly positive about is that the level of agility should be driven by customer/market needs (about which you gave no info).
- For example, as a user of an IDE I am perfectly happy to upgrade to a new version once or maybe twice a year, and I am never in a hurry to do so. I.e., if their release cycle is 3 months (12 weeks), I am perfectly happy with that.
On the other hand, I can easily imagine, say, a financial trading company going bankrupt if it takes more than a month for their software to adapt to market changes - a 12-week test cycle in that case would be a road to hell. Now - what are your product's needs in this regard?
Another thing to consider is what level of quality is required to serve your customer/market needs.
- Case in point: at a company where I once worked, we found we needed a new feature in a product licensed from a software vendor. Without this feature we suffered rather badly, so yes, we really wanted them to be agile and to deliver an update within a month.
And yes, they appeared to be agile, and yes, they released that update in a month (if their QA cycle was 12 weeks, they likely just skipped it). And our feature worked perfectly well - so we should have been perfectly happy, right? No! We discovered a showstopper regression bug in functionality that had worked just fine before - so we had to stick with the older version and suffer.
Another month passed and they released another new version: our feature was there, but the same regression bug was there too, so again we didn't upgrade. And another month, and another.
In the end we were able to upgrade only half a year later - so much for their agility.
Now, let's look a little closer into these 12 weeks you mention.
What options did you consider to shorten the QA cycle? As you can see from the example above, simply skipping it might not give you what you expect, so you'd better be, well, agile and consider different ways to address it.
For example, did you consider ways to improve the testability of your product?
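One common way to improve testability - a hypothetical sketch, not something from the question itself - is to decouple external dependencies from the decision logic, so most of the code can be exercised with fast unit tests instead of slow end-to-end QA passes. All the names here (`should_alert`, the fake price feed) are illustrative:

```python
# Hypothetical sketch: improving testability via dependency injection.
# Instead of hard-coding a live market-data call inside the business
# logic, we pass the data source in as a parameter, so QA can exercise
# the logic with a fake feed in milliseconds instead of waiting on a
# full end-to-end environment.

from typing import Callable

def should_alert(get_price: Callable[[str], float],
                 symbol: str, threshold: float) -> bool:
    """Pure decision logic: easy to cover exhaustively in unit tests."""
    return get_price(symbol) >= threshold

# Production code wires in the real feed; tests wire in a stub:
fake_feed = {"ACME": 105.0}.get
assert should_alert(fake_feed, "ACME", 100.0) is True
assert should_alert(fake_feed, "ACME", 110.0) is False
```

The more of the product's behavior that can be verified this way, the less of the 12 weeks has to be spent on manual end-to-end checking.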
Or, did you consider the brute-force solution of just hiring more QA? However simple it looks, in some cases this is indeed the way to go. I've seen inexperienced management try to fix product quality problems by blindly hiring more and more senior developers when just a pair of average professional testers would have sufficed. Pretty pathetic.
Last but not least - I think one should be agile about the very application of agile principles. I mean, if the project requirements aren't agile (stable, or changing slowly), then why bother? I once observed top management forcing Scrum on projects that were doing perfectly well without it. What a waste it was. Not only were there no improvements in their delivery but, worse, developers and testers all became unhappy.
Update based on clarifications provided in comments
For me, one of the most important parts of Agile is having a shippable release at the end of each sprint. That implies several things. First, a level of testing must be done to ensure no showstopping bugs if you think you could release the build to a customer...
Shippable release, I see. Hm. Hmmm. Consider adding a shot or two of Lean into your Agile cocktail. I mean, if this is not a customer/market need, then it amounts to nothing but a waste of (testing) resources.
I for one see nothing criminal in treating the sprint-end release as just a checkpoint that satisfies the team.
- Dev: yeah, that one looks good enough to pass to testers; QA: yeah, that one looks good enough in case further shippable-testing is needed - stuff like that. The team (dev + QA) is satisfied; that's it.
...The most important point that you made was at the end of your response in terms of not applying agile if the requirements are not agile. I think this is spot on. When we started doing agile, we had it dialed in, and the circumstances made sense. But since then, things have changed dramatically, and we are clinging to the process where it may not make sense any longer.
You got it exactly right. Also, from what you describe, it looks like you've reached a state (of team/management maturity and customer relationship) that allows you to use a regular iterative development model instead of Scrum. If so, you might also be interested to know that in my experience, in cases like that, regular iterative felt more productive than Scrum. Much more productive - there was simply so much less overhead; it was simply so much easier to focus on development (and for QA, respectively, to focus on testing).
- I usually think of it in terms of a Ferrari (regular iterative) vs. a Land Rover (Scrum).
When driving on a highway (and your project seems to have reached that highway), the Ferrari beats the hell out of the Land Rover.
It's off-road where one needs the jeep, not the sports car - I mean, if your requirements are irregular and/or your teamwork and management experience are not that good, you'll have to choose Scrum - simply because trying to go regular will get you stuck, just like a Ferrari gets stuck off-road.
Our full product is really made up of many smaller parts that can all be upgraded independently. I think our customers are very willing to upgrade those smaller components much more frequently. It seems to me that we should perhaps focus on releasing and QA'ing those smaller components at the end of sprints instead...
The above sounds like a good plan. I worked on such a project once. We shipped monthly releases with updates localized within small low-risk components, and QA sign-off for those was as easy as it gets.
- One thing to keep in mind with this strategy is to have a testable verification that the change is localized where expected. Even if this goes as far as bit-by-bit file comparison of the components that didn't change, go for it, or you won't get it shipped. The thing is, it's QA who is responsible for release quality, not us developers.
It is the testers' headache to make sure that unexpected changes didn't slip through - because, frankly, as a developer I've got enough other stuff to worry about that is more important to me. And because of that, they (the testers) really, really need solid proof that things are under control in the release they test-to-ship.
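That "solid proof" can be automated. A minimal sketch, assuming releases are laid out as directory trees on disk (the directory names and the allowed-change list below are illustrative, not from the original answer): hash every file in the old and new release, then flag any file that differs but wasn't expected to.

```python
# Hypothetical sketch: give QA machine-checkable proof that a release
# changed only the components it was supposed to change.

import hashlib
from pathlib import Path

def snapshot(root: str) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    base = Path(root)
    return {
        str(p.relative_to(base)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in base.rglob("*") if p.is_file()
    }

def unexpected_changes(old: dict, new: dict, allowed: set) -> list:
    """Files that differ between releases but are NOT on the allowed list.

    A file counts as changed if it was added, removed, or modified."""
    return sorted(
        path for path in old.keys() | new.keys()
        if old.get(path) != new.get(path) and path not in allowed
    )
```

Usage: `unexpected_changes(snapshot("release-1.4"), snapshot("release-1.5"), allowed={"components/reporting.dll"})` - an empty result is exactly the evidence the testers need to sign off on everything outside the touched component.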
What you are describing isn't Agile by definition (the Agile Manifesto); it is Waterfall with daily status meetings. Agile means easily adapting to change - if there is no interactive feedback loop with the product owner and thus the customers, then what change is occurring?
Agile is about rapid failure, through constant communication with the product owner/customers. It is better to fail sooner rather than later: less work is done, and less is "lost". And you don't get stuck with the argument that "we don't have time to do it correctly; since we spent so much time doing it wrong, we just need to continue on this same path, even though it leads to failure".
It sounds like your management is doing "Scrum, but..." - where the "but" is where they throw out all the Scrum practices they don't understand or agree with and just do things the same haphazard Waterfall way as always, only with shiny new buzzword names for it all.
In Scrum, the daily stand-up is NOT about delivering status to management; it is there to force developer interaction, so you know what your fellow team members are doing, can help each other out, and don't duplicate work. If it takes more than 45 seconds per person, you are doing it wrong. It is about transparency for the team: if one person gives the same status for multiple days on something that should be a single day's worth of work, the team can resolve that person's problem sooner rather than later.
If you aren't testing each other's code as it is written, then you aren't doing it correctly either. Testing should be embedded in the process, not an afterthought. QA should be included in the planning sessions and should give estimates of how long things will take to test.
If you are not meeting Sprint commitments and are rolling things over, you aren't doing it correctly. Sprints are about commitments: if you are committing to too much work, stop doing that. There is no way to introduce any predictability or repeatability if you can't accurately commit to deliverables.
Best Answer
Yes. A Sprint is timeboxed, and the next sprint starts right after the previous one's timebox ends. This provides the cadence and rhythm of a Scrum sprint.
Reviews and Retrospectives are held at the end of a sprint. Sprint Planning is done at the beginning of a Sprint. These Scrum Events bookend the actual Sprint, as opposed to occurring 'in-between' Sprints.
Paying off technical debt is part of the Stories to be completed within the Sprint. Ideally, technical debt should not accrue very much if we follow most of the XP practices (e.g., test-first, refactor mercilessly).
Of course, in the real world debt does accrue. A good strategy is to allot some time for refactoring while User Stories are being decomposed into subtasks. For example, a User Story from the Product Backlog is written such that the value to the business is stated, but as we decompose it into constituent tasks, we correctly estimate that existing code must be changed for the new feature to be integrated. At the end of the Sprint, developers get their value in better code, and the business gets its value in working software.
I would invite the development team to review their coding practices to see whether there are improvements that would let quality be built in. Perhaps pair programming? Code reviews? Senior members taking more time to mentor others? Test-Driven Development, to minimize bugs and to make sure they are not over-designing/gold-plating?
How about the Sprint Retrospectives? Are team problems being raised and attempts at improvement being tried? Are Daily Scrum standups devolving into old-school Project Manager "status reports"? Does the Product Owner respect the Development team's decision regarding the capacity of Stories to be done in a Sprint?
There are a lot of ways Scrum can be tweaked to optimize performance. As long as the team keeps to Scrum's core pillars of Transparency (people are upfront about what's happening), Inspection (both the work and the work process are evaluated fairly but realistically), and Adaptation (doing something to improve the situation), each iteration should eventually sort itself out into a better one.
Further reading from the official Scrum Guide