Agile Development Challenges with Long QA Cycles

agile, qa

Someone in my company recently proposed changes to our core product that our managers feel should trigger what I guess my company considers a full QA cycle (i.e. testing the entire product suite from the ground up). Apparently it takes our QA 12 weeks to do a full QA cycle for our product. My problem with this is that we are trying to do Agile development (although mostly half-assed, in my opinion). We will do a whole set of sprints and then do a release, which I guess QA will take forever to go through. The question is really: if our QA is going to take 12 weeks to do their job, shouldn't we just give up trying to do Agile? What the hell is the point of trying to do Agile in a situation like this?

Best Answer

Well, the direct answer to your question would be Mu, I'm afraid - there just aren't enough details to make an informed guess about whether or not you should quit trying.

The only thing I am pretty positive about is that the level of agility should be driven by customer / market needs (about which you gave no information).

  • For example, as a user of an IDE, I am perfectly happy to upgrade to a new version once or maybe twice a year, and I am never in a hurry to do so. That is, if their release cycle is 3 months (12 weeks), I am perfectly happy with that.
     
    On the other hand, I can easily imagine, say, a financial trading company going bankrupt if it takes more than a month for their software to adapt to market changes - a 12-week test cycle in this case would be a road to hell. Now, what are your product's needs in this regard?

Another thing to consider is what level of quality is required to serve your customer / market needs.

  • Case in point - at a company where I once worked, we found we needed a new feature in a product licensed from a software vendor. Without this feature we suffered rather badly, so yes, we really wanted them to be agile and to deliver an update within a month.
     
    And yes, they turned out to be agile, and yes, they released that update in a month (if their QA cycle was 12 weeks, they likely just skipped it). And our feature worked perfectly well - you'd guess we should have been perfectly happy? No! We discovered a showstopper regression bug in functionality that had worked just fine before, so we had to stick with, and suffer through, the older version.
     
    Another month passed and they released another new version: our feature was there, but the same regression bug was there too, so again we didn't upgrade. And another month, and another.
     
    In the end we were able to upgrade only half a year later - so much for their agility.

Now, let's take a closer look at these 12 weeks you mention.

What options have you considered to shorten the QA cycle? As you can see from the example above, simply skipping it might not give you what you expect, so you had better be, well, agile and consider different ways to address it.

For example, have you considered ways to improve the testability of your product?
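To make that concrete: one common route is breaking hard-wired dependencies so that individual components can be verified in isolation rather than only through a ground-up test of the whole suite. A minimal sketch in Python (`PriceReporter` and `latest_price` are hypothetical names, purely for illustration):

```python
import unittest
from unittest.mock import Mock

# Hypothetical component: in the hard-to-test version it would construct
# its pricing-service client internally; injecting it instead lets the
# component be verified in isolation, without the full product suite.
class PriceReporter:
    def __init__(self, pricing_service):
        self.pricing_service = pricing_service

    def report(self, symbol: str) -> str:
        price = self.pricing_service.latest_price(symbol)
        return f"{symbol}: {price:.2f}"

class PriceReporterTest(unittest.TestCase):
    def test_report_formats_latest_price(self):
        # A mock stands in for the real service, so the test runs in
        # milliseconds and needs no environment setup.
        service = Mock()
        service.latest_price.return_value = 12.5
        self.assertEqual(PriceReporter(service).report("ACME"), "ACME: 12.50")

if __name__ == "__main__":
    unittest.main()
```

The more of those 12 weeks that can be covered by fast, isolated checks like this, the less is left for the manual ground-up pass.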

Or have you considered the brute-force solution of just hiring more QA? However simple it looks, in some cases this is indeed the way to go. I've seen inexperienced management try to fix product quality problems by blindly hiring more and more senior developers when a pair of average professional testers would have sufficed. Pretty pathetic.

Last but not least - I think one should be agile about the very application of agile principles. I mean, if the project requirements aren't agile (i.e. they are stable or change slowly), then why bother? I once watched top management force Scrum onto projects that were doing perfectly well without it. What a waste it was. Not only were there no improvements in delivery but, worse, developers and testers all became unhappy.


Update based on clarifications provided in comments

For me, one of the most important parts of Agile is having a shippable release at the end of each sprint. That implies several things. First, a level of testing must be done to ensure there are no showstopping bugs, if you think you could release the build to a customer...

A shippable release, I see. Hm. Hmmm. Consider adding a shot or two of Lean to your Agile cocktail. I mean, if this is not a customer/market need, then it amounts to nothing but a waste of (testing) resources.

I, for one, see nothing criminal in treating the sprint-end release as just a checkpoint that satisfies the team.

  • Dev: "Yeah, that one looks good enough to pass to the testers." QA: "Yeah, that one looks good enough in case further shippable-level testing is needed." Stuff like that. The team (dev + QA) is satisfied; that's it.

...The most important point that you made was at the end of your response in terms of not applying agile if the requirements are not agile. I think this is spot on. When we started doing agile, we had it dialed in, and the circumstances made sense. But since then, things have changed dramatically, and we are clinging to the process where it may not make sense any longer.

You got it exactly right. Also, from what you describe, it looks like you have reached a state (in terms of team/management maturity and customer relationship) that allows you to use a regular iterative development model instead of Scrum. If so, you might also be interested to know that, in my experience, regular iterative development felt more productive than Scrum in cases like that. Much more productive - there was simply so much less overhead, and it was so much easier to focus on development (and, for QA, to focus on testing).

  • I usually think of it in terms of a Ferrari (regular iterative) vs. a Land Rover (Scrum).
     
    When driving on a highway (and your project seems to have reached that highway), the Ferrari beats the hell out of the Land Rover.
     
    It's off-road where one needs a jeep, not a sports car - I mean, if your requirements are irregular and/or the teamwork and management experience are not that good, you'll have to choose Scrum, simply because trying to go regular will get you stuck, the way a Ferrari gets stuck off-road.

Our full product is really made up of many smaller parts that can all be upgraded independently. I think our customers are very willing to upgrade those smaller components much more frequently. It seems to me that we should perhaps focus on releasing and QA'ing those smaller components at the end of sprints instead...

The above sounds like a good plan. I worked on such a project once. We shipped monthly releases with updates localized within small, low-risk components, and QA sign-off for these was as easy as it gets.

  • One thing to keep in mind with this strategy is to have a testable verification that changes are localized where expected (see the sketch after this list). Even if this goes as far as bit-by-bit file comparison of the components that didn't change, go for it, or you won't get the release shipped. The thing is, it's QA who is responsible for release quality, not us developers.
     
    It is the testers' headache to make sure that unexpected changes didn't slip through - because, frankly, as a developer I've got enough other stuff to worry about that is more important to me. And because of that, they (the testers) really, really need solid proof that things are under control in the release they are testing to ship.
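As a sketch of what that "solid proof" could look like: the bit-by-bit comparison mentioned above can be automated by fingerprinting the build artifacts of two releases and reporting which components are provably unchanged. A minimal Python sketch, assuming each release is a flat directory of artifact files (that layout, and the script name, are illustrative assumptions; it also only proves "unchanged" for artifacts that are carried over or built reproducibly):

```python
import hashlib
import sys
from pathlib import Path

def sha256(path: Path) -> str:
    """Bit-for-bit fingerprint of one build artifact."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Read in 1 MiB chunks so large artifacts don't exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def fingerprints(release_dir: Path) -> dict:
    """Map artifact name -> hash for every file in a release directory."""
    return {p.name: sha256(p) for p in release_dir.iterdir() if p.is_file()}

if __name__ == "__main__":
    # Usage: compare_releases.py <previous-release-dir> <candidate-release-dir>
    old = fingerprints(Path(sys.argv[1]))
    new = fingerprints(Path(sys.argv[2]))
    for name in sorted(old.keys() | new.keys()):
        if name not in old:
            print(f"ADDED:     {name}")
        elif name not in new:
            print(f"REMOVED:   {name}")
        elif old[name] == new[name]:
            print(f"UNCHANGED: {name}")
        else:
            print(f"CHANGED:   {name}")
```

Anything flagged CHANGED outside the components the release claims to touch is exactly the kind of unexpected change the testers need caught before sign-off.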