Someone in my company recently proposed changes to our core product that our managers feel should trigger what I guess my company considers a full QA cycle (i.e. testing the entire product suite from the ground up). Apparently a full QA cycle for our product takes 12 weeks. My problem with this is that we are trying to do Agile (although mostly half-assed, in my opinion) development. We will do a whole set of sprints and then do a release, which QA will then apparently take forever to go through. The question really is: if our QA is going to take 12 weeks to do their job, shouldn't we just give up trying to do Agile? What the hell is the point of trying to do Agile in a situation like this?
Agile Development Challenges with Long QA Cycles
agile, qa
Related Solutions
The first thing to know is that at the end of an iteration, you don't have to release. The objective is to have a potentially shippable increment of the software, not necessarily to release it. The product owner should decide when to release.
Increase Collaboration
When you decide to release, support should obviously be informed of and/or trained on the new changes. The support team should be involved in the process. They should also be told which bugs have been fixed so they can inform the customers in the relevant support tickets.
Depending on your situation, it may be a good idea to invite one or more support team members to the sprint planning meeting, and to the sprint review meeting as well. Try it and see if it works for you.
Write a Definition of Done
Each feature your developers build should comply with a Definition of Done that you write and maintain to match your organization and product specifics. Here is an example of a DoD:
- Code builds and is committed to the repository
- Unit test coverage ≥ 80%
- Technical documentation completed (just enough)
- End user documentation completed (just enough)
- Reviewed & approved by another developer
- Fully tested
- "What's new" file updated
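To make the checklist concrete, here is a minimal sketch (my illustration, not from the original answer) of how a few of the mechanically checkable DoD items could be gated in code. The `feature` data structure, item names, and the 80% threshold are assumptions for illustration:

```python
# Hypothetical sketch: a mechanical gate over a few Definition of Done items.
# The dict keys and the 80% threshold mirror the example DoD list above.

def done(feature: dict, coverage_threshold: float = 80.0) -> bool:
    """Return True only if every checkable DoD item is satisfied."""
    return (
        feature["build_passed"]
        and feature["coverage_percent"] >= coverage_threshold
        and feature["docs_updated"]
        and feature["peer_reviewed"]
        and feature["tests_passed"]
    )

feature = {
    "build_passed": True,
    "coverage_percent": 82.5,
    "docs_updated": True,
    "peer_reviewed": True,
    "tests_passed": True,
}
print(done(feature))  # True: every item is satisfied, the feature is "done"
```

In practice a gate like this would live in a CI pipeline rather than a script, but the point is the same: "done" is binary, and any unmet item blocks the release.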
The concept of a Definition of Done alone is a strong anti-procrastination technique for a company. It forces you to advance, to ship.
Once a feature is "done", you already have everything you need to release it, including what your support team needs.
Support Is Useful For Developers
I personally love support. It's the best source of strategic information for your software, better than any market study. This is why I think having developers involved in support helps you build quality in. Remember the expression "throw it over the wall"?
I also think the product owner should be involved.
I would be hesitant to discard Waterfall across the board so quickly.
Although it is a flawed model for actually building software systems, it's not a bad teaching model to instruct on good practices for each stage of the lifecycle. Regardless of the process model that you apply to the project, you still perform requirements engineering, system architecture and design, implementation, testing, release, and maintenance (including refactoring and enhancement). The difference is how these phases are organized and conducted, but all of the activities still happen.
I'd argue that transitioning from Waterfall to Scrum in the middle of a project is not the best idea. A key to Scrum's success is a long-running project: the first three to five sprints are the team settling in on a velocity, learning the process, and going through team development. Although you are going through the motions, it's not really Scrum at that point. In addition, trying to create an exclusively Scrum-based curriculum is probably a bad idea, as Scrum is not a silver bullet; it's better to teach best practices than a single methodology. In the workforce, not all projects are going to use Scrum. In fact, in some environments, Scrum would be detrimental to the success of the project.
You've already found problems with Scrum in an academic setting, and some of them are hard to adequately address.
The non-issue in your list of incompatibilities is that estimating is difficult. Yes, it is. But the only way to get better at estimating is to estimate and compare actuals against estimates. Students should be estimating size, time, and effort using various means (story points, source lines of code, hours, pages, person-hours) early so that they are more prepared to do so after graduating and entering the workforce.
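To make the estimate-versus-actual comparison concrete, here is a small illustrative sketch (mine, not from the answer) computing mean magnitude of relative error (MMRE), a common estimation-accuracy metric. The sample numbers are invented:

```python
# Illustrative sketch: comparing estimates against actuals with MMRE
# (mean magnitude of relative error). The sample data is made up.

def mmre(estimates, actuals):
    """Average of |actual - estimate| / actual over completed items."""
    errors = [abs(a - e) / a for e, a in zip(estimates, actuals)]
    return sum(errors) / len(errors)

# Estimated vs. actual effort for five finished stories.
estimates = [3, 5, 8, 2, 5]
actuals = [4, 5, 13, 2, 8]
print(f"MMRE: {mmre(estimates, actuals):.2f}")  # prints "MMRE: 0.20"
```

Tracking a number like this sprint over sprint gives students the feedback loop they need: estimates only improve when they are regularly confronted with actuals.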
The need for documentation is something that can be addressed from both the perspective of the professor and the perspective of the students. The Lean approaches tell us that documentation that doesn't add value to either the team or the customer is wasteful (in terms of time and cost). However, some documentation is needed to achieve some objectives of both the students and the professor (the customer/client) for various purposes. Overall, it sounds like an opportunity to teach process tailoring and quantitative project management (which does have a role even in agile methods).
With respect to Scrum meetings and scheduling, this suggests to me that Scrum might not be the best process to use in an academic setting. There is no singular "best process model" for software projects; the right choice depends on factors such as schedule, staffing, visibility, and the experience of the development team (among others).
Overall, I'd suggest emphasizing good practices, process tailoring, and process improvement over any single methodology. This will make the courses most effective for everyone taking them, exposing students to a variety of process methodologies and helping them understand which practices suit a given set of conditions.
Since you're working to build a university curriculum, I'll give a high level overview of how the software engineering curriculum at the university I attended fit together.
The introductory software engineering course went through a project in a waterfall model, with the lectures during each phase corresponding to different ways to conduct the activities of that phase. The teams progressed through the phases at the same rate. Having those clearly defined boundaries fit well into the teaching model for a group of people with little to no experience working on teams to build software. Throughout the course, references were made to other methodologies, such as various agile methods (Scrum, XP), the Rational Unified Process, and the Spiral Model, with regard to their advantages and disadvantages.
In terms of the activities, there were specific courses to discuss requirements engineering, architecture and design (two courses - one focusing on detailed design using object-oriented methods and one focusing on system architecture), a number of courses focusing on designing and implementing various classes of systems (real-time and embedded systems, enterprise systems, concurrent systems, distributed systems, and so on), and software testing.
There were also three courses dedicated to software process. Software Engineering Process and Project Management focused on best practices for managing a software project across multiple methodologies. A second process course taught measurement, metrics, and process improvement (emphasizing CMMI, Six Sigma, and Lean). Finally, a third process course taught agile software development (covering Scrum, Extreme Programming, Crystal, and DSDM) through a project carried out using Scrum.
The capstone project was a two-quarter project performed for a sponsoring company and run entirely by the student project team, with guidance from both the sponsors and a faculty advisor. Every aspect of how to conduct the project was up to the students, within any constraints set forth by the sponsors. The only university-mandated deadlines were an interim presentation halfway (10 weeks) into the project, a final presentation at the end, and a quad poster presentation shortly before the end. Everything else was up to the sponsor and the team to agree on.
Best Answer
Well, I'm afraid the direct answer to your question would be Mu: there just aren't enough details to make an informed guess about whether or not you should quit trying.
The only thing I am pretty positive about is that the level of agility should be driven by customer/market needs (which you gave no info about).
On the other hand, I can easily imagine, say, a financial trading company going bankrupt if it takes more than a month for their software to adapt to market changes; a 12-week test cycle in that case would be a road to hell. Now, what are your product's needs in this regard?
Another thing to consider is what level of quality is required to serve your customer/market needs.
Consider an example from my own experience with a software vendor. Yes, they appeared to be agile, and yes, they released the update we needed in a month (if their QA cycle was 12 weeks, then they likely just skipped it). And our feature worked perfectly well; guess we should have been perfectly happy? No! We discovered a showstopper regression bug in functionality that had worked just fine before, so we had to stick and suffer with the older version.
Another month passed and they released another new version: our feature was there, but the same regression bug was there too, so again we didn't upgrade. And another month, and another.
In the end we were able to upgrade only half a year later. So much for their agility.
Now, let's look a little closer into these 12 weeks you mention.
What options did you consider to shorten the QA cycle? As you can see from the example above, simply skipping it might not give you what you expect, so you'd better be, well, agile, and consider different ways to address it.
For example, did you consider ways to improve testability of your product?
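For example (my sketch, not something from the question), one common testability improvement is injecting external dependencies such as the system clock, so time-dependent behavior can be verified instantly and deterministically instead of waiting in real time. The `SessionTimeout` class and its parameters are invented for illustration:

```python
# Hypothetical sketch: improving testability via dependency injection.
import time

class SessionTimeout:
    def __init__(self, limit_seconds: float, clock=time.monotonic):
        self.limit = limit_seconds
        self.clock = clock             # injectable: tests pass a fake clock
        self.started = self.clock()

    def expired(self) -> bool:
        return self.clock() - self.started >= self.limit

# In a test, a fake clock makes an hour-long timeout verifiable instantly:
fake_now = [0.0]
session = SessionTimeout(3600, clock=lambda: fake_now[0])
print(session.expired())   # False: no fake time has "passed" yet
fake_now[0] = 3600.0
print(session.expired())   # True: the fake clock jumped ahead one hour
```

Design like this cuts QA time directly: behavior that previously needed hours of wall-clock testing becomes a sub-second automated check.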
Or did you consider the brute-force solution of just hiring more QA? However simple it looks, in some cases this is indeed the way to go. I've seen inexperienced management try to fix product quality problems by blindly hiring more and more senior developers when just a pair of average professional testers would have sufficed. Pretty pathetic.
Last but not least, I think one should be agile about the very application of agile principles. I mean, if the project requirements aren't agile (they are stable or change slowly), then why bother? I once observed top management forcing Scrum onto projects that were doing perfectly well without it. What a waste it was: not only were there no improvements in their delivery, but worse, the developers and testers all became unhappy.
update based on clarifications provided in comments
Shippable release, I see. Hm. Hmmm. Consider adding a shot or two of Lean to your Agile cocktail. I mean, if a shippable release every sprint is not a customer/market need, then producing one would only be a waste of (testing) resources.
I for one see nothing criminal in treating the sprint-end release as just a checkpoint that satisfies the team.
You got it exactly right. Also, from what you describe, it looks like you've reached a state (in team/management maturity and customer relationship) that allows you to use a regular iterative development model instead of Scrum. If so, you might also be interested to know that, in my experience, in cases like that regular iterative development felt more productive than Scrum. Much more productive: there was simply so much less overhead, and it was so much easier to focus on development (and for QA, respectively, to focus on testing).
When driving on a highway (and your project seems to have reached that highway), a Ferrari beats the hell out of a Land Rover.
It's off-road where one needs a jeep, not a sports car. I mean, if your requirements are irregular and/or your teamwork and management experience are not that good, you'll have to choose Scrum, simply because trying to go regular will get you stuck, like a Ferrari stuck off-road.
The above sounds like a good plan. I worked on such a project once. We shipped monthly releases with updates localized within small low-risk components, and QA sign-off for these was as easy as it gets.
It is the testers' headache to make sure that unexpected changes don't slip through, because frankly, as a developer, I've got enough other stuff to worry about that is more important to me. And because of that, they (the testers) really, really need solid proof that things are under control in the release they are testing to ship.