Unit-testing – Coding and testing in the same sprint

agile · qa · scrum · testing · unit-testing

How is testing handled within the same sprint as coding, if all or most of the coding is not done until the end of the sprint? (I'm referring to the "soup-to-nuts" development and testing of a single PBI within a sprint.)

Most of the answers I've seen online involve QA automation, but even that isn't really possible since you generally need a functional UI to record or create automated tests from. I only have storyboards that continue to evolve as I develop features and discover new requirements.

In my case, I am developing a new desktop application. Desktop apps don't generally lend themselves to automated testing very well. I have some automated unit tests, but they are not the manual functional/integration tests that a QA professional would perform.

So, where I'm at now: my sprint ends tomorrow, I still have coding to finish, my QA folks have nothing to test yet, and they'd have no idea how to test whatever I gave them without me holding their hands.

I'm sure I'm not the first person to have this dilemma.

In the past, I've done a pipeline: in the current sprint, the test team tests the features that were implemented during the previous sprint. At my current job, the PM calls this approach "waterfall" and therefore unacceptable.

Best Answer

If you don't test a User Story (US) and verify that its acceptance criteria are met, that story is not done. If it's not done, the US carries over to the next sprint, of course. And if all your user stories are in this state, your sprint has ended with no value added to the project. From the client's point of view, that is indistinguishable from the entire development team going on vacation.

One of the lean principles (agile doesn't end with Scrum) says "quality is built in". Something is only done if it meets the quality criteria you define. This is crucial to having a genuinely agile process; ending a sprint with zero value, or separating testing from development, are symptoms of a bigger problem.

There are a lot of things you can do:

  • Automation is key to success, at least at the unit-test level, and other practices like continuous integration (CI) are important too. This is not enough on its own, but done well, these types of testing leave few or no bugs to be discovered in manual testing (usually minor UI issues). If you have dedicated QA people, they can be the ones who automate the acceptance testing, and some of that automation can start before the sprint's development work is finished (see the sketch after this list).

  • Look at the size of your user stories. If a US is finished in the first two days of the sprint, a QA person can test it on the third day. In my opinion, having small (SMART) user stories is one of the most important factors in successful agile development, and a lot of people seem not to realize this.

  • Collaboration between testers and developers is another key to success. In my previous project, when a developer finished a US, a QA person did "pair testing" with that developer (it can be manual, it can be done by running some automated tests, or, better, both); this worked pretty well.
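
On the automation point: the concern that a desktop app needs a functional UI before anything can be automated usually goes away once the presentation logic is pulled out of the widgets (MVP/MVVM style). The sketch below is purely illustrative, and the presenter, service, and test names are invented for the example, but it shows the kind of unit-level automation that can run on day one of the sprint, with no UI to record against:

```python
import unittest


class OrderFormPresenter:
    """Hypothetical presentation logic for a 'save order' screen.

    Kept out of the UI layer so it can be unit-tested without a functional UI.
    """

    def __init__(self, order_service):
        self._order_service = order_service
        self.error_message = None

    def save(self, customer_name, quantity):
        # Validation rules that would otherwise live in button handlers.
        if not customer_name.strip():
            self.error_message = "Customer name is required"
            return False
        if quantity <= 0:
            self.error_message = "Quantity must be positive"
            return False
        self._order_service.create_order(customer_name, quantity)
        self.error_message = None
        return True


class FakeOrderService:
    """Test double standing in for the real backend call."""

    def __init__(self):
        self.created = []

    def create_order(self, customer_name, quantity):
        self.created.append((customer_name, quantity))


class OrderFormPresenterTests(unittest.TestCase):
    def test_rejects_blank_customer_name(self):
        presenter = OrderFormPresenter(FakeOrderService())
        self.assertFalse(presenter.save("   ", 3))
        self.assertEqual(presenter.error_message, "Customer name is required")

    def test_saves_valid_order(self):
        service = FakeOrderService()
        presenter = OrderFormPresenter(service)
        self.assertTrue(presenter.save("ACME", 2))
        self.assertEqual(service.created, [("ACME", 2)])


if __name__ == "__main__":
    unittest.main()
```

The fake service keeps the tests fast and deterministic, and a QA person can add cases like these while the screens are still evolving; whatever UI-level testing remains only has to cover the thin layer of wiring left in the actual windows.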