Agile – Should Defects Be Raised During Sprints or Noted in Stories?
agile, scrum, testing

I have heard that when testing stories developed in the current sprint, no issues should be raised if the stories are not yet completed (i.e., the definition of done has not been fulfilled), and that you should only raise an issue when you find a separate, unrelated problem. While it does make sense not to raise issues against something the developers can still work on until the end of the sprint, I have failed to find any "best practices" on this. Is that a correct approach?
Related Solutions
If testing is separate from development, you have two separate Scrum teams. It is a bad idea to have one hand off work to the other.
Your developers must write their own tests, separate from this other team. You must treat this other "test" team as your customers.
In a sprint ... when do you release for testing?
When the sprint is done. Totally done. That means you've done your own unit testing and are sure that it works. After your development team is done, you release it to other teams for "testing" or "deployment" or whatever else happens in the organization.
I guess my question is how to write stories that are implementable and testable.
That varies from team to team. The BA is part of the development team, so you have to work as a team (BA plus developers) to settle on the right amount of detail. Getting the right information from the BA to the rest of the team is a team effort.
How important is it to have automated UI testing for Scrum?
Essential. It is completely required for any UI development. The developers must do all testing themselves before the work is given to the "test team": if there's a UI, they must test it. If there's no UI, then automated UI testing isn't required, but testing still is, and any UI that exists must be tested. Automated testing is the current best practice.
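To make this concrete, here is a minimal sketch of what such an automated UI check might look like, using Python with Selenium and pytest. The URL, element IDs, and expected text are hypothetical placeholders, not anything from the question.

```python
# Minimal automated UI test sketch using Selenium 4 and pytest.
# All URLs and element IDs below are hypothetical placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    # Headless Chrome keeps the test runnable in CI.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)
    yield driver
    driver.quit()


def test_login_shows_welcome_banner(browser):
    browser.get("https://example.test/login")  # hypothetical URL
    browser.find_element(By.ID, "username").send_keys("demo")
    browser.find_element(By.ID, "password").send_keys("secret")
    browser.find_element(By.ID, "submit").click()
    banner = browser.find_element(By.ID, "welcome-banner")
    assert "Welcome" in banner.text
```

The point is that checks like this belong to the developers and run on every change, before anything is handed to a separate team.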
Bottom line: a separate "test" team and a BA who writes every little detail are not an optimal organization for Scrum. Scrum means you have to rethink your organization as well as your processes.
The goal of Scrum isn't to complete every story at the end of every sprint. In fact, that might not happen, especially early in a project when you are still settling on a velocity. The idea is to have a potentially shippable (working, tested, and documented) product that adds business value to the customer at the end of every iteration.
The first thing to do is to improve your definition of done. Done should mean that the story is complete: it has been designed, implemented, unit tested, integrated, and has passed system and acceptance tests. When a story is done, the feature or capability that the story represents is ready for delivery to the customer. The Scrum Alliance has some articles on what a good definition of done looks like, how to know when you are done with a story, and a methodology for creating a good definition of done.
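As an illustration only, a definition of done can be made explicit and mechanically checkable. A minimal sketch follows; the specific gates are an assumed example drawn from the criteria above, not a prescribed set.

```python
# Sketch of an explicit, checkable definition of done.
# The gate names are an assumed example, not a standard.
REQUIRED_GATES = {
    "designed",
    "implemented",
    "unit_tested",
    "integrated",
    "system_tested",
    "acceptance_tested",
}


def is_done(completed_gates: set[str]) -> bool:
    """A story is done only when every required gate has been passed."""
    return REQUIRED_GATES <= completed_gates


# This story fails the definition of done: it was never acceptance tested.
print(is_done({"designed", "implemented", "unit_tested"}))  # False
```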
Next, you should probably begin to work on your estimation and planning. It shouldn't be a continual struggle to incorporate stories into each release. It begins with accurate estimates of the size of each story, often expressed in story points, which capture the relative effort required to complete it. Once you have estimates, use them to plan, as sketched below. If you find yourself finishing 10 story points every sprint, don't pull more than 10 story points into the next sprint. If you pulled 10 story points into one sprint but only finished 8 of them, only include 8 in the next sprint. If you finish early, you can either pull in more story points or use the additional time to reduce technical debt by refactoring, writing new automated tests, or improving existing tests (as a few examples).
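Here is a small sketch of that planning rule, often called "yesterday's weather": commit to no more points than you actually finished last sprint. The sprint history numbers are made up for illustration.

```python
# Sketch of the "yesterday's weather" planning rule described above:
# commit only to what was actually completed last sprint.
# The history below is made-up illustration data.
completed_points_per_sprint = [10, 8, 10, 9]  # points actually finished


def next_sprint_commitment(history: list[int]) -> int:
    """Commit based on what was really completed, not what was planned."""
    if not history:
        return 0
    # The simplest rule uses only the most recent sprint; averaging the
    # last few sprints is a common, slightly smoother alternative.
    return history[-1]


print(next_sprint_commitment(completed_points_per_sprint))  # 9
```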
Rather than starting the next sprint immediately, make sure you have time for your retrospective and planning meetings. Use this as an opportunity to determine what was problematic or needs improvement, and then implement corrections for the process in future sprints.
I think that doing this will resolve some of your problems, and make others easier to fix, or at least bring out more information. Some of the problems are also related to organizational culture, such as the entire team participating in testing or not using pair programming. Eventually, you are going to want to get the entire team into testing (and other activities), since Scrum is centered around a team with cross-functional skills. Improved estimation will also help you identify if your stories are too large. As for pair programming, it's a non-issue, since it's not required by Scrum and might not be of use to your team.
Best Answer
It depends on your process. In my world, a defect is something that escapes your quality practices, but it's up to you to define exactly what those practices are.
Some processes have a clear separation between the development team and the test/quality team. This is the world that I live in now. The development team is responsible for unit and integration tests, and the quality team is responsible for deliverable-level (application- and system-level) regression and acceptance tests. In this environment, any issue that was not found by the development team in their testing would be recorded as a defect, because it escaped the development team's quality activities: code reviews and tests.
The agile methods promote a highly integrated, cross-functional team. In this environment, the quality activities are also integrated. Different people may lead different activities, but everyone is involved every step of the way. Because of that, there is not always a clear hand-off from a development team to a quality team. As such, I would consider a defect to be something that makes it through to the end of the iteration and into the release.
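Purely as an illustration, the two definitions above reduce to a simple rule: whether an issue is logged as a defect depends on where it was found relative to the quality practices in force. The stage names in this sketch are assumptions for illustration, not a standard taxonomy.

```python
# Sketch of the classification rule described above. The stage names
# are assumptions for illustration, not a standard taxonomy.
from enum import Enum


class Stage(Enum):
    DEV_TESTING = 1      # unit/integration tests by the dev team
    QUALITY_TESTING = 2  # regression/acceptance tests by a QA team
    RELEASED = 3         # escaped into the release


def is_defect(found_at: Stage, separate_qa_team: bool) -> bool:
    """An issue is a defect once it escapes the quality practices in force."""
    if separate_qa_team:
        # Separate teams: anything the dev team's own testing missed
        # has escaped their quality activities.
        return found_at != Stage.DEV_TESTING
    # Integrated agile team: only what survives the whole iteration
    # and reaches the release counts as a defect.
    return found_at == Stage.RELEASED


print(is_defect(Stage.QUALITY_TESTING, separate_qa_team=True))   # True
print(is_defect(Stage.QUALITY_TESTING, separate_qa_team=False))  # False
```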
However, something to consider is communication. In my first example, I may note an issue during testing that is minor, but that there is no time to fix within the development cycle. I may log a defect to communicate that I found the issue and allow it to be dispositioned. The project lead or quality team may decide it's actually a big deal (to the customer or from a product-quality perspective) and demand that it be fixed in the development cycle, or it may be planned for a later release.
In the end, you need to do whatever enables you to understand your product quality and to communicate the current state of the development effort to the appropriate stakeholders.