Agile – Acceptance tests done first… how can this be accomplished?

Tags: agile, development-process, tdd, testing

The basic gist of most Agile methods is that a feature is not "done" until it has been developed, tested, and in many cases released. This is supposed to happen in quick-turnaround chunks of time, such as "Sprints" in the Scrum process.

TDD is also a common part of Agile, and it states that tests are written first.

My team works on a GUI program that does a lot of specific drawing and similar work. In order to write tests, the testing team needs something that at least attempts to perform the things they are trying to test. We've found no way around this problem.

I can very much see where they are coming from, because if I were trying to write software that targeted a basically mysterious interface, I'd have a very hard time. Although the behavior is fairly well specified, the exact process of automating interaction with the various UI elements seems too feature-specific to let testers write automated scripts to drive something that does not exist yet. Even if they could, a lot of things turn out later to have been missing from the specification.

One thing we considered was having the testers write test "scripts" that are more like a set of steps to be performed, described from a use-case perspective, so that they can be "automated" by a human being. These could then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team, though.
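To make that concrete, here is a rough sketch of what such a step script could look like if we kept it in code from the start, so automating it later is just a matter of filling in the step bodies. Everything here (the step wording, `run_as_manual_checklist`) is hypothetical and not our actual tooling:

```python
# Hypothetical sketch: acceptance steps written up front as a checklist.
# Initially a human performs each step; later the bodies are automated.

STEPS = [
    "Open the drawing canvas from the File > New menu",
    "Select the polygon tool and draw a three-point shape",
    "Verify the shape is rendered with the default fill colour",
    "Undo once and verify the canvas is empty again",
]

def run_as_manual_checklist(steps):
    """Print each step for a human tester and record pass/fail."""
    results = {}
    for step in steps:
        answer = input(f"{step} -- pass? [y/n] ")
        results[step] = answer.strip().lower() == "y"
    return results

if __name__ == "__main__":
    outcome = run_as_manual_checklist(STEPS)
    failed = [step for step, ok in outcome.items() if not ok]
    print("FAILED steps:" if failed else "All steps passed.")
    for step in failed:
        print(" -", step)
```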

The testing part of the team is actually falling behind us by quite a margin. This is one reason the apparently extra time of developing a "script" for a human being to perform just did not happen… they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not really their fault: they're a bottleneck, but they're doing what they should be doing and working as fast as possible. The process itself seems to be set up against them.

Very often we end up going back a month or more to fix bugs in work the testers have only just gotten around to checking. It's an ugly truth that I'd like to do something about.

So what do other teams do to break this fail cascade? How can we get testers ahead of us, and how can we make sure there's actually time for them to write tests for the features we do in a sprint, without making us sit and twiddle our thumbs in the meantime? As it currently stands, getting a feature "done" by Agile definitions would mean having developers work for one week, testers test during the second week, and developers hopefully fixing all the bugs they find in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas…

Best Answer

First, get rid of the split between "testers" and "developers". Everyone tests.

Second, in TDD, the developers write the tests before they write the feature/story.
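For clarity, "write the tests before the feature" looks roughly like this. The `Rectangle` class and its module are made up purely for illustration; the point is the order: the test is written and run (and fails) before any production code exists, and then drives the design.

```python
# Sketch of test-first order: this test exists *before* the production code,
# so it fails first, then the feature is written to make it pass.
# Rectangle and drawing.shapes are hypothetical names for illustration.
import unittest

class RectangleAreaTest(unittest.TestCase):
    def test_area_is_width_times_height(self):
        from drawing.shapes import Rectangle  # does not exist yet -> test fails
        self.assertEqual(Rectangle(width=3, height=4).area(), 12)

if __name__ == "__main__":
    unittest.main()
```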

What you have described is not TDD. [It may be Scrum, though; Scrum is a project-management methodology independent of the development methodology, and it is not relevant to your problem.]

Scenarios where automated testing is impossible are exceedingly rare. Scenarios where automated testing is difficult, expensive, or inconvenient are much more common - but it is precisely these scenarios where automated testing is needed the most.

From the vague description, I assume the software is drawing something on the screen. If what is being drawn is determined by data, formulas, or functional steps, then at least write automated tests that test to the level of the data/formulas/functional steps. If the screen output is deterministic (the same steps result in the same drawing output every time) then test manually once, take a screenshot, and let future tests compare the output to the verified screenshot. If the screen output is nondeterministic and not governed by data, formulas, or functional steps, then you're in that rare area where automated testing may be impossible. But I doubt it.
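As a rough sketch of the verified-screenshot idea: assuming the application can dump its canvas to a PNG (the capture hook and file paths below are assumptions, not your actual API) and that rendering really is deterministic across machines, a golden-image comparison with Pillow could look like this:

```python
# Sketch of a golden-image regression test, assuming the app can export its
# canvas to a PNG (the capture hook is hypothetical) and rendering is
# deterministic. Uses Pillow to diff new output against the verified baseline.
from PIL import Image, ImageChops

BASELINE = "tests/baselines/polygon_tool.png"
CURRENT = "tests/output/polygon_tool.png"

def images_match(baseline_path, current_path):
    """Return True if the two images are pixel-identical."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return False
    # getbbox() returns None when the difference image is entirely black,
    # i.e. the two images are identical.
    return ImageChops.difference(baseline, current).getbbox() is None

def test_polygon_tool_output_matches_baseline():
    # capture_canvas_png(CURRENT)  # hypothetical hook into the application
    assert images_match(BASELINE, CURRENT)
```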

I'm guessing that the only reason testing has not been automated so far is that the developers don't care about it. In TDD, the developers do the testing, so they feel the pain of the boring repetition of testing the same 62-step process a hundred times until all the bugs are gone. Nothing will get an automated testing solution developed faster than making the developers do the testing.