We have a web application that we want to start running regression tests on, and one of the things I'm supposed to look for when evaluating alternatives is a tool that has a recorder. However, I get the general feeling that record and playback is frowned upon, and that writing tests in code is preferred. What are the possible disadvantages or advantages of using a record and playback tool for regression testing?
Testing – Advantages and Disadvantages of Using Record and Playback for Regression Testing
Related Solutions
To be honest, even a test plan template that had been used successfully by other agile teams might not work well for your team - but seeing what other people are doing is useful for getting ideas about different approaches.
I've also been thinking about the same issue for a while now. My approach so far has been pragmatic: I was working in a small team, initially as the only tester to six developers. Creating documentation instead of testing would have been a very poor choice. Creating documentation so that the developers could run the tests: another very poor choice, IMHO.
Currently, I will add a page to our wiki for each story, and that will hold a set of test ideas, used as a basis for exploratory testing sessions. If necessary, I will also add setup information there. I would prefer to keep that separate, so it remains a resource that can be updated more easily, but at the moment it goes onto the same page. (I generally don't like mixing the "how" and the "what"; it makes it harder to see what you're doing if you have to pick it out of pages of how.) We don't have a template for those pages - I don't feel we've needed it yet. When we do, I will add one, and then tweak it as we learn more. At the moment, it works for me to give an overview of what areas we'll look at when testing, and since it's on the wiki, anyone can add items if they feel something is missing.
I have considered setting up a low-tech testing dashboard, but at the moment I believe our whiteboard is sufficient for us to see how stories are progressing - though as the team grows, we may want to revisit that.
You also wanted to know what other Agile testers are doing - here are a few blog posts that I think you'll find useful:
I very much like Marlena Compton's description of how she uses a wiki for testing at Atlassian: http://marlenacompton.com/?p=1894
Again, a light-weight approach, keeping test objectives tied to the story. She uses a testing dashboard for a high-level view of what features are in a release. Each feature links to a page of test objectives, which are sorted under different headings - function, domain, stress, data, flow, and claims. This gives an "at-a-glance" view of what areas/types of tests you have planned, and you can see instantly if one area has far fewer tests. This may be an approach you'd find useful.
Trish Khoo also has some interesting things to say on using a wiki, more on the level of structuring the individual tests (they've moved to using a Gherkin-style "Given, When, Then" format for their tests): http://ubertest.hogfish.net/?p=243
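As a quick illustration (my own, not taken from Trish Khoo's post), the Gherkin "Given, When, Then" format structures each test as preconditions, an action, and an expected outcome; the feature and values here are invented:

```gherkin
Feature: Password reset
  Scenario: User requests a reset link
    Given a registered user with email "user@example.com"
    When they submit the password reset form
    Then a reset link is emailed to "user@example.com"
```

The appeal on a wiki is that each scenario reads as plain English but is still precise enough to automate later.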
Elizabeth Hendrickson's blog post about specialised test management systems is a little off-topic, but you may find some useful points raised: http://testobsessed.com/2009/10/06/specialized-test-management-systems-are-an-agile-impediment/
When my team implemented automated UI testing a lot of great things happened.
First, the QA team became much more efficient at testing the application as well as more proficient with the application. The lead QA said that he was able to bring new QA members up to speed quickly by introducing them to the test suites for the UI.
Second, the quality of QA tickets that came back to the Dev team was better. Instead of 'Page broke when I clicked Submit button', we got the exact case that failed, so we could see what was input into the form. The QA team also took it a step further by checking all the cases that failed and testing other scenarios around that page to give us a better view of what happened.
Third, the QA team had more time. With this extra time, they were able to sit in on more design meetings. This in turn allowed them to write the new test suite cases at the same time as the Devs were coding those new features.
Also, the stress testing the test suite gave us was worth its weight in gold. It honestly helped me sleep better at night knowing that our app could take pretty much anything thrown at it. We found quite a few pages that buckled under pressure and were able to fix them before go-live. Just perfect.
The last thing we found was that, with some tweaks by the QA team, we could also do some SQL injection testing on our app. We found some vulnerabilities that we were able to fix quickly.
The setup of the UI test suite took a good amount of time. But, once it was there it became a central part of our development process.
Best Answer
I would not recommend record and playback for regression testing, for the following reasons: recorded scripts tend to be brittle, because they hard-code locators and timing rather than waiting intelligently; every recorded test duplicates its steps, so a small UI change can mean re-recording or hand-editing many scripts; and recorded scripts cope poorly with dynamic data, conditional flows, and anything that needs setup or teardown logic.
I would say it's like hand-coding a website even though you have a WYSIWYG editor at your disposal. For learning a tool and seeing its constructs, you can use record and playback, but to make stable, maintainable, standards-based tests you should hand-code them, using the record functionality for help if required.
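To make the contrast concrete, here is a sketch (my own illustration, not from any specific tool) of the same login check written the way a recorder typically emits it versus as a hand-coded page object. A tiny fake driver stands in for a real WebDriver so the structure runs anywhere; the class and locator names are invented:

```python
class FakeDriver:
    """Minimal stand-in for a browser driver: records actions, fakes a result."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))

    def text_of(self, locator):
        # A real driver would read this from the page; we fake a success banner.
        return "Welcome, alice"


# Recorder-style output: raw positional locators repeated inline.
# One layout change breaks every test that recorded this path.
def recorded_login_test(driver):
    driver.type("//div[3]/form/input[1]", "alice")
    driver.type("//div[3]/form/input[2]", "s3cret")
    driver.click("//div[3]/form/button")
    assert "Welcome" in driver.text_of("//div[2]/span")


# Hand-coded: locators live in one page object, so a UI change is one edit.
class LoginPage:
    USER = "css=#username"
    PASS = "css=#password"
    SUBMIT = "css=#login-btn"
    BANNER = "css=.welcome-banner"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER, user)
        self.driver.type(self.PASS, password)
        self.driver.click(self.SUBMIT)
        return self.driver.text_of(self.BANNER)


def page_object_login_test(driver):
    banner = LoginPage(driver).login("alice", "s3cret")
    assert "Welcome" in banner
```

Both tests exercise the same behaviour, but when the login form changes, the recorded version has to be re-recorded while the page-object version needs a one-line locator update.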
The advantages would be speed and accessibility: recording a script is quick, requires little or no programming knowledge, and is a good way to learn the tool and see the constructs it generates.