Isolating test data in acceptance tests

Tags: acceptance-testing, data, testing

I'm looking for guidance on how to keep my acceptance tests isolated. Right now, the thing preventing me from running the tests in parallel is the database records the tests manipulate. I've written helpers that perform the inserts and deletes before each test executes, to make sure the state is correct. But now I can't run the tests in parallel against the same database without uniquely generating the test data fields for each test. For example:

When testing creating a row, I'll delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar.

When testing that a duplicate row is not allowed to be created, I'll insert a row with column A = foo and column B = bar and then use the UI to try to do the exact same thing. This displays an error message in the UI, as expected.

These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
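To make it concrete, the setup helpers I'm using look roughly like this (the table and column names are placeholders, and sqlite3 just stands in for the real database driver):

```python
import sqlite3  # stand-in for whatever driver the real database uses

def clear_records(conn, column_a, column_b):
    """Delete any leftover rows so the test starts from a known state."""
    conn.execute(
        "DELETE FROM records WHERE column_a = ? AND column_b = ?",
        (column_a, column_b),
    )
    conn.commit()

def insert_record(conn, column_a, column_b):
    """Seed a row, e.g. for the duplicate-row test."""
    conn.execute(
        "INSERT INTO records (column_a, column_b) VALUES (?, ?)",
        (column_a, column_b),
    )
    conn.commit()
```

Run in parallel against the same database, one test's clear_records can wipe out the row another test has just seeded.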

Best Answer

Personally, it sounds as if your acceptance tests have taken on the responsibilities of integration tests, and that you are trying to kill two birds with one stone, as the saying goes.

In the traditional Waterfall model, a single acceptance test determines whether a single requirement has been met. If you are developing against a strict SRS document, you may find that even basic input validation is explicitly specified, and by the nature of acceptance testing it needs to be verified manually.

In the Agile model, however, an acceptance test verifies a single user story: a high-level test confirming that a high-level stakeholder business need has been met. In Agile, fine-grained concerns like input validation are typically taken as understood rather than spelled out, unless that validation is unique or specific to a business need.

Simply put, in either case, verifying that a duplicate record is not entered into the database is far too low level for a user story, and one could argue it is a waste of valuable time to elevate it to the importance of an acceptance test. Quality assurance, or the tester for that feature, should be able to verify that the high-level requirement has been met with no obvious defects.

Your tests need to be split up:

Automated Unit Tests

These are your lowest-level tests, typically written and run by the developer. They verify the functionality of a specific component or application layer in isolation, independent of other areas of the application, and they are reproducible so they can be run any number of times.
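For example, the duplicate check itself can be unit tested against an in-memory fake instead of the real database. This is only a sketch, and every name in it is hypothetical:

```python
import pytest

class FakeRecordRepository:
    """In-memory stand-in for the real data access layer."""
    def __init__(self, existing=None):
        self._rows = set(existing or [])

    def exists(self, column_a, column_b):
        return (column_a, column_b) in self._rows

def ensure_not_duplicate(repo, column_a, column_b):
    """Hypothetical validation routine from the application layer."""
    if repo.exists(column_a, column_b):
        raise ValueError("duplicate record")

def test_duplicate_is_rejected():
    repo = FakeRecordRepository(existing=[("foo", "bar")])
    with pytest.raises(ValueError):
        ensure_not_duplicate(repo, "foo", "bar")

def test_new_record_is_allowed():
    repo = FakeRecordRepository()
    ensure_not_duplicate(repo, "foo", "bar")  # should not raise
```

No database, no UI, and nothing shared between tests, so running them in parallel is a non-issue.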

Integration Tests

Like unit tests, these can verify scenarios such as creating a new Person record, but they do so across all system layers, exercising the integration of all application dependencies to confirm that creating a new Person record, or preventing a duplicate Person record from being created, behaves correctly.
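As a sketch, such a test might sit at the service layer rather than drive the UI; PersonService, DuplicatePersonError, and the person_service fixture are hypothetical names standing in for whatever your application exposes:

```python
import pytest
from app.services import PersonService, DuplicatePersonError  # hypothetical module

def test_create_person_and_reject_duplicate(person_service: PersonService):
    # Assumes a fixture that provides a service wired to a real test database.
    person_service.create(first_name="Ada", last_name="Lovelace")

    # An identical second creation should be rejected somewhere below the UI.
    with pytest.raises(DuplicatePersonError):
        person_service.create(first_name="Ada", last_name="Lovelace")
```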

The Case for Integration Tests

One of the most valuable aspects of these tests is that there are various strategies not only to automate them, but also to isolate them: each test can be wrapped in a database transaction so that parallel tests do not conflict, and that transaction can be rolled back when the test is over, reverting the database to a clean slate and making the tests reproducible.
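A minimal sketch of that transaction-per-test pattern, assuming pytest and SQLAlchemy as the tooling (the connection string and table are placeholders):

```python
import pytest
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://localhost/appdb")  # placeholder connection string

@pytest.fixture
def db_connection():
    conn = engine.connect()
    trans = conn.begin()      # open the enclosing transaction
    try:
        yield conn            # the test does all of its work on this connection
    finally:
        trans.rollback()      # undo everything the test inserted or updated
        conn.close()

def test_create_record(db_connection):
    db_connection.execute(
        text("INSERT INTO records (column_a, column_b) VALUES ('foo', 'bar')")
    )
    count = db_connection.execute(
        text("SELECT COUNT(*) FROM records "
             "WHERE column_a = 'foo' AND column_b = 'bar'")
    ).scalar_one()
    assert count == 1
    # No cleanup needed: the fixture rolls the transaction back, and a test
    # running in parallel on its own connection never sees this uncommitted row.
```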

I gather from your question that most of your "acceptance tests" aren't terribly interesting as acceptance tests and would be better automated at a lower level. This isn't to say that acceptance testing shouldn't occur, but it should happen at a much higher level, where the issues you bring up no longer matter.
