A good integration test strategy

integration-tests testing

I'm getting started on a project wherein I want to have pretty thorough test coverage, and I have the luxury of driving the test strategy. I've settled on a workable plan for unit testing, and I've also settled on using Gherkin to describe features and a port of Cucumber to run the scenarios as end-to-end acceptance tests.

The problem is that I sense that there is a gap in between those two layers. I can test all my units in isolation, and I can test that my features work, but I can think of other things that I'm going to want to test.

I'm also coming from a different project with (poorly implemented) automated tests that are very brittle and a maintenance nightmare; the goal of those tests was mostly to replace manual regression testing. Writing more maintainable tests is a must, but at a higher level I'm not sure that our tests are the right ones.

As an example, given a web application, say there's a form to add an event with start and end dates. As an end-to-end test, we can validate that you can, in fact, add an event. But if your start date is after your end date, then you get an error message, and I wouldn't think that how a trivial user input error is handled belongs in a feature file. On the other hand, there seems to be a pretty strong belief that unit testing the UI isn't worth it; instead, one should do automated integration testing.
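For concreteness, here is a minimal sketch of the kind of validation I mean, assuming a TypeScript web app; the names, types, and messages are made up purely for illustration:

```typescript
// Hypothetical validation logic for the event form described above.
// Kept as a pure function so it can be exercised without any UI at all.
export interface EventInput {
  title: string;
  startDate: Date;
  endDate: Date;
}

export function validateEvent(input: EventInput): string[] {
  const errors: string[] = [];
  if (!input.title.trim()) {
    errors.push("Title is required.");
  }
  if (input.startDate > input.endDate) {
    errors.push("Start date must be on or before the end date.");
  }
  return errors;
}
```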

So what do I do for this code?

Do I unit test the components related to error messages in general, as well as that this form is going to show them, and skip automating that they actually appear?
Do I do the above and then automate that just one error message somewhere shows as intended, and assume the rest will work?
Do I try to automate every different potential failure case for every form?

This hits the middle ground of integration testing, of which I am leery. Based on my experience, maintaining a large number of integration tests does not seem worth the value. On the other hand, there is functionality above the unit level and below the feature level that I would ideally like tested, both in the UI and outside of it. And I'm concerned about what kind of confidence automated regression can bring if it isn't hitting everything.

I am perfectly willing to write integration tests, but in the context of when to write tests, what tests to write, and how many to write, what is a good approach to addressing this problem?

Best Answer

It all comes down to being cost-effective and pragmatic. Because you are in charge of testing for this project, be careful not to reach for every type of testing approach, change things that have already worked, or introduce a pile of new tools and practices just for the sake of doing so. I am not saying that is your plan; I just want to state it up front.

If the GUI contains a lot of logic (which it shouldn't), then that logic must be tested. Ideally the GUI contains very little logic, or none, though realistically that is hard to achieve. If it does contain a lot of logic, consider whether it is feasible, and cheaper, to refactor the GUI code rather than create and maintain a large suite of slow GUI tests. If the GUI contains little to no logic, you can probably get away with not testing it at all, or with just a few very basic tests that make sure the "connections" are all wired up. And since there are only a few of them, you don't have to worry about their speed, and the maintenance overhead stays low.
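As an example of what that looks like in practice: if the start/end date rule from the question lives in a plain function like the earlier sketch, every failure case can be covered by fast unit tests, and the GUI only needs a trivial check (or none) that it renders whatever errors it is given. A rough sketch, assuming a Jest-style test runner and the hypothetical validateEvent function above:

```typescript
// Fast unit tests covering every validation failure case; no browser needed.
// Assumes the validateEvent sketch from the question lives in ./validateEvent.
import { validateEvent } from "./validateEvent";

describe("validateEvent", () => {
  it("accepts a well-formed event", () => {
    const errors = validateEvent({
      title: "Team meeting",
      startDate: new Date("2024-05-01"),
      endDate: new Date("2024-05-02"),
    });
    expect(errors).toHaveLength(0);
  });

  it("rejects a start date after the end date", () => {
    const errors = validateEvent({
      title: "Team meeting",
      startDate: new Date("2024-05-03"),
      endDate: new Date("2024-05-02"),
    });
    expect(errors).toContain("Start date must be on or before the end date.");
  });

  it("rejects a missing title", () => {
    const errors = validateEvent({
      title: "   ",
      startDate: new Date("2024-05-01"),
      endDate: new Date("2024-05-02"),
    });
    expect(errors).toContain("Title is required.");
  });
});
```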

You should at least integration test the rest of your system (everything except the GUI), though. The system should be designed so that it doesn't require a GUI to drive it; it could be driven through an (automated) command-line interface, a web service interface, etc.
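To illustrate what "driving the system without the GUI" could look like, here is a hedged sketch of an integration test that exercises a hypothetical service layer against an in-memory store; the class names and design are assumptions for the example, not a prescription for your architecture:

```typescript
// Integration test below the GUI: exercises a hypothetical service layer plus
// an in-memory repository, so no browser or UI automation is involved.
// EventService and InMemoryEventRepository are illustrative names only.
import { validateEvent } from "./validateEvent";

interface StoredEvent {
  title: string;
  startDate: Date;
  endDate: Date;
}

class InMemoryEventRepository {
  private events: StoredEvent[] = [];
  save(event: StoredEvent): void {
    this.events.push(event);
  }
  count(): number {
    return this.events.length;
  }
}

class EventService {
  constructor(private repo: InMemoryEventRepository) {}
  // Returns validation errors; persists the event only when there are none.
  addEvent(input: StoredEvent): string[] {
    const errors = validateEvent(input);
    if (errors.length === 0) {
      this.repo.save(input);
    }
    return errors;
  }
}

describe("EventService integration", () => {
  it("persists a valid event and rejects an invalid one", () => {
    const repo = new InMemoryEventRepository();
    const service = new EventService(repo);

    const okErrors = service.addEvent({
      title: "Release planning",
      startDate: new Date("2024-06-01"),
      endDate: new Date("2024-06-02"),
    });
    expect(okErrors).toHaveLength(0);

    const badErrors = service.addEvent({
      title: "Broken event",
      startDate: new Date("2024-06-05"),
      endDate: new Date("2024-06-04"),
    });
    expect(badErrors.length).toBeGreaterThan(0);

    // Only the valid event should have been stored.
    expect(repo.count()).toBe(1);
  });
});
```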

Unit tests should be the most numerous, followed by a smaller number of these "core" integration tests, and only a handful of large end-to-end/GUI tests.
