Unit-testing – Is it sufficient to use acceptance and integration tests instead of unit tests?

bdd, integration-tests, tdd, testing, unit-testing

A short introduction to this question: I have been using TDD, and lately BDD, for over a year now. I use techniques like mocking to make writing my tests more efficient. Recently I started a personal project to write a little money management program for myself. Since there was no legacy code, it was the perfect project to start with TDD. Unfortunately, I did not experience the joy of TDD so much; it even spoiled my fun so much that I gave up on the project.

What was the problem? Well, I used the TDD-like approach of letting the tests / requirements evolve the design of the program. The problem was that more than half of the development time went into writing and refactoring tests. In the end I did not want to implement any more features, because I would have needed to refactor and write too many tests.

At work I have a lot of legacy code. There I write more and more integration and acceptance tests and fewer unit tests. This does not seem to be a bad approach, since bugs are mostly detected by the acceptance and integration tests.

My idea was that, in the end, I could write more integration and acceptance tests than unit tests. As I said, for detecting bugs, unit tests are no better than integration / acceptance tests. Unit tests are also good for the design: since I used to write a lot of them, my classes are always designed to be easily testable. Additionally, the approach of letting the tests / requirements guide the design leads in most cases to a better design. The last advantage of unit tests is that they are faster – but I have written enough integration tests to know that they can be nearly as fast as unit tests.

After looking around the web, I found very similar ideas to mine mentioned here and there. What do you think of this idea?

Edit

Responding to the questions, here is one example where the design was good, but I needed a huge refactoring for the next requirement:

At first there were some requirements to execute certain commands. I wrote an extensible command parser, which parsed commands from some kind of command prompt and called the correct one on the model. The results were represented in a view model class:
[First design diagram]

There was nothing wrong here. All classes were independent of each other, and I could easily add new commands and show new data.
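To make this concrete, here is a minimal sketch of how that first design could have looked; the names (Command, CommandParser, DataViewModel) are my own illustration, not the project's actual code:

```python
# Hypothetical sketch of the first design: the parser both parses the
# input and executes the matching command; a single view model shows
# whatever the commands produce.

class Command:
    """Base class for commands; subclasses implement execute(model)."""
    name = ""

    def execute(self, model):
        raise NotImplementedError


class CommandParser:
    """Parses a line from the command prompt and executes the command."""

    def __init__(self, model, commands):
        self._model = model
        self._commands = {command.name: command for command in commands}

    def parse_and_execute(self, line):
        name = line.split()[0]
        return self._commands[name].execute(self._model)


class DataViewModel:
    """Single view model presenting whatever the last command produced."""

    def __init__(self, parser):
        self._parser = parser
        self.output = None

    def run(self, line):
        self.output = self._parser.parse_and_execute(line)
```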

The next requirement was that every command should have its own view representation – some kind of preview of the command's result. I redesigned the program to better fit the new requirement:
[Second design diagram]

This was also good because now every command has its own view model and therefore its own preview.

The thing is that the command parser was changed to use token-based parsing of the commands and was stripped of its ability to execute them. Every command got its own view model, and the data view model only knows the current command view model, which in turn knows the data that has to be shown.
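Again as a rough sketch, with the same caveat that all names are illustrative, the second design looked roughly like this; the parser only tokenizes, and each command brings its own view model:

```python
# Hypothetical sketch of the second design: parsing and execution are
# separated, each command has its own view model (its "preview"), and
# the data view model only knows the currently selected command.

class CommandParser:
    """Now only turns an input line into tokens; no execution."""

    def tokenize(self, line):
        return line.split()


class CommandViewModel:
    """Every command gets its own view model, providing its preview."""

    def preview(self):
        raise NotImplementedError


class AddEntryViewModel(CommandViewModel):
    def __init__(self, amount):
        self._amount = amount

    def preview(self):
        return f"Will add an entry of {self._amount}"


class DataViewModel:
    """Knows only the current command view model, which knows its data."""

    def __init__(self):
        self.current = None

    def select(self, command_view_model):
        self.current = command_view_model

    def preview(self):
        return self.current.preview() if self.current else ""
```

Seen side by side with the first sketch, it is clear why nearly every unit test of the parser and the view models had to change, even though the end-to-end behaviour stayed the same.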

All I wanted to know at this point was whether the new design broke any existing requirement. I did not have to change ANY of my acceptance tests, but I had to refactor or delete nearly EVERY unit test, which was a huge pile of work.

What I wanted to show here is a common situation that happened often during development. There was no problem with the old or the new design; they just changed naturally with the requirements. As I understood it, this is one advantage of TDD: the design evolves.

Conclusion

Thanks for all the answers and discussions. To sum up the discussion, I have come up with an approach which I will try on my next project.

  • First of all, I write all tests before implementing anything, as I always did.
  • For requirements, I first write some acceptance tests which test the whole program. Then I write some integration tests for the components where I need to implement the requirement. If one component works closely together with another component to implement the requirement, I also write some integration tests where both components are tested together. Last but not least, if I have to write an algorithm or any other class with many permutations – e.g. a serializer – I write unit tests for those particular classes. All other classes are not covered by any unit tests (see the sketch after this list).
  • For bugs, the process can be simplified. Normally a bug is caused by one or two components; in this case, I write one integration test for those components which reproduces the bug. If it is related to an algorithm, I only write a unit test. If it is not easy to detect the component where the bug occurs, I write an acceptance test to locate the bug – this should be the exception.
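To illustrate the layering, here is a hedged sketch using Python's built-in unittest; MoneyApp, CsvSerializer, and parse_amount are hypothetical placeholders, not the actual project:

```python
# Sketch of the layered approach: one acceptance test per requirement,
# integration tests for components working together, and unit tests
# only for high-permutation code. All names are hypothetical.
import unittest


def parse_amount(text):
    """Algorithm-like code with many input permutations -> unit tested."""
    return round(float(text.replace(",", ".")), 2)


class CsvSerializer:
    """High-permutation class from the list above -> unit tested."""

    def serialize(self, entries):
        return "\n".join(f"{name};{amount}" for name, amount in entries)


class MoneyApp:
    """The whole program, driven through its outermost interface."""

    def __init__(self):
        self.entries = []

    def execute(self, line):
        name, amount = line.split()
        self.entries.append((name, parse_amount(amount)))


class AcceptanceTest(unittest.TestCase):
    """Acceptance level: exercise the whole program per requirement."""

    def test_adding_an_entry_is_visible_in_the_program(self):
        app = MoneyApp()
        app.execute("groceries 12,50")
        self.assertEqual(app.entries, [("groceries", 12.5)])


class ExportIntegrationTest(unittest.TestCase):
    """Integration level: two components tested working together."""

    def test_entries_round_trip_through_the_serializer(self):
        app = MoneyApp()
        app.execute("rent 400")
        self.assertEqual(CsvSerializer().serialize(app.entries), "rent;400.0")


class SerializerUnitTest(unittest.TestCase):
    """Unit level: only the high-permutation class gets unit tests."""

    def test_serializes_entries_as_semicolon_separated_lines(self):
        self.assertEqual(CsvSerializer().serialize([("rent", 400.0)]), "rent;400.0")


if __name__ == "__main__":
    unittest.main()
```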

Best Answer

It's comparing apples and oranges.

Integration tests, acceptance tests, unit tests, behaviour tests - they are all tests and they will all help you improve your code but they are also quite different.

I'm going to go over each of the different kinds of tests, as I see them, and hopefully explain why you need a blend of all of them:

Integration tests:

Simply put, these test that different component parts of your system integrate correctly - for example, you might simulate a web service request and check that the result comes back. I would generally use real(ish) static data and mocked dependencies to ensure that the result can be consistently verified.
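A minimal sketch of such a test, assuming a hypothetical ReportService and using Python's unittest.mock for the mocked dependency:

```python
# Integration-style test: the service is exercised against a mocked
# transport with static data, so the result can be verified on every
# run. ReportService and fetch_entries are hypothetical names.
import unittest
from unittest.mock import Mock


class ReportService:
    """Builds a report from whatever the web client returns."""

    def __init__(self, web_client):
        self._web_client = web_client

    def monthly_total(self, month):
        entries = self._web_client.fetch_entries(month)
        return sum(entry["amount"] for entry in entries)


class ReportServiceIntegrationTest(unittest.TestCase):
    def test_monthly_total_sums_entries_from_the_service(self):
        web_client = Mock()
        web_client.fetch_entries.return_value = [
            {"amount": 10.0},  # real-ish static data
            {"amount": 2.5},
        ]
        self.assertEqual(ReportService(web_client).monthly_total("2024-01"), 12.5)
        web_client.fetch_entries.assert_called_once_with("2024-01")
```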

Acceptance tests:

An acceptance test should directly correlate to a business use case. It can be huge ("trades are submitted correctly") or tiny ("filter successfully filters a list") - it doesn't matter; what matters is that it is explicitly tied to a specific user requirement. I like to focus on these for test-driven development because it means we have a good reference from tests to user stories for dev and QA to verify.
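A minimal sketch of the tiny variant ("filter successfully filters a list"), with filter_entries as a hypothetical stand-in for the feature under test:

```python
# Acceptance-style test tied directly to a user story; the helper
# below is a hypothetical stand-in for the real feature.
import unittest


def filter_entries(entries, category):
    return [entry for entry in entries if entry["category"] == category]


class FilterAcceptanceTest(unittest.TestCase):
    """User story: as a user, I can filter the list by category."""

    def test_filter_successfully_filters_a_list(self):
        entries = [
            {"category": "food", "amount": 12.5},
            {"category": "rent", "amount": 400.0},
        ]
        self.assertEqual(
            filter_entries(entries, "food"),
            [{"category": "food", "amount": 12.5}],
        )
```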

Unit tests:

These cover small, discrete units of functionality that may or may not make up an individual user story by themselves. For example, a user story which says that we retrieve all customers when we access a specific web page can be an acceptance test (simulate hitting the web page and check the response), but it may also involve several unit tests (verify that security permissions are checked, verify that the database connection queries correctly, verify that any code limiting the number of results is executed correctly) - these are all "unit tests" that aren't a complete acceptance test.
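As a hedged example, the result-limiting code from that list could get a unit test of its own; limit_results is a hypothetical name used only for illustration:

```python
# Unit test for one small, discrete unit from the example above,
# isolated from the web page and the database.
import unittest


def limit_results(customers, max_results=100):
    """The unit under test: cap the number of returned customers."""
    return customers[:max_results]


class LimitResultsUnitTest(unittest.TestCase):
    def test_caps_the_number_of_results(self):
        self.assertEqual(limit_results(list(range(500)), 100), list(range(100)))

    def test_leaves_short_lists_untouched(self):
        self.assertEqual(limit_results([1, 2, 3], 100), [1, 2, 3])
```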

Behaviour tests:

These define what the flow of an application should be for a specific input. For example, "when a connection cannot be established, verify that the system retries the connection." Again, this is unlikely to be a full acceptance test, but it still allows you to verify something useful.
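A minimal sketch of that retry behaviour test, assuming a hypothetical Connector class and a mocked connection:

```python
# Behaviour-style test: the mocked connection fails once, and we
# verify that the system retries. Connector is a hypothetical class.
import unittest
from unittest.mock import Mock


class Connector:
    """Retries the connection a fixed number of times before giving up."""

    def __init__(self, connection, retries=3):
        self._connection = connection
        self._retries = retries

    def connect(self):
        for attempt in range(self._retries):
            try:
                return self._connection.open()
            except ConnectionError:
                if attempt == self._retries - 1:
                    raise


class RetryBehaviourTest(unittest.TestCase):
    def test_retries_when_connection_cannot_be_established(self):
        connection = Mock()
        connection.open.side_effect = [ConnectionError(), "connected"]
        self.assertEqual(Connector(connection).connect(), "connected")
        self.assertEqual(connection.open.call_count, 2)
```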

These are all my own definitions, formed through much experience of writing tests; I don't like to focus on the textbook approaches - rather, focus on what gives your tests value.
