Integration vs. unit tests
You should keep your unit tests and your integration tests completely separate. A unit test should test one thing and one thing only, in complete isolation from the rest of your system. A unit is loosely defined, but it usually boils down to a single method or function.
It makes sense to have tests for each unit so you know its algorithm is implemented correctly, and so that when an implementation is flawed you immediately know what went wrong and where.
Since unit tests run in complete isolation, you use stub and mock objects to stand in for the rest of your application. This is where integration tests come in: testing all units in isolation is great, but you also need to know that the units actually work together.
This means knowing whether a model is actually stored in the database, or whether a warning is really issued after algorithm X fails.
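To make the distinction concrete, here is a minimal sketch in Python using the standard library's `unittest.mock`. The service and repository names are illustrative, not from the text above:

```python
from unittest.mock import Mock

# Hypothetical code under test: a service that validates a report
# and hands it to a repository for storage.
class ReportService:
    def __init__(self, repository):
        self.repository = repository

    def publish(self, report):
        if not report.get("title"):
            raise ValueError("report needs a title")
        self.repository.save(report)
        return True

# Unit test: the repository is a mock, so no database is touched and
# only the publish() logic itself is exercised.
def test_publish_saves_report_unit():
    repo = Mock()
    service = ReportService(repo)
    assert service.publish({"title": "Q3 figures"}) is True
    repo.save.assert_called_once_with({"title": "Q3 figures"})

test_publish_saves_report_unit()
```

The corresponding integration test would wire in the real repository instead of the mock, then query the real database to confirm the row actually exists.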
Test driven development
Taking a step back and looking at Test-Driven Development (TDD), there are several things to take into account.
- You write your unit test before you actually write the code that makes it pass.
- You make the test pass by writing just enough code to accomplish this, and no more.
- Now that the test passes, it is time to take a step back: is there anything to refactor with this new functionality in place? You can do so safely, since everything is covered by tests.
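The three steps above can be sketched in one tiny red-green-refactor round; the `slugify` function and its behaviour are purely illustrative:

```python
# Step 1 (red): the test is written first and fails, because
# slugify() does not exist yet.
def test_slugify_replaces_spaces():
    assert slugify("hello world") == "hello-world"

# Step 2 (green): just enough code to make the test pass.
def slugify(text):
    return text.replace(" ", "-")

# Step 3 (refactor): the test stays green while the code is cleaned
# up, here by also normalising surrounding whitespace and case.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

test_slugify_replaces_spaces()  # still passes after the refactor
```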
Integration first vs Integration last
Integration tests fit into this TDD cycle in one of two ways. Some people like to write them beforehand. They call an integration test an end-to-end test, and define an end-to-end test as one that exercises the whole path of a use case (think of setting up the application, bootstrapping it, going to a controller, executing it, checking the result and output, and so on). They then start with their first unit test, make it pass, add a second, make it pass, and so on. Slowly, more and more of the integration test passes as well, until the feature is finished.
The other style is building a feature unit test by unit test, and adding whatever integration tests are deemed necessary afterwards. The big difference between the two is that with integration-test-first you're forced to think about the design of the application up front, which sits somewhat at odds with the premise that TDD should drive out the design as much as verify it.
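An end-to-end test in the integration-first style might look like the sketch below. The toy `App` and its routes are hypothetical stand-ins for a real framework:

```python
# A toy application, just enough to show the shape of the test.
class App:
    def __init__(self):
        self.routes = {}

    def route(self, path, controller):
        self.routes[path] = controller

    def dispatch(self, path):
        return self.routes[path]()  # execute the controller for this path

def bootstrap():
    app = App()
    app.route("/greeting", lambda: {"status": 200, "body": "hello"})
    return app

# Written before any unit test exists; it fails until the whole use
# case works, and passes piece by piece as units are completed.
def test_greeting_end_to_end():
    app = bootstrap()                     # set up and bootstrap the application
    response = app.dispatch("/greeting")  # go to the controller, execute it
    assert response["status"] == 200      # check the result
    assert response["body"] == "hello"    # check the output

test_greeting_end_to_end()
```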
Practicalities
At my job we keep all our tests in the same project, but in different groups. The continuous integration tool runs the tests marked as unit tests first; only if those succeed are the slower integration tests (slower because they make real requests, use real databases, and so on) executed as well.
We usually use one test file per class, by the way.
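One hypothetical way to get this split with Python's `unittest` is to gate the slow group behind an environment flag that the CI tool only sets once the unit suite has passed; the test classes and flag name below are illustrative:

```python
import os
import unittest

# The CI tool runs the suite once without the flag (unit tests only),
# and only on success runs it again with RUN_INTEGRATION=1.
RUN_INTEGRATION = os.environ.get("RUN_INTEGRATION") == "1"

class CalculatorUnitTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 2, 3)  # pure logic, no I/O: always runs

@unittest.skipUnless(RUN_INTEGRATION, "integration tests disabled")
class DatabaseIntegrationTest(unittest.TestCase):
    def test_record_round_trip(self):
        pass  # would talk to a real database here
```

Test runners with marker or tag support (pytest markers, for example) achieve the same grouping more declaratively.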
Suggested reading
- Growing Object-Oriented Software, Guided by Tests: an extremely good example of the integration-test-first methodology.
- The Art of Unit Testing: with Examples in .NET: a very good book on the principles behind unit testing.
- Robert C. Martin on TDD (Free articles): Do read the first two articles he linked there as well.
You've laid out good arguments for and against unit testing. So you have to ask yourself, "Do I see value in the positive arguments that outweigh the costs in the negative ones?" I certainly do:
- Small-and-fast is a nice aspect of unit testing, although by no means the most important.
- Locating-bug[s]-easier is extremely valuable. Many studies of professional software development have shown that the cost of a bug rises steeply as it ages and moves down the software-delivery pipeline.
- Finding-masked-bugs is valuable. When you know that a particular component has all of its behaviors verified, you can use it in ways that it was not previously used, with confidence. If the only verification is via integration testing, you only know that its current uses behave correctly.
- Mocking is costly in real-world cases, and maintaining mocks is doubly so. In fact, when mocking "interesting" objects or interfaces, you might even need tests that verify that your mock objects correctly model your real objects!
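That last point in practice: a "contract test" runs the same checks against both the real object and its fake, so the fake cannot silently drift away from the real behaviour. The counter classes below are illustrative:

```python
# Real implementation (imagine it wrapping a database sequence).
class RealCounter:
    def __init__(self):
        self._n = 0

    def increment(self):
        self._n += 1
        return self._n

# In-memory stand-in used by the unit tests.
class FakeCounter:
    def __init__(self):
        self._n = 0

    def increment(self):
        self._n += 1
        return self._n

# The shared contract: one set of assertions, run against both.
def check_counter_contract(counter):
    assert counter.increment() == 1
    assert counter.increment() == 2

check_counter_contract(RealCounter())
check_counter_contract(FakeCounter())
```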
In my book, the pros outweigh the cons.
Best Answer
The easiest metric is to ask, "when was the last time this integration test legitimately failed?" If it has been a long time (there have been a lot of changes) since the integration test failed, then the unit tests are probably doing a good enough job. If the integration test has failed recently, then there was a defect that was not caught by the unit tests.
My preference would generally be to increase the robustness of the integration tests, to the point where they can be reliably run unattended. If they take a long time to run, run them overnight; they are still valuable even if they only run occasionally. If the tests are too fragile or require manual intervention, it may not be worth the time spent keeping them running, and you may consider discarding those that succeed most often.