None of us actually feels bad about not writing unit tests at the same time as the actual code.
This is the point you need to address. The culture of your team needs to change such that not writing tests during the sprint (or whatever unit of time you use) becomes just as much a code smell as hard-coding values. Much of that involves peer pressure. Nobody really wants to be viewed as substandard.
Do the tests yourself. Visibly berate yourself when you don't do them. Point out where a "good" programmer would have caught that bug if they'd written unit tests. Nobody wants to be bad; make it clear that this behavior is undesirable and people will follow.
Integration vs. unit tests
You should keep your unit tests and your integration tests completely separate. A unit test should test one thing and one thing only, in complete isolation from the rest of your system. A unit is loosely defined, but it usually boils down to a method or a function.
It makes sense to have a test for each unit so you know its algorithm is implemented correctly, and so that when the implementation is flawed you immediately know what went wrong and where.
Since you test in complete isolation while unit-testing, you use stub and mock objects that behave like the rest of your application. This is where integration tests come in: testing all units in isolation is great, but you also need to know whether the units actually work together.
This means knowing whether a model is actually stored in the database, or whether a warning really is issued after algorithm X fails.
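To make the distinction concrete, here is a minimal sketch in Python (pytest assumed as the runner; `UserService` and `UserRepository` are hypothetical names, not part of any real framework). The unit test mocks the repository away, while the integration test uses a real, in-memory SQLite database to check that the model is actually stored:

```python
import sqlite3
from unittest.mock import Mock


class UserRepository:
    """Stores users in a SQLite database."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def save(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]


class UserService:
    """The unit under test: validates input, then delegates to the repository."""

    def __init__(self, repository):
        self.repository = repository

    def register(self, name):
        if not name:
            raise ValueError("name is required")
        self.repository.save(name)


def test_register_unit():
    # Unit test: the repository is a mock, so only the logic inside
    # UserService.register is exercised, in complete isolation.
    repository = Mock()
    UserService(repository).register("alice")
    repository.save.assert_called_once_with("alice")


def test_register_integration():
    # Integration test: a real repository and a real (in-memory) database,
    # checking that the units work together and the model is actually stored.
    repository = UserRepository(sqlite3.connect(":memory:"))
    UserService(repository).register("alice")
    assert repository.count() == 1
```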
Test driven development
Taking a step back and looking at Test Driven Development (TDD), there are several things to take into account.
- You write your unit test before you actually write the code that makes it pass.
- You make the test pass by writing just enough code to accomplish this.
- Now that the test passes, it is time to take a step back: is there anything to refactor now that this new functionality is in place? You can do this safely since everything is covered by tests (a minimal sketch of one full cycle follows this list).
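Here is a minimal red-green-refactor sketch in Python (pytest assumed as the runner; `slugify` is a made-up example, not anything from a real library):

```python
# Step 1 (red): write the failing test first.
def test_slugify_lowercases_and_replaces_spaces():
    assert slugify("Hello World") == "hello-world"


# Step 2 (green): write just enough code to make the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3 (refactor): with the test green, the implementation can be cleaned up
# or extended (stripping punctuation, say) safely, because any regression will
# be caught by the test.
```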
Integration first vs. integration last
Integration tests fit into this TDD cycle in one of two ways. I know people who like to write them beforehand. They call an integration test an end-to-end test, and define an end-to-end test as one that tests the whole path of a use case: setting up the application, bootstrapping it, going to a controller, executing it, checking the result and output, and so on. They then start out with their first unit test, make it pass, add a second, make it pass, and so on. Slowly, more and more of the integration test passes as well, until the feature is finished.
The other style is building a feature unit test by unit test, adding whatever integration tests are deemed necessary afterwards. The big difference between the two is that with integration tests first you are forced to think about the design of the application up front. This sits somewhat at odds with the premise that TDD is as much about application design as it is about testing.
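For illustration, the kind of end-to-end test the integration-first approach starts from might look like the following sketch (assuming Flask purely as an example web framework; the routes and names are made up). It bootstraps the whole application and drives it through its HTTP surface before any unit test exists:

```python
from flask import Flask, request

users = []  # toy in-memory "storage" to keep the sketch self-contained


def create_app():
    app = Flask(__name__)

    @app.post("/users")
    def add_user():
        users.append(request.form["name"])
        return "", 201

    @app.get("/users")
    def list_users():
        return ",".join(users)

    return app


def test_register_user_end_to_end():
    # Set up and bootstrap the application, go through the controller,
    # and check the visible result and output.
    client = create_app().test_client()
    assert client.post("/users", data={"name": "alice"}).status_code == 201
    assert b"alice" in client.get("/users").data
```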
Practicalities
At my job we keep all our tests in the same project, but in different groups. The continuous integration tool runs whatever is marked as a unit test first; only if those succeed are the slower integration tests (slower because they make real requests, use real databases, etc.) executed as well.
We usually use one test file per class, by the way.
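One way to get that split in practice (assuming pytest; the marker name is arbitrary) is to tag the slow tests and let the CI tool run the suites in two stages, e.g. `pytest -m "not integration"` first and `pytest -m integration` only once the first stage is green:

```python
# The marker must be registered once, e.g. in pyproject.toml:
#   [tool.pytest.ini_options]
#   markers = ["integration: slow tests that hit real databases or services"]

import pytest


def parse_price(text):
    # Trivial example function under test.
    return float(text.replace("$", "").replace(",", ""))


def test_parse_price_unit():
    # Fast, isolated unit test: runs in the first CI stage.
    assert parse_price("$1,250.00") == 1250.0


@pytest.mark.integration
def test_orders_are_persisted_integration():
    # Slow integration test: in a real project this would hit a real database;
    # it only runs once the unit stage has succeeded.
    ...
```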
Suggested reading
- Growing Object-Oriented Software, Guided by Tests: an extremely good example of the integration-test-first methodology.
- The Art of Unit Testing, with Examples in .NET: on unit testing, with examples in .NET :D A very good book on the principles behind unit testing.
- Robert C. Martin on TDD (Free articles): Do read the first two articles he linked there as well.
Best Answer
Above all, you need to have and analyse combined (total) coverage. If you think about it, this is the most natural way to properly prioritize your risks and focus your test development effort.
Combined coverage shows you which code is not covered by any tests at all, i.e. the code that is most risky and needs to be investigated first. Separate coverage reports won't help here, since they don't let you find out whether the code is tested somewhere else or not tested at all.
Separate coverage analysis can also be useful, but it is best done after the combined analysis, and preferably informed by its results.
The purpose of separate coverage analysis differs from that of combined analysis: separate coverage analysis helps you improve the design of your test suites, whereas combined coverage analysis is meant to decide which tests need to be developed, no matter which suite they end up in.
"Oh this gap isn't covered just because we forgot to add that simple unit (integration) test into our unit (integration) suite, let's add it" -- separate coverage and analysis is most useful here, as combined one could hide gaps that you would want to cover in particular suite.
Even from this perspective, it is still desirable to have the combined coverage results as well, in order to analyse the trickier cases. With them, your test development decisions can be more efficient, because you have information about the "partner" test suites.
"There's a gap here, but developing a unit (integration) test to cover it would be really cumbersome, what are our options? Let's check combined coverage... oh it's already covered elsewhere, that is, covering it in our suite isn't critically important."