Look into using Hypersonic (HSQLDB), or another in-memory DB, for unit testing. Not only will the tests execute faster, but side effects stop mattering, since nothing touches a real database. (Rolling back the transactions after each test also makes it possible to run many tests on the same instance.)
This will also force you to create data mockups, which is a good thing IMO: it means something that occurs on the production database can't inexplicably start failing your tests, and it gives you a starting point for what a "clean database install" would look like. That helps if you suddenly need to deploy a new instance of the application with no connection to your existing production site.
Yes, I do use this method myself, and yes it was a PITA to set up the FIRST time, but it's more than paid for itself in the lifetime of the project I'm about to complete. There are also a number of tools that help with the data mockups.
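To make the idea concrete, here is a minimal sketch in Python, using the stdlib's sqlite3 in-memory mode as a stand-in for Hypersonic; the table and data mockup are invented for illustration:

```python
import sqlite3
import unittest

class UserRepositoryTest(unittest.TestCase):
    def setUp(self):
        # A fresh in-memory database per test: fast, and no leftover state.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        # The data mockup: the "clean database install" starting point.
        self.conn.execute("INSERT INTO users (name) VALUES ('alice')")
        self.conn.commit()

    def tearDown(self):
        # With a long-lived in-memory instance (as with Hypersonic), rolling
        # back here is what lets many tests share the same instance safely.
        self.conn.rollback()
        self.conn.close()

    def test_insert_user(self):
        self.conn.execute("INSERT INTO users (name) VALUES ('bob')")
        count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 2)
```

The same shape carries over to JDBC or any ORM: build the schema and mockup in setup, roll back in teardown.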
Integration vs. unit tests
You should keep your unit tests and your integration tests completely separated. Your unit tests should test one thing and one thing only and in complete isolation of the rest of your system. A unit is loosely defined but it usually boils down to a method or a function.
It makes sense to have tests for each unit so you know their algorithms are implemented correctly and you immediately know what went wrong where, if the implementation is flawed.
Since you test in complete isolation while unit testing, you use stub and mock objects to stand in for the rest of your application. This is where integration tests come in: testing all units in isolation is great, but you also need to know whether the units actually work together.
This means knowing if a model is actually stored in a database or if a warning is really issued after algorithm X fails.
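A sketch of that split, using a hypothetical run_algorithm unit invented for this example (the unit test mocks its collaborator; the integration test wires in a real one):

```python
import unittest
from unittest import mock

def run_algorithm(data, warn):
    # Hypothetical unit: sums its input, or issues a warning on failure.
    if not data:
        warn("algorithm X failed: empty input")
        return None
    return sum(data)

class AlgorithmUnitTest(unittest.TestCase):
    def test_warning_issued_on_failure(self):
        # Unit test: the collaborator is a mock, so only this unit is tested.
        warn = mock.Mock()
        self.assertIsNone(run_algorithm([], warn))
        warn.assert_called_once_with("algorithm X failed: empty input")

class AlgorithmIntegrationTest(unittest.TestCase):
    def test_units_work_together(self):
        # Integration test: a real warning sink is wired in, checking that
        # the units actually cooperate.
        warnings = []
        self.assertEqual(run_algorithm([1, 2, 3], warnings.append), 6)
        self.assertEqual(warnings, [])
```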
Test driven development
Taking a step back and looking at Test Driven Development (TDD), there are several things to take into account.
- You write your unit test before you actually write the code that makes it pass.
- You make the test pass, writing just enough code to accomplish this.
- Now that the test passes it is time to take a step back. Is there anything to refactor with this new functionality in place? You can do this safely since everything is covered by tests.
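The cycle above, compressed into one file; slugify is a made-up function used purely to narrate the three steps:

```python
import unittest

# Step 1 (red): the test is written first; it fails because slugify
# does not exist yet.
class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (green): write just enough code to make the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3 (refactor): with the test in place, slugify can be restructured
# safely; the test immediately tells you if the behavior changed.
```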
Integration first vs Integration last
Integration tests fit into this TDD cycle in one of two ways. I know of people who like to write them beforehand. They call an integration test an end-to-end test, defined as a test that completely exercises the whole path of a use case (think of setting up an application, bootstrapping it, going to a controller, executing it, checking the result, output, etc.). Then they start out with their first unit test, make it pass, add a second, make it pass, and so on. Slowly, more and more parts of the integration test pass as well, until the feature is finished.
The other style is building a feature unit test by unit test and adding whatever integration tests are deemed necessary afterwards. The big difference between the two is that with integration-test-first you're forced to think about the design of the application up front. This fits the premise that TDD is about application design as much as about testing.
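An end-to-end test in the first style might look like the sketch below; the "application" (a registration use case with parse_request and register as its units) is entirely hypothetical:

```python
import unittest

# A deliberately tiny, hypothetical "application"; each function is a unit.
def parse_request(raw):
    name, email = raw.split(",")
    return {"name": name.strip(), "email": email.strip()}

def register(user, db):
    db.append(user)
    return {"status": "created", "user": user["name"]}

class RegistrationEndToEndTest(unittest.TestCase):
    # Written before any unit test: it exercises the whole path of the
    # use case and only passes once every unit underneath is finished.
    def test_registration_path(self):
        db = []
        result = register(parse_request("Ada, ada@example.com"), db)
        self.assertEqual(result["status"], "created")
        self.assertEqual(db[0]["email"], "ada@example.com")
```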
Practicalities
At my job we have all our tests in the same project. There are different groups, however. The continuous integration tool runs what are marked as unit tests first. Only if those succeed are the slower integration tests (slower because they make real requests, use real databases, etc.) executed as well.
We usually use one test file for one class by the way.
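One simple way to get that grouping, sketched with stdlib unittest and a made-up RUN_INTEGRATION environment variable as the opt-in switch:

```python
import os
import unittest

# Hypothetical convention: the CI tool sets RUN_INTEGRATION=1 for the
# second, slower stage; developers can set it on demand.
RUN_INTEGRATION = os.environ.get("RUN_INTEGRATION") == "1"

class FastUnitTest(unittest.TestCase):
    # Always runs: milliseconds, no outside resources.
    def test_pure_logic(self):
        self.assertEqual(sorted([3, 1, 2]), [1, 2, 3])

@unittest.skipUnless(RUN_INTEGRATION, "slow: needs a real database/network")
class SlowIntegrationTest(unittest.TestCase):
    # Only runs when the CI tool (or a developer) opts in explicitly.
    def test_real_round_trip(self):
        self.assertTrue(True)  # imagine a real database round-trip here
```

Separate folders, naming conventions, or test-framework categories achieve the same thing; the point is that the fast group can run constantly while the slow group is gated.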
Suggested reading
- Growing Object-Oriented Software, Guided by Tests: an extremely good example of the integration-test-first methodology.
- The Art of Unit Testing, with Examples in .NET: a very good book on the principles behind unit testing.
- Robert C. Martin on TDD (free articles): do read the first two articles linked there as well.
Best Answer
The separation is not unit versus integration test. The separation is fast versus slow tests.
How you organize these tests to make them easier to run is really up to you. Separate folders are a good start, but other annotations, such as traits or [Fact] attributes, can work just as well.
I think there is a fundamental misconception here about what constitutes an integration test and a unit test. The beginning of Flater's answer gives you the differences between the two (and yes, sadly, I'm going to quote an answer already on this question).
Flater said as much, and there is supporting literature from Martin Fowler, who elaborates on integration tests and on the narrower scope that integration testing can have.
Now we start getting to the root of the problem: an integration test can execute quickly or slowly.
If the integration tests execute quickly, then always run them whenever you run unit tests.
If the integration tests execute slowly, because they need to interact with outside resources like the file system, databases or web services, then they should be run during a continuous integration build, and run by developers on command. For instance, right before a code review, run all of the tests (unit, integration or otherwise) that apply to the code you have changed.
This is the best balance between developer time and finding defects early on in the development life cycle.