Integration vs. unit tests
You should keep your unit tests and your integration tests completely separated. Your unit tests should test one thing and one thing only, in complete isolation from the rest of your system. A unit is loosely defined, but it usually boils down to a method or a function.
It makes sense to have tests for each unit so you know its algorithm is implemented correctly, and so that when an implementation is flawed you immediately know what went wrong and where.
Since you test in complete isolation while unit-testing, you use stub and mock objects to stand in for the rest of your application. This is where integration tests come in: testing all units in isolation is great, but you also need to know whether the units actually work together.
This means knowing whether a model is actually stored in the database, or whether a warning is really issued after algorithm X fails.
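As a minimal sketch of that split, assuming Python with unittest.mock (the save_model function and the repository shape are made up for illustration):

```python
from unittest.mock import Mock

# Hypothetical unit under test: it saves a model through whatever
# repository it is handed.
def save_model(model, repository):
    repository.save(model)
    return True

def test_save_model_unit():
    # Unit test: the repository is a mock, so no real database is touched.
    repo = Mock()
    assert save_model({"id": 1}, repo) is True
    repo.save.assert_called_once_with({"id": 1})

# The matching integration test would pass in a real repository backed by a
# real (test) database and assert that the row actually ends up in it.
```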
Test driven development
Taking a step back and looking at Test-Driven Development (TDD), there are several things to take into account:
- You write your unit test before you actually write the code that makes it pass.
- You make the test pass by writing just enough code to accomplish this.
- Now that the test passes, it is time to take a step back: is there anything to refactor with this new functionality in place? You can do this safely, since everything is covered by tests.
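A tiny illustration of this red-green-refactor loop, using a hypothetical slugify function in Python:

```python
# Step 1 (red): write the test first; it fails because slugify doesn't exist.
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3 (refactor): with the test green, the implementation can be
# restructured safely; any regression immediately turns the test red again.
```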
Integration first vs Integration last
Integration tests fit into this TDD cycle in one of two ways. I know of people who like to write them beforehand. They call an integration test an end-to-end test, and define an end-to-end test as one that completely covers the path of a use case (think of setting up an application, bootstrapping it, going to a controller, executing it, checking the result, the output, etc.). Then they start out with their first unit test, make it pass, add a second, make it pass, and so on. Slowly, more and more parts of the integration test pass as well, until the feature is finished.
The other style is building a feature unit test by unit test, and adding whatever integration tests are deemed necessary afterwards. The big difference between the two is that in the integration-test-first case you're forced to think about the design of the application up front. This aligns nicely with the premise that TDD is as much about application design as about testing.
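As a sketch of the integration-first style, here is what such an end-to-end test might look like in Python (all names, such as bootstrap_app, are hypothetical); it is written against the API we wish we had, before any production code exists:

```python
# Written first: the whole path of the use case, top to bottom.
def bootstrap_app(config):
    raise NotImplementedError  # nothing exists yet; the test starts out red

def test_signup_end_to_end():
    app = bootstrap_app(config="test")        # set up and bootstrap the app
    response = app.handle("POST", "/signup", {"email": "a@b.c"})
    assert response.status == 201             # controller executed, result checked

# You then build the inner units (validator, repository, controller, ...)
# one unit test at a time; more and more of this test's path starts working
# until it finally goes green and the feature is finished.
```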
Practicalities
At my job, we have all our tests in the same project, but in different groups. The continuous integration tool runs the tests marked as unit tests first; only if those succeed are the slower integration tests (slower because they make real requests, use real databases, etc.) executed as well.
We usually use one test file per class, by the way.
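One way to get that kind of split, sketched with pytest markers (the integration marker name is our own convention, and has to be registered in pytest.ini):

```python
import pytest

def test_parse_price():
    # No marker: this is a fast unit test and runs in every edit-test cycle.
    assert int("42") == 42

@pytest.mark.integration  # register in pytest.ini: markers = integration
def test_orders_roundtrip_against_real_database():
    ...  # would talk to a real test database, so it's kept out of the fast run
```

The CI tool can then run `pytest -m "not integration"` first and, only if that succeeds, `pytest -m integration`.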
Suggested reading
- Growing Object-Oriented Software, Guided by Tests: an extremely good example of the integration-test-first methodology.
- The Art of Unit Testing: with Examples in .NET: the title says it all :D A very good book on the principles behind unit testing.
- Robert C. Martin on TDD (free articles): do read the first two articles he links there as well.
It's comparing apples and oranges.
Integration tests, acceptance tests, unit tests, behaviour tests - they are all tests, and they will all help you improve your code, but they are also quite different.
I'll go over each kind of test as I see it and hopefully explain why you need a blend of all of them:
Integration tests:
Simply put: test that the different component parts of your system integrate correctly. For example, you might simulate a web service request and check that the result comes back. I would generally use real(ish) static data and mocked dependencies, to ensure the result can be consistently verified.
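For instance, a sketch in Python (the QuoteClient class is hypothetical): the transport is mocked and fed static data, but the client and its parsing logic are exercised together:

```python
import json
from unittest.mock import Mock

# Hypothetical pair of components under test: a client plus its parsing logic.
class QuoteClient:
    def __init__(self, transport):
        self.transport = transport

    def latest_price(self, symbol):
        raw = self.transport.get(f"/quotes/{symbol}")
        return json.loads(raw)["price"]

def test_quote_client_and_parser_integrate():
    # Mocked dependency fed real(ish) static data: consistently verifiable.
    transport = Mock()
    transport.get.return_value = '{"symbol": "ACME", "price": 12.5}'
    assert QuoteClient(transport).latest_price("ACME") == 12.5
    transport.get.assert_called_once_with("/quotes/ACME")
```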
Acceptance tests:
An acceptance test should directly correlate to a business use case. It can be huge ("trades are submitted correctly") or tiny ("filter successfully filters a list"); it doesn't matter. What matters is that it is explicitly tied to a specific user requirement. I like to focus on these for test-driven development, because it means we have a good reference manual mapping tests to user stories for dev and QA to verify.
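A self-contained sketch of an acceptance test for a story like "visiting /customers lists every customer" (the tiny in-memory App below stands in for a real framework's test client):

```python
# A deliberately tiny in-memory app, so the sketch is self-contained;
# in practice this would be your web framework's test client.
class Response:
    def __init__(self, status_code, text):
        self.status_code, self.text = status_code, text

class App:
    def __init__(self, customers):
        self.customers = customers

    def get(self, path):
        if path == "/customers":
            return Response(200, ", ".join(self.customers))
        return Response(404, "")

# Acceptance test: it mirrors the user story directly, end to end,
# from the simulated request down to the rendered output.
def test_customers_page_lists_all_customers():
    app = App(customers=["Ada", "Grace"])
    response = app.get("/customers")
    assert response.status_code == 200
    assert "Ada" in response.text and "Grace" in response.text
```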
Unit tests:
These cover small, discrete units of functionality that may or may not make up an individual user story by themselves. For example, a user story saying that we retrieve all customers when we access a specific web page can be an acceptance test (simulate hitting the web page and check the response), but it may also be backed by several unit tests: verify that security permissions are checked, verify that the database connection queries correctly, verify that any code limiting the number of results is executed correctly. These are all "unit tests" that don't amount to a complete acceptance test on their own.
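The companion unit tests for that same story would each pin down one of the smaller responsibilities; for example, with a hypothetical limit_results helper:

```python
# One small unit behind the story above: capping the number of results.
def limit_results(rows, maximum=100):
    return rows[:maximum]

def test_limit_results_caps_long_lists():
    assert len(limit_results(list(range(500)))) == 100

def test_limit_results_keeps_short_lists_intact():
    assert limit_results([1, 2, 3]) == [1, 2, 3]
```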
Behaviour tests:
These define what the flow of an application should be for a specific input. For example: "when a connection cannot be established, verify that the system retries the connection." Again, this is unlikely to be a full acceptance test, but it still lets you verify something useful.
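A sketch of such a behaviour test, driving the retry flow with a mock connection (connect_with_retries is a hypothetical unit under test):

```python
from unittest.mock import Mock

# Hypothetical unit under test: retries the connection a fixed number of times.
def connect_with_retries(connector, attempts=3):
    for attempt in range(attempts):
        try:
            return connector.connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise

def test_retries_until_connection_succeeds():
    connector = Mock()
    # Fail twice, then succeed: the flow under test is "retry on failure".
    connector.connect.side_effect = [ConnectionError, ConnectionError, "ok"]
    assert connect_with_retries(connector) == "ok"
    assert connector.connect.call_count == 3
```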
These definitions are all my own, formed through much experience of writing tests. I don't like to focus on the textbook approaches; rather, focus on what gives your tests value.
Best Answer
In general: yes, you should put integration tests and unit tests into different folders. Often, programmers don't draw a clear line between these two kinds of tests and just write whatever kind of test is useful. But integration tests tend to be slower, because they often involve:
- connecting to a real database or other external services
- making real network requests
- doing slow file system or other I/O
In contrast, a unit test mocks any expensive operations, so unit tests tend to run quickly (in fact, the slowest part of running a test is often the test framework itself).
When programmers work on the system, they are in an edit–test cycle. The faster they get test feedback, and the shorter that cycle is, the more productive they can be. So during development we only want to run the important tests that complete quickly. The complete test suite would only be executed as part of a QA process, e.g. on a CI server.
This means that large test suites should be categorized: can we select only the unit tests for a particular component? Can we exclude slow tests? One simple way to do this is to maintain different test suites in different directories. If you have only a few tests, a single directory is also fine, as long as a programmer can easily select a subset of tests.
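For example, assuming a pytest-style project, the split might look like this (the directory names are just a convention):

```
tests/
  unit/           # fast, isolated tests - run constantly while editing
  integration/    # slow tests against real services - run by the CI server

# edit-test cycle:  pytest tests/unit
# full QA run:      pytest tests
```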
Whatever allows a programmer to get feedback quickly is good. The most comprehensive test suite doesn't matter if it isn't executed regularly.