C# – How to Differentiate Unit Tests from Integration Tests

c#, continuous-integration, integration-tests, xunit

In my C# solution, I have a Tests project containing unit tests (xUnit) that can run on every build. So far so good.

I also want to add integration tests, which won't run on every build but can run on request. Where do you put those, and how do you separate unit tests from integration tests? If they're in the same solution with the same [Fact] attributes, they'll run in exactly the same way.

What's the preferred approach? A second separate test project for integration tests?

Best Answer

The separation is not unit versus integration test. The separation is fast versus slow tests.

How you organize these tests to make them easier to run is really up to you. Separate folders or projects are a good start, but annotations such as xUnit's [Trait] attribute (test categories) work just as well.
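
As a minimal sketch of the trait approach (the "Category"/"Integration" names below are just a convention, not anything xUnit mandates):

    using Xunit;

    public class CalculatorTests
    {
        // Fast unit test: no trait, so it runs with every build.
        [Fact]
        public void Adds_two_numbers()
        {
            Assert.Equal(4, 2 + 2);
        }
    }

    public class OrderPersistenceTests
    {
        // Slow test: the trait lets the runner include or exclude it by name.
        [Fact]
        [Trait("Category", "Integration")]
        public void Saves_and_reloads_an_order()
        {
            // ... talk to a real database or web service here ...
        }
    }

Test runners can then group or filter on that trait, so fast and slow tests can live in the same project and still be run separately.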


I think there is a fundamental misconception here about what constitutes an integration test versus a unit test. The beginning of Flater's answer gives the difference between the two (and yes, sadly, I'm going to quote an answer already on this question).

Flater said:

The difference between unit tests and integration tests is that they test different things. Very simply put:

  • Unit tests test if one thing does what it's supposed to do. ("Can Tommy throw a ball?" or "Can Timmy catch a ball?")
  • Integration tests test if two (or more) things can work together. ("Can Tommy throw a ball to Timmy?")

And some supporting literature from Martin Fowler:

Integration tests determine if independently developed units of software work correctly when they are connected to each other. The term has become blurred even by the diffuse standards of the software industry, so I've been wary of using it in my writing. In particular, many people assume integration tests are necessarily broad in scope, while they can be more effectively done with a narrower scope.

(emphasis, mine). Later on he elaborates on integration tests:

The point of integration testing, as the name suggests, is to test whether many separately developed modules work together as expected.

(emphasis, his)

With regard to the "narrower scope" of integration testing:

The problem is that we have (at least) two different notions of what constitutes an integration test.

narrow integration tests

  • exercise only that portion of the code in my service that talks to a separate service
  • uses test doubles of those services, either in process or remote
  • thus consist of many narrowly scoped tests, often no larger in scope than a unit test (and usually run with the same test framework that's used for unit tests)

broad integration tests

  • require live versions of all services, requiring substantial test environment and network access
  • exercise code paths through all services, not just code responsible for interactions

(emphasis, mine)
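
To make the "narrow" case concrete, here is a sketch of a narrow integration test in the same xUnit style: the only production code exercised is the class that talks to the remote service, and the service itself is replaced by an in-process test double. WeatherClient, StubHandler, and the URL are invented for illustration; the pattern is what matters.

    using System.Net;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;
    using Xunit;

    // Hypothetical client under test: the only code that talks to the remote service.
    public class WeatherClient
    {
        private readonly HttpClient _http;
        public WeatherClient(HttpClient http) => _http = http;

        public async Task<string> GetForecastAsync(string city) =>
            await _http.GetStringAsync($"https://weather.example/forecast/{city}");
    }

    // In-process test double standing in for the remote service.
    internal class StubHandler : HttpMessageHandler
    {
        protected override Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken) =>
            Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent("sunny")
            });
    }

    public class WeatherClientTests
    {
        [Fact]
        public async Task Returns_the_forecast_from_the_service()
        {
            var client = new WeatherClient(new HttpClient(new StubHandler()));

            Assert.Equal("sunny", await client.GetForecastAsync("oslo"));
        }
    }

A test like this is scoped no larger than a unit test and runs just as quickly, because the remote service never enters the picture.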

Now we start getting to the root of the problem: an integration test can execute quickly or slowly.

If the integration tests execute quickly, then always run them whenever you run unit tests.

If the integration tests execute slowly because they need to interact with outside resources like the file system, databases, or web services, then they should run during a continuous integration build and be run by developers on command. For instance, right before a code review, run all of the tests (unit, integration, or otherwise) that apply to the code you have changed.

This strikes the best balance between developer time and finding defects early in the development life cycle.
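
With the trait convention sketched earlier, that on-demand split is a command-line filter away (assuming the "Category"/"Integration" names used above; the same filters can be wired into the continuous integration build):

    dotnet test --filter "Category!=Integration"
    dotnet test --filter "Category=Integration"

The first command gives fast local feedback on every build; the second runs the slow tests on request or as a CI step.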
