Integration Tests – Are In-Memory Databases a Form of Integration Testing?

efcore · integration-tests · mocking · unit-testing · xunit

I have looked through most of the answers given regarding using an in-memory database for unit tests, but I could not find one that was clear to me.

I am writing unit tests to cover parts of our codebase. The codebase is written in ASP.NET Core and EF Core. I am using xUnit and Moq.

I have read and understood the distinction between unit tests and integration tests.

As I understand it, writing unit tests means testing code in isolation from its dependencies, and to achieve this we can mock those dependencies so that only the code under test is exercised.

However, I found that setting up mocks for the dependencies I need is a bit more work, especially when those dependencies are repositories.

When I tried using an in-memory database, all I needed to do was set up the in-memory database, seed it, and create a test fixture, and it worked. In subsequent tests, all I have to do is create the dependencies and everything works just fine.
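For illustration, here is a minimal sketch of that setup, assuming the Microsoft.EntityFrameworkCore.InMemory provider; the AppDbContext, Product, and fixture names are hypothetical stand-ins for whatever the real codebase uses:

    using System;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;
    using Xunit;

    // Hypothetical entity and context standing in for the real codebase.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; } = "";
    }

    public class AppDbContext : DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
        public DbSet<Product> Products => Set<Product>();
    }

    // xUnit fixture: creates and seeds one in-memory database per test class.
    public class InMemoryDbFixture : IDisposable
    {
        public AppDbContext Context { get; }

        public InMemoryDbFixture()
        {
            var options = new DbContextOptionsBuilder<AppDbContext>()
                .UseInMemoryDatabase(Guid.NewGuid().ToString()) // unique name isolates test runs
                .Options;

            Context = new AppDbContext(options);
            Context.Products.Add(new Product { Name = "Sample" });
            Context.SaveChanges();
        }

        public void Dispose() => Context.Dispose();
    }

    public class ProductQueryTests : IClassFixture<InMemoryDbFixture>
    {
        private readonly InMemoryDbFixture _fixture;

        public ProductQueryTests(InMemoryDbFixture fixture) => _fixture = fixture;

        [Fact]
        public async Task Returns_seeded_product()
        {
            var product = await _fixture.Context.Products
                .SingleAsync(p => p.Name == "Sample");

            Assert.Equal("Sample", product.Name);
        }
    }

Because the fixture hands the test a real DbContext, EF Core's async query methods (SingleAsync, ToListAsync, and so on) work against the in-memory provider with no extra setup, which is exactly the convenience described above.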

To mock the repository, I have to set up a mock and its return values. And then there is the complexity of mocking an async repository, as explained here: How to mock an async repository with Entity Framework Core. For each test, I have to mock whichever repository is needed, which means more work.
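By contrast, here is a sketch of the mock-based setup with Moq, assuming a hypothetical IProductRepository abstraction (and the same hypothetical Product entity as in the previous sketch). Moq's ReturnsAsync keeps this simple case manageable; the extra complexity discussed in the linked answer arises when you mock DbSet<T> itself and need an async query provider:

    using System.Threading.Tasks;
    using Moq;
    using Xunit;

    // Same hypothetical entity as in the previous sketch.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; } = "";
    }

    // Hypothetical repository abstraction standing in for the real one.
    public interface IProductRepository
    {
        Task<Product> GetByIdAsync(int id);
    }

    public class ProductRepositoryMockTests
    {
        [Fact]
        public async Task Returns_product_from_mocked_repository()
        {
            var repository = new Mock<IProductRepository>();

            // ReturnsAsync wraps the value in a completed Task<Product>.
            repository.Setup(r => r.GetByIdAsync(42))
                      .ReturnsAsync(new Product { Id = 42, Name = "Sample" });

            var product = await repository.Object.GetByIdAsync(42);

            Assert.Equal(42, product.Id);
            repository.Verify(r => r.GetByIdAsync(42), Times.Once);
        }
    }

Every test that touches data needs a Setup like this for each repository method it calls, which is the per-test overhead the question is weighing against the one-time fixture setup.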

With all this in mind, would it be better if I just ditched mocking and used an in-memory database? Also, would this not be seen as integration testing, even though it is an in-memory database?

I created two versions of the same tests, one using the in-memory database and one using mocks. The in-memory database tests understandably took more time, but the difference is usually only about one second compared with the tests using mocks.

Best Answer

IMHO you are asking the wrong question. It does not matter whether what you create is called a unit test by some people or an integration test by others. What matters here is:

  • Is this test useful for your case? That is, will it help you avoid certain defects, and will it sufficiently narrow down the area of code where the root cause of a given defect may lie?

  • Is it fast enough, even when you are going to run several tests of this kind? (One second longer does not sound like much, but when the original test required 0.001 seconds and the new one requires 1.001 seconds, running 5000 tests of this kind makes a notable difference: roughly 84 minutes instead of 5 seconds.)

  • Is it maintainable, ideally more maintainable than the alternative approaches? This depends heavily on the tooling and on how well an "in-memory DB" is supported.

If you can answer all of these questions honestly with "yes", then go ahead.

Recommended: The Way of Testivus, which tells you, for example, to follow less dogma (a good recommendation not just for testing).