TDD – Is an IoC Container Needed in Unit Tests?

Tags: ioc-containers, tdd

While I do see the benefits of IoC containers (as I've used them a fair bit), I don't see how they can be incorporated within TDD unit tests (note, I'm only interested in unit tests here, not integration tests).

When I TDD, I refactor constructors to use IoC, so that I can inject fake dependencies wherever I might need to. Implementing a container implies that I'd be deviating from the red-green-refactor-repeat loop and adding code that wouldn't be covered by my tests.

Now let's say that you somehow (with great design prowess) managed to hook a container into your TDD life cycle. You certainly aren't meant to create instances in your unit test by resolving dependencies, as strictly speaking, that turns it into an integration test (bringing in multiple production components).

So my questions are:

1) In what scenario might you need a container while unit testing within TDD?

2) Assuming a valid scenario for (1) exists, how would you go about incorporating a container without breaking away from red-green-refactor-repeat?

I should clarify that by 'need', I'm talking about a stage you'd get to where manually managing DI gets tedious because you have a massive object graph.

IMPORTANT: I'm not asking about containers for your test. I'm asking strictly about production containers. I have a feeling that an IoC container cannot be implemented in a TDD life cycle without breaking away from red-green-refactor-repeat (rgrr), and that managing the container would have to be done in a sort of parallel way, if that makes sense.

Best Answer

  1. In what scenario might you need a container while unit testing within TDD?

I don't see how IoC containers factor into this. Even if you were taking a pure DI approach where you write explicit methods to instantiate your components (which in turn chains to all of that component's dependencies' methods), your question would remain the same: how do you test those pure DI methods?

Whether you use a container or a set of prebuilt Pure DI methods that instantiate your objects for you, it serves the same purpose, and you would write the same test for it.
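To make that equivalence concrete, here's a minimal sketch in Python. All the names (`Repository`, `Service`, `compose_service`, the `Container` class itself) are hypothetical stand-ins, not any particular library; the point is only that the pure DI method and the container registrations build the same object graph.

```python
# Hypothetical components to illustrate the two wiring styles.
class Repository:
    def get(self):
        return "data"

class Service:
    def __init__(self, repo):
        self.repo = repo

    def run(self):
        return self.repo.get()

# Pure DI: an explicit composition method that builds the graph by hand.
def compose_service():
    return Service(Repository())

# Container: registrations that produce the same graph on resolve.
class Container:
    def __init__(self):
        self._factories = {}

    def register(self, key, factory):
        self._factories[key] = factory

    def resolve(self, key):
        return self._factories[key](self)

container = Container()
container.register("repo", lambda c: Repository())
container.register("service", lambda c: Service(c.resolve("repo")))

# Both approaches yield an equivalent object graph, so any test you'd
# write against one applies equally to the other.
assert compose_service().run() == container.resolve("service").run()
```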

However, as I'll elaborate in this answer, I don't quite agree that you need to write tests for either of them.

note, I'm only interested in unit tests here, not integration tests

This sort of undoes your argument, or at least shifts the goal posts a bit. Unit tests don't care about a chained stack of dependencies, specifically because they are unit tests.

Any issues in wiring the IoC container will be spotted by your integration tests, since those tests are built specifically to test how your individual components are wired together.

To put it another way, a bad IoC registration doesn't actually invalidate the units themselves, and therefore their unit tests shouldn't fail.

Implementing a container implies that I'd be deviating from the red-green-refactor-repeat loop and adding code that wouldn't be covered by my tests.

There are two approaches here: the pedantically correct one, and the reasonably practical one.

The reasonably practical approach

Most devs I've worked with, including myself, tend to consider the configuration of the DI container to be startup logic, which falls outside the scope of the application logic that your tests cover.

Startup logic tends to be trivial, and is the runtime equivalent of linking projects at compile time. It's not the target for unit tests, because it's "meta logic", in the sense that it doesn't contain business logic but rather the setup logic for the framework in which you will be housing your business logic.

My reasoning for not testing this is that these tests wouldn't exist to test a behavior; they'd just be a "write it again" exercise to confirm that the expected object graph is the same in both your production code and your test.
I find it a waste of time for the same reason that I hate "confirm your email address again" textboxes, and I always end up copy/pasting the email address from the first box into the second one, which defeats the purpose entirely.

Another counterargument here, though slightly outdated nowadays, is that some older IoC setups used config files to configure their container, and you can make a reasonable case here for not needing to unit test values from a config file.

The pedantically correct approach

But if you choose to read 100% coverage as 100% coverage of everything, nothing is stopping you from writing tests that confirm the IoC container has been hooked up correctly - at least to the point of not throwing exceptions about missing dependency registrations (the specific behaviors are better tested using more concrete integration tests).

In essence, you could consider the wiring up logic as a unit in and of itself, and thus write tests for it to confirm that it correctly wired your object graph.
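Such a wiring test might look like the following sketch. The `Container` here is a hypothetical minimal stand-in (a real container exposes equivalent register/resolve operations); the shape of the test is what matters: iterate every registration and confirm it resolves without a missing-dependency error.

```python
# Minimal stand-in container (hypothetical; real containers expose
# equivalent register/resolve operations).
class Container:
    def __init__(self):
        self._factories = {}

    def register(self, key, factory):
        self._factories[key] = factory

    def resolve(self, key):
        if key not in self._factories:
            raise KeyError(f"missing registration: {key}")
        return self._factories[key](self)

    def registrations(self):
        return list(self._factories)

# Illustrative production components.
class Repository:
    pass

class Service:
    def __init__(self, repo):
        self.repo = repo

def configure(container):
    # The production wiring under test.
    container.register("repo", lambda c: Repository())
    container.register("service", lambda c: Service(c.resolve("repo")))

def test_every_registration_resolves():
    c = Container()
    configure(c)
    for key in c.registrations():
        c.resolve(key)  # raises if any dependency registration is missing

test_every_registration_resolves()
```

Note that this only confirms the graph can be built; whether each component behaves correctly is still the job of your more concrete integration tests.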

You certainly aren't meant to create instances in your unit test by resolving dependencies, as strictly speaking, that turns it into an integration test

If I create a container, register one real dependency (i.e. the one I'm unit testing), and then register a bunch of mocks, then I'm not turning my unit test into an integration test.

There's little purpose in using such a custom-built container in a unit test (though I can't guarantee it's never useful), but it doesn't inherently turn your unit test into an integration test.
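As a sketch of what that would look like in Python (with a hypothetical minimal `Container` and a hypothetical `Service` as the one real component), the test below resolves the unit under test through a container whose other registrations are all mocks; only one production class is exercised, so it remains a unit test.

```python
from unittest.mock import Mock

# Minimal stand-in container (hypothetical), just to show the shape.
class Container:
    def __init__(self):
        self._factories = {}

    def register(self, key, factory):
        self._factories[key] = factory

    def resolve(self, key):
        return self._factories[key](self)

# The one real component under test.
class Service:
    def __init__(self, repo):
        self.repo = repo

    def run(self):
        return self.repo.get().upper()

def test_service_with_mocked_dependency():
    c = Container()
    # Fake dependency: a mock standing in for the real repository.
    c.register("repo", lambda c: Mock(get=lambda: "data"))
    # Real unit under test, resolved through the container.
    c.register("service", lambda c: Service(c.resolve("repo")))
    assert c.resolve("service").run() == "DATA"

test_service_with_mocked_dependency()
```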