Unit Tests Failing – Why Is It Seen as Bad?


In some organisations, apparently, unit testing is part of the software release process, and at any point in time all unit tests must pass. For example, there might be a screen showing all unit tests passing in green – which is supposed to be good.

Personally, I think this is not how it should be for the following reasons:

  1. It promotes the idea that code should be perfect and no bugs should exist – which in the real world is surely impossible for a program of any size.

  2. It is a disincentive to think up unit tests that will fail – or, at the very least, to come up with unit tests whose failures would be tricky to fix.

  3. If all unit tests pass at every point in time, then there is no big picture of the state of the software. There is no roadmap/goal.

  4. It deters writing unit tests up-front – before the implementation.

I would even suggest that releasing software with failing unit tests is not necessarily bad. At least then you know that some aspect of the software has limitations.

Am I missing something here? Why do organisations expect all unit tests to pass? Isn't this living in a dream world? And doesn't it actually deter a real understanding of code?

Best Answer

This question contains, IMHO, several misconceptions, but the main one I would like to focus on is that it does not differentiate between local development branches, the trunk, and staging or release branches.

In a local dev branch, it is normal to have some failing unit tests at almost any time. In the trunk, that is only acceptable to a degree, and already a strong signal to fix things ASAP. Note that failing unit tests in the trunk can disturb the rest of the team, since they force everyone to check whether their own latest change caused the failure.

In a staging or release branch, failing tests are a "red alert", showing that something went utterly wrong with some changeset when it was merged from the trunk into the release branch.

I would even suggest that releasing software with failing unit tests is not necessarily bad.

Releasing software with some known bugs below a certain severity is not necessarily bad. However, these known glitches should not cause a failing unit test. Otherwise, after each test run one would have to look through the (say) 20 failed unit tests and check one by one whether each failure was an acceptable one or not. This gets cumbersome and error-prone, and discards a huge part of the automation benefit of unit tests.

If you really have tests for acceptable, known bugs, use your unit testing tool's disable/ignore feature (so they are not run by default, only on demand). Additionally, add a low-priority ticket to your issue tracker so the problem won't be forgotten.
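
As an illustration, here is a minimal sketch using Python's built-in unittest module (just one possible framework; JUnit's @Disabled or NUnit's [Ignore] attribute serve the same purpose). The test names and the ticket reference are made up for the example:

```python
import unittest


class RoundingTests(unittest.TestCase):

    def test_rounds_positive_half_up(self):
        # Regular test: stays enabled and is expected to stay green.
        self.assertEqual(round(2.6), 3)

    @unittest.skip("Known low-severity bug, tracked as ticket PROJ-123; run on demand only")
    def test_rounds_negative_half_away_from_zero(self):
        # Documents an accepted, known glitch without turning the suite red:
        # Python's round() uses banker's rounding, so this would fail if run.
        self.assertEqual(round(-2.5), -3)

    @unittest.expectedFailure
    def test_float_sum_is_exact(self):
        # Alternative: mark as an expected failure, so the runner reports it
        # separately instead of failing the whole run.
        self.assertEqual(0.1 + 0.2, 0.3)


if __name__ == "__main__":
    unittest.main()
```

Either way, the default run stays green, the known limitation remains visible in the test report (as "skipped" or "expected failure"), and the ticket keeps it from being forgotten.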