Unit testing – Ignoring unit tests: good and bad reasons, when and why

unit testing

Recently I began a new project to re-implement a core part of our automation. Since it is very important, I'm TDDing it so that I can test various basic scenarios as well as things we know the old system doesn't do.

In doing this, I find myself creating a few "sandbox" tests, which exercise "candidate" algorithms in complete isolation so I can choose the one that best meets requirements. I've also created some tests to figure out why other tests are failing; these are also coded in a "sandbox" manner and don't test actual production code (instead they contain a simplification of the production environment that I can alter until I find the cause of the problem).

Neither kind has continuing value for regression: once I pick an algorithm I may never use the others, and once a diagnostic test shows why another test is failing, I can correct the initial test (or the production code it exercises), verify correctness, and no longer care that the diagnostic test's hard-coded environment still fails.
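For concreteness, here is a minimal sketch of what such a candidate-comparison test can look like in pytest. The candidate functions and the k-th-smallest problem are hypothetical stand-ins; the point is the parametrization pattern:

```python
import heapq

import pytest

# Two hypothetical candidate algorithms for "k-th smallest element".
def candidate_sort_based(items, k):
    return sorted(items)[k]

def candidate_heap_based(items, k):
    return heapq.nsmallest(k + 1, items)[-1]

# One parametrized "sandbox" test exercises every candidate in isolation,
# so the same expectations apply to each while choosing between them.
@pytest.mark.parametrize("select", [candidate_sort_based, candidate_heap_based])
def test_kth_smallest_basics(select):
    assert select([5, 1, 4, 2, 3], 0) == 1
    assert select([5, 1, 4, 2, 3], 2) == 3
    assert select([7], 0) == 7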

When learning TDD I was taught that you NEVER delete a unit test; even if the test's original purpose has become irrelevant, the code within the test can be of use to other developers as a model, or the test can be refactored for some other purpose later. Instead, if a test has become redundant or irrelevant and should no longer be run (whether it passes or fails), it should be ignored rather than deleted.
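"Ignoring" here usually means the framework's skip/disable mechanism rather than deletion. In pytest, for example (the reason string is illustrative):

```python
import pytest

# The test stays in the codebase as a model, but is excluded from runs.
@pytest.mark.skip(reason="Superseded by the interface-level tests; kept as a reference")
def test_legacy_sandbox_environment():
    # Body intentionally elided; a skipped test never executes.
    ...
```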

Is this the "generally-accepted" stance on ignoring tests? Does this have the potential for abuse (e.g. "I don't think it matters anymore, and it fails, so I'll just ignore it")?

Best Answer

once I pick an algorithm I may never use the others,

False for well-written tests that exercise the interface of the algorithm.

If your tests exercise implementation details, and the implementation no longer applies, then the tests no longer apply. Delete them.
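A sketch of the distinction, with hypothetical names: a contract test written against the public interface survives a swap of algorithm, while a test of internals dies with the implementation it describes.

```python
def kth_smallest(items, k):
    """Public interface; the algorithm behind it is free to change."""
    return sorted(items)[k]   # today's implementation; tomorrow maybe quickselect

def test_interface_contract():
    # Exercises only the public contract, so it stays valid no matter
    # which algorithm sits behind kth_smallest.
    assert kth_smallest([5, 1, 4, 2, 3], 0) == 1
    assert kth_smallest([5, 1, 4, 2, 3], 4) == 5

# By contrast, a test welded to one implementation's internals, e.g. a
# quickselect pivot helper, dies with that implementation and can be deleted:
#
# def test_pivot_choice():
#     assert _choose_pivot([5, 1, 4]) == 4
```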

and once a test shows why another test is failing, I can correct the initial test (or its exercised production code) and verify correctness, and I no longer care that the other test's hard-coded exercise environment still fails.

Um. You should fix the "other test's hard-coded exercise environment" so that it's meaningful, useful, and correct.

If a test is wrong (even if redundant) it should be corrected so that it's right.

TDD means the tests are right, first and foremost. The code will become right eventually.

"and it fails, so I'll just ignore it"

This is not a potential for abuse. This is abuse.

You cannot ignore a failing test. If the test is wrong it must be fixed.

Removing tests that seem redundant is a waste of time. Extra testing never hurt anybody. Indeed, it may uncover super-subtle errors that pass one version of a test but fail another version of a superficially similar test. This may indicate a design error in the interface rather than an implementation bug.
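As a hedged illustration of how superficially similar tests can diverge (the money-rounding helper is hypothetical; the subtlety is Python float representation combined with round-half-to-even):

```python
def round_to_cents(amount):
    # Hypothetical helper: intended to round a price to two decimals.
    return round(amount, 2)

def test_round_down_case():
    assert round_to_cents(2.344) == 2.34   # passes

def test_round_half_case():
    # A superficially similar case that exposes a subtle error: 2.345 is
    # stored as roughly 2.34499999..., so round() yields 2.34, not 2.35.
    # The failure points at an interface design question (should the API
    # take Decimal instead of float?) rather than a mere coding slip.
    assert round_to_cents(2.345) == 2.35   # fails
```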
