Unit Testing TDD – Tools to Catch Fake Coverage

continuous-integration, tdd, unit-testing

Hypothetical scenario: a codebase is exercised by unit tests run by a TeamCity build bot, which also uses the built-in dotCover tool to provide coverage metrics. The build fails if less than X% of the code is covered.

An unscrupulous developer running NCrunch (or a pre-tested commit in TC) sees that his next check-in will drop the coverage percentage below the threshold and break the build, because he didn't write good unit tests (TDD or otherwise). So he writes a new test that runs some lines of code NCrunch shows aren't covered, but makes no assertions about their behavior. The test passes by default (because the executed code throws no exceptions), coverage stays above X%, and to find the problem, someone must discover the test, inspect it, and see that no assertions (or no meaningful assertions) are made during its execution.
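To make the scenario concrete, here is a minimal sketch of such a coverage-gaming test; `OrderProcessor` and its `Process` method are hypothetical names standing in for whatever uncovered production code the developer targets:

```csharp
using NUnit.Framework;

// Hypothetical class under test, standing in for real production code.
public class OrderProcessor
{
    public void Process(string order) { /* previously uncovered logic */ }
}

[TestFixture]
public class OrderProcessorTests
{
    // Runs the previously uncovered lines, so dotCover counts them as
    // covered, yet the test can only fail if Process() throws.
    [Test]
    public void Process_Order_Works()
    {
        new OrderProcessor().Process("order-123");
        // No Assert calls: the test passes by default.
    }
}
```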

Since we currently don't have a code review process, and it would be detrimental to productivity to perform reviews prior to every commit, I want this behavior to break the build. If the test runner runs a method marked with a [Test] attribute (we're using NUnit) and, upon completion, sees that the code has made no calls to NUnit's Assert methods, nor thrown the exception declared by [ExpectedException], TC should raise Cain. Ideally, the tool would be smart enough to also discover assertions that are true by definition, such as Assert.AreEqual(1, 1);, and fail the build in the same way.
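For illustration, the runtime check I have in mind would look something like the base fixture below. It assumes NUnit 2.x's static `Assert.Counter` property, which tracks how many assertions have executed (it was removed in NUnit 3, and its exact reset-on-read semantics vary by version, so this is a sketch rather than something I've verified end to end):

```csharp
using NUnit.Framework;

[TestFixture]
public abstract class AssertionGuardedFixture
{
    private int _assertsBefore;

    [SetUp]
    public void SnapshotAssertCount()
    {
        // Assumed NUnit 2.x API: count of assertions executed so far.
        _assertsBefore = Assert.Counter;
    }

    [TearDown]
    public void RequireAtLeastOneAssert()
    {
        // Fails any test that ran to completion without asserting.
        // Note: [ExpectedException] tests legitimately assert nothing,
        // so they would need to be exempted from this check.
        if (Assert.Counter - _assertsBefore == 0)
            Assert.Fail("Test made no calls to NUnit's Assert methods.");
    }
}
```

Even that wouldn't catch the tautological-assertion variant, which I suspect is where static analysis becomes unavoidable.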

Is there something "off-the-shelf" that I can plug into TeamCity, or a way I can configure its built-in runners/coverage metrics to find this type of bad behavior, short of performing a custom static code analysis? Of course we'll find it eventually, but in our environment (a small in-house dev team) there may be only one or two developers familiar with the full codebase of a given application, and so this blatant end run around test quality checks may not come to light until the guy responsible is long gone and someone else takes over primary ownership of the codebase.

Best Answer

IMHO, if this is happening often enough that you are looking for a tool, you either:

  • need to reevaluate your policy (you are asking for tests over something that resists testing, or tests that provide little value, or tests on a branch that is too early in the process to have things figured out far enough to write good test harnesses, dummies, and mocks),
  • or need to reevaluate the makeup of your team (testing is appropriate here, but your coworkers aren't very good / honest / whatever).

Personally, I have experienced far more of the first case than the second, so I generally only examine code coverage when a feature branch is about to be merged back into master/main.

There are many, many more ways to defeat code coverage than there are tools to calculate or enforce it, so this is time I would choose to spend elsewhere.
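For instance, even a hypothetical assert-counting check is trivially defeated by a tautological assertion (the test below is made up for illustration):

```csharp
using System;
using NUnit.Framework;

[TestFixture]
public class CoverageGamingExample
{
    // Covers code, makes an Assert call, and can never fail.
    [Test]
    public void Process_Order_Succeeds()
    {
        var result = DateTime.Now.Ticks; // stand-in for a real computed value
        Assert.AreEqual(result, result); // always true by definition
    }
}
```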
