Unit testing – Should we exclude code from the code coverage analysis?

code-quality, unit-testing

I'm working on several applications, mainly legacy ones.
Currently, their code coverage is quite low: generally between 10 and 50%.

For several weeks now, we have been having recurring discussions with the Bangalore teams (most of the development is done offshore in India) about excluding packages or classes from Cobertura (our code coverage tool, although we are currently migrating to JaCoCo).

Their point of view is the following: since they will not write any unit tests for some layers of the application (1), those layers should simply be excluded from the code coverage measure. In other words, they want to limit the code coverage measure to the code that is tested or should be tested.
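
To make the discussion concrete, here is a minimal sketch of what such an exclusion looks like with the jacoco-maven-plugin, assuming a Maven build; the com/example/... patterns are placeholders, not our real packages (Cobertura's Maven plugin offers an equivalent instrumentation/excludes section):

    <plugin>
        <groupId>org.jacoco</groupId>
        <artifactId>jacoco-maven-plugin</artifactId>
        <configuration>
            <excludes>
                <!-- hypothetical patterns: remove the UI layer and the
                     Java beans from the coverage measure entirely -->
                <exclude>com/example/app/ui/**</exclude>
                <exclude>com/example/app/beans/**</exclude>
            </excludes>
        </configuration>
    </plugin>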

Also, when they work on unit tests for a complex class, the benefit – purely in terms of code coverage – will go unnoticed in a large application. Reducing the scope of the code coverage measure would make this kind of effort more visible, as the quick arithmetic below shows.
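
To illustrate their argument with invented but plausible numbers:

    application size:                      50,000 lines
    complex class, fully tested:              250 lines
    global coverage gain:        250 / 50,000 = 0.5 coverage points

    same effort, scope reduced to 10,000 "testable" lines:
    coverage gain:               250 / 10,000 = 2.5 coverage points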

The appeal of this approach is that we would have a code coverage measure that indicates the current status of the part of the application we consider testable.

However, my point of view is that we are, in a way, faking the figures. This solution is an easy way to reach a higher level of code coverage without any effort.
Another point that bothers me is the following: if we show a coverage increase from one week to the next, how can we tell whether this good news is due to the good work of the developers or simply to new exclusions?

In addition, we will no longer know exactly what is included in the code coverage measure. For example, if I have a 10,000-line application with 40% code coverage, I can deduce that 40% of my code base is tested (2). But what happens if we set exclusions? If the code coverage is now 60%, what exactly can I deduce? That 60% of my "important" code base is tested? How can I know exactly what that figure covers?
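
Put into numbers (again invented for illustration), the exclusions change the denominator, so the same amount of tested code yields a higher percentage:

    without exclusions:    4,000 covered / 10,000 total lines    = 40%
    excluding 3,333 lines: 4,000 covered /  6,667 measured lines ≈ 60%

The reported coverage jumps from 40% to 60% without a single new test being written.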

As far as I am concerned, I prefer to keep the "real" code coverage value, even if it is nothing to be cheerful about. In addition, thanks to Sonar, we can easily navigate our code base and find, for any module / package / class, its own code coverage. But of course, the global code coverage will still be low.

What is your opinion on this subject? How do you handle it on your projects?

Thanks.

(1) These layers are generally related to the UI / Java beans, etc.

(2) I know that's not true. In fact, it only means that 40% of my code base is executed when the tests run, not that it is correctly tested.

Best Answer

I generally exclude auto-generated code, such as the WCF clients that Visual Studio generates. There are usually a lot of lines of code there and we're never going to test them. Leaving them in makes it very demoralising: you increase testing on a large chunk of code elsewhere and only increase code coverage by 0.1%.
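
As a sketch only: with more recent Visual Studio versions, this kind of filtering can be expressed in a .runsettings file by excluding anything carrying GeneratedCodeAttribute, which the WCF proxy generator applies to the clients it emits (the exact schema may vary between Visual Studio versions):

    <RunSettings>
      <DataCollectionRunSettings>
        <DataCollectors>
          <DataCollector friendlyName="Code Coverage">
            <Configuration>
              <CodeCoverage>
                <Attributes>
                  <Exclude>
                    <!-- generated WCF clients are marked with this attribute -->
                    <Attribute>^System\.CodeDom\.Compiler\.GeneratedCodeAttribute$</Attribute>
                  </Exclude>
                </Attributes>
              </CodeCoverage>
            </Configuration>
          </DataCollector>
        </DataCollectors>
      </DataCollectionRunSettings>
    </RunSettings>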

I will also exclude data-layer interactions, as long as the team can say with certainty that this layer is as thin as it possibly can be. While you can argue that, if the layer is thin, it won't have a massive effect on the figures, it does leave a lot of components in the coverage report with 0% against them, which makes it harder to notice the ones we really need to worry about. The UI layer can be argued the same way, depending on the framework being used.

But, as a counterpoint, I will also exclude the unit tests themselves. They should always have ~100% coverage, and they amount to a large percentage of the code base, skewing the figures dangerously upwards.
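
In the same spirit, and again only as a sketch: the test assemblies themselves can be kept out of the measure with a ModulePaths exclusion inside the same <CodeCoverage> element shown above; the *.Tests.dll naming convention is an assumption about the project layout:

    <ModulePaths>
      <Exclude>
        <!-- assumes test projects follow a *.Tests.dll naming convention -->
        <ModulePath>.*\.Tests\.dll$</ModulePath>
      </Exclude>
    </ModulePaths>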