Java – Any tools/suggestions on how to refute code coverage quality argument

java, test-coverage, tools, unit-testing

I know some people may consider this question a duplicate or one that has been asked many times; if so, I would appreciate a link to the relevant questions that answer mine.

I have recently been in disagreement with some folks about code coverage. A group of people want our team to stop looking at code coverage altogether, on the argument that 100% coverage does not imply good-quality tests, and therefore does not imply good-quality code.

I have been able to push back by arguing that code coverage tells us, with certainty, what has not been tested, and helps us focus on those areas.

(The above has been discussed in a similar fashion in other SO questions like this one – https://stackoverflow.com/questions/695811/pitfalls-of-code-coverage)

Their counter-argument is that the team would then react by quickly writing low-quality tests, wasting time while adding no significant quality.
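To make that concern concrete, here is a hypothetical example (the ReportGenerator class and its test are made up for illustration) of the kind of coverage-only test they fear: it executes the code, so every line counts as covered, but it asserts nothing and would pass even if the logic were broken.

```java
import org.junit.jupiter.api.Test;

// Hypothetical class under test, stubbed here only so the example compiles.
class ReportGenerator {
    String generate(String quarter) {
        return "report for " + quarter;
    }
}

class ReportGeneratorTest {
    // A low-quality "coverage-only" test: it executes generate(), so the lines
    // show up as covered, but it verifies nothing and would still pass if the
    // method returned garbage or ignored its input.
    @Test
    void generateRunsWithoutAsserting() {
        new ReportGenerator().generate("2023-Q1");
    }
}
```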

While I understand their point of view, I am searching for a way to make a stronger case for code coverage by introducing tools/frameworks that address more coverage criteria (function, statement, decision, branch, condition, state, LCSAJ, path, jump path, entry/exit, loop, parameter value, etc.).
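To show why the criterion matters, here is a small hypothetical sketch (the Discounts class is made up): a single test executes every statement, so a statement coverage tool reports 100%, yet branch coverage would still flag the untested outcome.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class used only to illustrate the statement-vs-branch gap.
class Discounts {
    double apply(double price, boolean isMember) {
        double result = price;
        if (isMember) {
            result = price * 0.9; // 10% member discount
        }
        return result;
    }
}

class DiscountsTest {
    // This single test executes every statement in apply(), so statement
    // coverage reads 100%, yet the isMember == false outcome of the branch
    // is never taken -- branch coverage would expose the gap.
    @Test
    void memberGetsDiscount() {
        assertEquals(90.0, new Discounts().apply(100.0, true), 0.001);
    }
}
```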

What I am looking for is a suggested combination of such code coverage tools, along with the practices/processes to go with them, that can help me counter such arguments while feeling comfortable about my recommendation.

I would also welcome any comments/suggestions based on your experience/knowledge of how to counter such an argument, because, while subjective, code coverage has helped my team become more conscious of code quality and the value of testing.


Edit: To reduce any confusion about my understanding of the weaknesses of typical code coverage, I want to point out that I am not referring to statement coverage (lines of code executed) tools; there are plenty of those. In fact, here is a good article on everything that is wrong with it: http://www.bullseye.com/statementCoverage.html

I am looking for more than just statement or line coverage, going deeper into multiple coverage criteria and levels.

See: http://en.wikipedia.org/wiki/Code_coverage#Coverage_criteria

The idea is that if a tool can tell us our coverage based on multiple criteria, then that becomes a reasonable automated assessment of test quality. I am by no means trying to say that line coverage is a good assessment; in fact, that is the premise of my question.
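As a hypothetical sketch of how the criteria differ in strength (the Access class is made up): the two tests below cover both outcomes of the decision, yet condition coverage would still report that isOwner was never evaluated to true.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;
import org.junit.jupiter.api.Test;

// Hypothetical access check used to contrast decision and condition coverage.
class Access {
    boolean allowed(boolean isAdmin, boolean isOwner) {
        // The compound condition below is the decision being measured.
        if (isAdmin || isOwner) {
            return true;
        }
        return false;
    }
}

class AccessTest {
    // Takes the "true" outcome of the decision; isOwner is short-circuited
    // and never evaluated here.
    @Test
    void adminIsAllowed() {
        assertTrue(new Access().allowed(true, false));
    }

    // Takes the "false" outcome. Both decision outcomes are now covered,
    // but condition coverage notes that isOwner has only ever been false --
    // a gap that decision/branch coverage alone cannot express.
    @Test
    void strangerIsDenied() {
        assertFalse(new Access().allowed(false, false));
    }
}
```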


Edit:
OK, maybe I put it a bit too dramatically, but you get the point. The problem is about setting processes/policies consistently across all teams. The general fear is: how do you ensure the quality of tests, and how do you allocate guaranteed time for them, without any measure? That is why I like having a measurable metric which, backed by appropriate processes and the right tools, would allow us to improve code quality while knowing that time is not being forcibly spent on wasteful processes.


EDIT: So far what I have from the answers:

  • Code reviews should cover tests to ensure quality of tests
  • Test First strategy helps avoid tests that are written after the fact to simply increase coverage %
  • Exploring alternative tools that cover test criteria other than simply Statement/Line
  • Analysis of covered code vs. number of bugs found would help demonstrate the importance of coverage and make a better case
  • Most importantly, trust the team's judgment to do the right thing, and fight for their beliefs
  • Blocks Covered/# of tests – Debatable but holds some value

Thanks for the awesome answers so far. I really appreciate them. This thread is better than hours of brainstorming with the powers that be.

Best Answer

In my experience, code coverage is as useful as you make it. If you write good tests that cover all of your cases, then passing those tests means that you have met your requirements. In fact, that is exactly the idea behind Test Driven Development: you write the tests before the code, without knowing anything about the implementation (sometimes this means an entirely different team writes the tests). These tests are set up to verify that the final product does everything your specification says it should do, and THEN you write the bare minimum code to pass those tests.
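As a minimal sketch of that test-first flow (the Shipping class and its spec are made up for illustration), assuming JUnit 5: the tests are written from the specification alone and fail until the simplest passing implementation is added.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Step 1: tests written first, from a made-up spec ("orders of 50.00 or more
// ship free; otherwise shipping costs 5.00"). They fail until the class exists.
class ShippingTest {
    @Test
    void largeOrderShipsFree() {
        assertEquals(0.0, new Shipping().cost(75.0), 0.001);
    }

    @Test
    void smallOrderPaysFlatRate() {
        assertEquals(5.0, new Shipping().cost(20.0), 0.001);
    }
}

// Step 2: the bare-minimum implementation that makes both tests pass.
class Shipping {
    double cost(double orderTotal) {
        return orderTotal >= 50.0 ? 0.0 : 5.0;
    }
}
```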

The problem here, obviously, is that if your tests aren't strong enough, you will miss edge cases or unforeseen problems and write code which doesn't truly meet your specifications. If you are truly set on using tests to verify your code, then writing good tests is an absolute necessity, or you're really wasting your time.

I wanted to edit the answer here because I realized it didn't truly answer your question. I would look at that wiki article to see some of the stated benefits of TDD. It really comes down to how your organization works best, but TDD is definitely in use in the industry.