Unit-testing – Unit testing text output

Tags: design, junit, unit-testing

I have recently become responsible for a legacy tool that analyses code and provides a log as output. As part of the JUnit suite for this, there are ~100 tests that rely on successfully matching the produced log with a static expected log.

This means that any change to the log's format, however small, breaks the unit tests. I cannot decide whether that is a good or a bad thing.


Pros:

  • Quickly failing unit tests let me know that something has changed

Cons:

  • Tests have to be updated every time the log style is changed
  • A straight text comparison is a poor fit here, because the log contains timestamps. If a run doesn't take exactly as long as expected, the test fails.
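The timestamp problem in particular is fixable without restructuring the suite: normalize both the actual and the expected log before comparing, so time-dependent fragments can't cause spurious failures. A minimal sketch (the timestamp pattern and the `LogNormalizer` class name are assumptions; adapt the regex to whatever format the real log uses):

```java
import java.util.regex.Pattern;

public class LogNormalizer {
    // Matches timestamps like "2024-01-15 12:34:56.789" or "12:34:56".
    // This pattern is a guess -- adjust it to the tool's actual log format.
    private static final Pattern TIMESTAMP = Pattern.compile(
            "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}(\\.\\d+)?|\\d{2}:\\d{2}:\\d{2}");

    // Replace every timestamp with a fixed token so two runs of the
    // tool produce byte-identical text for comparison.
    public static String normalize(String log) {
        return TIMESTAMP.matcher(log).replaceAll("<TIMESTAMP>");
    }

    public static void main(String[] args) {
        String actual   = "2024-01-15 12:34:56 Foo.java failed: missing header";
        String expected = "2024-01-15 09:00:00 Foo.java failed: missing header";
        // The two runs differ only in when they happened, so after
        // normalization they compare equal.
        System.out.println(normalize(actual).equals(normalize(expected)));
    }
}
```

In the tests, this amounts to replacing `assertEquals(expected, actual)` with `assertEquals(normalize(expected), normalize(actual))`, which removes the timing brittleness while leaving everything else verbatim.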

I can't help but feel that these tests shouldn't be run. But at the same time I don't want to leave the log file process uncovered.

What is the solution here? Better unit tests? No unit tests? How can I do this without my tests being so brittle?

The log is written to a file and is not part of any logging framework: the tool generates the file after it runs, telling the user which files failed validation and why.

Best Answer

Short-term solution: Leave things as they are. Badly written, brittle tests are much, much better than not having tests. On a scale from "no tests" to "tests so reliable they make you weep and do all your work for you", your existing test suite is much closer to "weep" than to "nothing".

Middle-term solution: refactor. It's inconceivable that the analysis tool doesn't have an intermediate representation of its findings, before it's turned into a stream of text. Make that representation publicly accessible and rewrite the tests to assert things about that representation, not about the textual form. It sucks, but you only have to do it once, and from then on maintenance will be much, much easier, even when logic changes and new cases are added.