Legacy Code – Does Adding Unit Tests Make Sense?

Tags: c++, legacy, tdd, unit-testing

  • I'm talking about unit tests in the TDD sense. (Not automated "integration" tests, or whatever you prefer to call them.)
  • Legacy code as in: (C++) code without tests. (see: Michael Feathers' Working Effectively with Legacy Code)
  • But also legacy code as in: code that our team has been working with for the last 5-10 years, so we very often have quite a good idea of where to put things when we need to change something.
  • We do have unit tests in place (via Boost.Test) for some modules that came later or have been a "natural" fit for unit tests (common app specific containers, string-stuff, network helpers, etc.)
  • We do not yet have proper automated acceptance tests.

Now, recently I had the "pleasure" to implement 3 new user-facing features.

Each of those took me about 1-2 hours of getting up to speed with the code parts I needed to change, another 1-2 hours to implement the (little) code I needed to change, and another 1-2 hours to make sure the app ran correctly afterwards and did what it was supposed to do.

Now, I really added little code. (I think one method and a few call lines for each feature.)

Factoring out this code (via any of the methods suggested in WEwLC), so that a unit test would have made sense (and not been a complete tautology), would easily have taken another 2-4 hours, if not more. This would have added 50%-100% to the time for each feature, with no immediate benefit, as

  1. I did not need the unit test to understand anything about the code
  2. Manual testing is the same amount of work either way, as I still need to check that the code is correctly integrated into the rest of the app.

Granted, if, later on, "someone" came along and touched that code, they could theoretically benefit from that unit test. (Only theoretically, as that tested island of code would live in an ocean of untested code.)

So, "this time" I chose not to do the hard work of adding a unit test: the code changes needed to get that stuff under test would have been significantly more complex than the code changes needed to implement the feature correctly (and cleanly).

Is this something typical for strongly coupled legacy code? Am I lazy / do we set the wrong priorities as a team? Or am I prudent, only testing stuff where the overhead isn't too high?

Best Answer

You have to be pragmatic about these situations. Everything has to have a business value, but the business has to trust you to judge what the value of technical work is. Yes, there is always a benefit to having unit tests, but is that benefit great enough to justify the time spent?

I would argue it always is on new code, but on legacy code you have to make a judgement call.

Are you in this area of code often? Then there's a case for continual improvement. Are you making a significant change? Then there's a case that it is already new code. But if you're making a one-line change in a complex area that will probably not be touched again for a year, of course the cost (not to mention risk) of reengineering is too great. Just slap your one line of code in there and go take a shower quick.

Rule of thumb: Always think to yourself, "Do I believe that the business benefits more from this technical work that I feel I should do than the job they asked for which is going to be delayed as a result?"