Unit-testing – How to know if I have enough unit test coverage to remove an integration test

integration-tests, test-coverage, testing, unit-testing

I'm working on a legacy system (by that I mean it was written without tests). We've tried to test some of the system by writing integration tests that test functionality from the outside.

This gives me some confidence to refactor parts of the code without worrying about breaking it. The problem is that these integration tests require a deploy (2+ minutes) and take many minutes to run. They are also a pain to maintain: each one covers thousands of lines of code, and when one breaks it can take hours to debug why.

I've been writing lots of unit tests for the functional changes I've been making lately, but before I commit I always do a fresh deploy and run all the integration tests, just to make sure I didn't miss anything. At this point I know my unit tests and some of the integration tests overlap in what they cover.
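In pytest terms (purely for illustration; any runner with tags or categories works the same way), the kind of split I mean between the two suites looks roughly like this, with the marker name and test bodies as placeholders:

```python
# test_example.py -- minimal sketch of separating fast unit tests from slow
# integration tests with a pytest marker. Register the marker once, e.g. in
# pytest.ini:
#   [pytest]
#   markers =
#       integration: slow end-to-end tests that need a deployed environment
import pytest

def test_discount_rounds_down():
    # Unit test: pure logic, no deploy needed. Runs before every commit via:
    #   pytest -m "not integration"
    assert int(9.99) == 9

@pytest.mark.integration
def test_order_flow_end_to_end():
    # Integration test: assumes a deployed environment. Run after a deploy via:
    #   pytest -m integration
    pytest.skip("placeholder: would drive the deployed system here")
```

That keeps the pre-commit loop fast while the full integration suite still runs against a real deploy.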

How do I know when my good unit tests are adequately covering a bad integration test so that I can delete that integration test?

Best Answer

The easiest metric is to ask, "When was the last time this integration test legitimately failed?" If it has been a long time, and a lot of changes have gone through since the integration test last failed, then the unit tests are probably doing a good enough job. If the integration test has failed recently, then there was a defect the unit tests did not catch.
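If your test runs produce JUnit-style XML reports (most runners can, e.g. `pytest --junitxml=results.xml`), a small script can keep that question answerable per test and also flag tests that never fail. A sketch, assuming one XML file per recorded run in a results directory (the directory name is just an example):

```python
# retire_candidates.py -- sketch: scan JUnit XML reports from past runs, count
# failures per test, and list tests that have not failed in any recorded run.
import glob
import xml.etree.ElementTree as ET
from collections import Counter

runs = 0
failures = Counter()
seen = set()

for path in sorted(glob.glob("nightly-results/*.xml")):  # hypothetical results directory
    runs += 1
    root = ET.parse(path).getroot()
    for case in root.iter("testcase"):
        test_id = f"{case.get('classname')}.{case.get('name')}"
        seen.add(test_id)
        # A <failure> or <error> child means the test did not pass in this run.
        if case.find("failure") is not None or case.find("error") is not None:
            failures[test_id] += 1

print(f"Scanned {runs} runs")
for test_id, count in failures.most_common():
    print(f"  {test_id}: failed in {count}/{runs} runs")

never_failed = sorted(seen - set(failures))
print(f"{len(never_failed)} tests never failed; review these against your unit coverage:")
for test_id in never_failed:
    print(f"  {test_id}")
```

Tests that keep failing and pointing at real defects are earning their keep; tests that have not failed across many runs are the ones worth comparing against your unit tests.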

My preference would generally be to increase the robustness of the integration tests to the point where they can be run reliably unattended. If they take a long time to run, run them overnight; they are still valuable even if they only run occasionally. If the tests are too fragile or require manual intervention, it may not be worth the time spent keeping them running, and you may consider discarding the ones that succeed most often.
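Before actually deleting an integration test, one complementary mechanical check is coverage overlap: run the unit suite and the candidate integration test separately under coverage and see whether the lines the integration test exercises are also executed by the unit tests. A rough sketch, assuming a Python codebase and coverage.py JSON reports (the report file names are whatever you chose when generating them):

```python
# compare_coverage.py -- sketch: report lines hit by the integration test but not
# by the unit suite. Assumes two reports were produced beforehand, for example:
#   coverage run -m pytest -m "not integration" && coverage json -o unit.json
#   coverage run -m pytest tests/integration/test_big_flow.py && coverage json -o integration.json
import json

def executed_lines(report_path):
    """Map source file path -> set of executed line numbers from a `coverage json` report."""
    with open(report_path) as f:
        report = json.load(f)
    return {path: set(data["executed_lines"]) for path, data in report["files"].items()}

unit = executed_lines("unit.json")                # hypothetical report names
integration = executed_lines("integration.json")

for path, lines in integration.items():
    uncovered = lines - unit.get(path, set())
    if uncovered:
        print(f"{path}: {len(uncovered)} lines reached only by the integration test")
```

If the integration test reaches nothing the unit tests do not, and it has not legitimately failed in a long time, that is a reasonable signal it can go. Coverage overlap alone is not proof, though: an integration test may be asserting on behaviour (wiring, configuration, end-to-end data flow) that line coverage cannot see.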