Test Automation – Disadvantages of Automated Testing

automation, tdd, test-automation, testing, unit-testing

There are a number of questions on this site that give plenty of information about the benefits that can be gained from automated testing. But I didn't see anything that represented the other side of the coin: what are the disadvantages? Everything in life is a tradeoff and there are no silver bullets, so surely there must be some valid reasons not to do automated testing. What are they?

Here are a few that I've come up with:

  • Requires more initial developer time for a given feature
  • Requires a higher skill level of team members
  • Increases tooling needs (test runners, frameworks, etc.)
  • Complex analysis required when a failed test is encountered – is this test obsolete due to my change, or is it telling me I made a mistake? (See the sketch after this list.)
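
To make that last point concrete, here is a minimal sketch (all names and values are hypothetical): a constant in the product code was changed, and a previously green test now fails, but the failure alone cannot tell you which of the two situations you are in.

```python
import unittest

# Hypothetical example: TAX_RATE was just changed from 0.19 to 0.20
# in the product code. The test below now fails, but the failure alone
# cannot tell you whether the change was an intended requirement
# update (making the test obsolete) or a typo (a genuine bug).

TAX_RATE = 0.20  # was 0.19 before the latest commit

def gross_price(net_price):
    """Add tax to a net price, rounded to cents."""
    return round(net_price * (1 + TAX_RATE), 2)

class GrossPriceTest(unittest.TestCase):
    def test_gross_price_applies_tax(self):
        # Written when the rate was 19%; fails after the change.
        self.assertEqual(gross_price(100.00), 119.00)

if __name__ == "__main__":
    unittest.main()
```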

Edit
I should say that I am a huge proponent of automated testing, and I'm not looking to be convinced to do it. I'm looking to understand what the disadvantages are so when I go to my company to make a case for it I don't look like I'm throwing around the next imaginary silver bullet.

Also, I'm explicitly not looking for someone to dispute my examples above. I am taking it as true that there must be some disadvantages (everything has trade-offs) and I want to understand what those are.

Best Answer

You pretty much nailed the most important ones. I have a few minor additions, plus the disadvantage of tests actually succeeding when you don't really want them to (see below).

  • Development time: With test-driven development, this is already accounted for in the case of unit tests, but you still need integration and system tests, which may require automation code as well. Code that is written once usually has to be tested at several later stages (see the sketch below for the extra scaffolding this implies).
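
As a minimal sketch of that extra automation code (all names here are hypothetical), even a trivial integration-level test drags in setup and teardown scaffolding that has to be written and maintained on top of the unit tests:

```python
import unittest

class CheckoutIntegrationTest(unittest.TestCase):
    """Integration-level test: the scaffolding around the assertion
    (environment setup and teardown) is extra automation code that
    TDD at the unit level does not account for."""

    def setUp(self):
        # In a real suite this might start a database container,
        # seed it with test data, and configure the service under
        # test; here a dict stands in for that environment.
        self.db = {"items": {"42": {"price": 100.0}}}

    def tearDown(self):
        # ...and the environment has to be torn down again.
        self.db.clear()

    def test_checkout_reads_item_price(self):
        self.assertEqual(self.db["items"]["42"]["price"], 100.0)

if __name__ == "__main__":
    unittest.main()
```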

  • Skill level: Of course, the tools have to be supported. But it's not only your own team: in larger projects you may have a separate testing team that writes tests checking the interfaces between your team's product and others'. So many more people need that deeper knowledge.

  • Tooling needs: you're spot on there. Not much to add to this.

  • Failed tests: This is the real bugger (for me anyway). There are a bunch of different reasons a test can fail, each of which can be seen as a disadvantage, and the biggest disadvantage is the time required to decide which of these reasons actually applies to your failed test.

    • failed, because of an actual bug. (just for completeness, as this is of course advantageous)
    • failed, because your test code itself has been written with a traditional bug (see the sketch after this list)
    • failed, because your test code has been written for an older version of your product and is no longer compatible
    • failed, because the requirements have changed and the behavior the test expects is no longer deemed 'correct'
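
A minimal sketch of the second reason, with hypothetical names: the product code is correct, but the failure points at it anyway, because the bug lives in the test itself.

```python
import unittest

def word_count(text):
    """Product code: count whitespace-separated words. Correct."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_word_count(self):
        # The string has 4 words; whoever wrote the test miscounted,
        # so this fails even though the product code is fine.
        self.assertEqual(word_count("the quick brown fox"), 5)

if __name__ == "__main__":
    unittest.main()
```
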
  • Non-failed tests: These are a disadvantage too, and can be quite bad. This happens mostly when you change things, and it comes close to what Adam answered: if you change something in your product's code but the tests don't account for it at all, they give you this "false sense of security".

    An important aspect of non-failed tests is that a change in requirements can render earlier behavior invalid. If you have decent traceability, the requirement change can be matched to your test code and you know you can no longer trust that test. Of course, maintaining this traceability is yet another disadvantage. And if you don't, you end up with a test that does not fail but actually verifies that your product works wrongly (see the sketch below). Somewhere down the road this will hit you, usually when and where you least expect it.
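
A minimal sketch of such a silently passing test (hypothetical names and values): the requirement changed, nobody updated the product code or the test, and the suite stays green while certifying the wrong behavior.

```python
import unittest

# The requirement changed from "orders over 100 ship free" to
# "orders over 50 ship free", but neither the product code nor the
# test was updated. The test passes, quietly certifying behavior
# that is wrong under the current requirements: an order of 75 is
# still charged for shipping, and no test will ever complain.

FREE_SHIPPING_THRESHOLD = 100  # current requirement says 50

def shipping_cost(order_total):
    return 0.0 if order_total > FREE_SHIPPING_THRESHOLD else 4.99

class ShippingTest(unittest.TestCase):
    def test_large_orders_ship_free(self):
        # Green, but it verifies the *old* requirement.
        self.assertEqual(shipping_cost(150.0), 0.0)

if __name__ == "__main__":
    unittest.main()
```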

  • Additional deployment costs: You do not just run unit tests as a developer on your own machine. With automated tests, you want them executed at some central place on every commit, to find out when someone broke your work. This is nice, but it also needs to be set up and maintained (a minimal sketch follows).
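
As a minimal sketch of what that central execution can boil down to (the tests directory layout is an assumption), the ongoing cost is less this script than the build machine, commit triggers, and notifications that have to be maintained around it:

```python
import subprocess
import sys

def run_test_suite():
    # Discover and run every unittest test module under ./tests;
    # the directory layout is an assumption for this sketch.
    result = subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", "tests"]
    )
    return result.returncode

if __name__ == "__main__":
    # A non-zero exit code is what tells the CI server to mark
    # the commit as broken and notify the team.
    sys.exit(run_test_suite())
```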