Functional Testing – Is More Time Spent on Testing Normal?

functional-testing, project-management, testing

Basically, we have three main projects: two of them are web services, and the other is a web application. While I'm satisfied with how much of our web services we cover with functional tests (all three projects have proper unit tests), the functional tests for the web application are taking a lot of developer time to implement. By a lot I mean two times, or sometimes more, the time it takes to implement the functionality being tested, unit tests included.

Our manager's policy is to test every single piece of functionality we add, even if it is not business critical (e.g., a new CRUD feature).

I do agree with testing all of the web services' functionality, because they are hard to test manually, and also these tests run fast and don't take much time to implement.
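To illustrate, a functional test for one of our services can be as small as the sketch below; it drives the service over HTTP with pytest and the requests library (the `/orders` endpoint and `BASE_URL` are hypothetical stand-ins, not our real API):

```python
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical base URL

def test_create_and_fetch_order():
    # Create a resource through the public HTTP API, the way a real client would.
    payload = {"customer_id": 42, "items": [{"sku": "ABC-1", "qty": 2}]}
    created = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)
    assert created.status_code == 201

    # Fetch it back and verify the round trip.
    order_id = created.json()["id"]
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["customer_id"] == 42
```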

So, what's the value in spending more time writing functional tests than writing system code, writing unit tests, and fixing QA tickets? Is this normal? Shouldn't we write functional tests only for critical functionality and let QA do regression tests on non-critical functionality?

Note: we are not developing medical software or NASA software or anything that critical.

Best Answer

Functional tests are very important. Yes, they take time to write, but if you are writing the right functional tests, they will be more than worth it.

There are a few good reasons to do automated functional tests on an application.

  • When a new feature is added to your web site, it lets you know right away if the changes made for that feature break any other functionality on your site.
  • It's documented knowledge of how the pieces of the application run and work together to achieve the business requirements.
  • When it's time to update a 3rd party library, you can update it and run your functional test suite to see if anything breaks. Instead of having to go through every page yourself, you can have a computer do it for you and give you a list of all the tests that broke.
  • Load testing! You can simulate thousands of simultaneous users hitting your site at once and see where it slows down and buckles under the pressure. You can learn how your web site behaves long before you get a late-night call that the site has crashed. (See the sketch after this list.)
  • Functional testing takes time to do manually. Yes, it takes a long time to write the cases, but if you had to sit down with a binder of 500 pages of tests to complete before you could ship the product, you'd wish you had the automated tests!
  • Testing documents go out of date fast. When a new feature is added, you have to make sure to update the master testing document. If someone skips some tests, you suddenly get bugs creeping into pages that are "done and tested". I currently work in an environment like that, and I can assure you, it's a nightmare.
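To make the load-testing point concrete, here is a minimal sketch using Locust, one common tool for this (my choice here, not something the question mentions); the `/dashboard` and `/reports` paths are hypothetical placeholders for whatever pages matter on your site:

```python
from locust import HttpUser, task, between

class SiteUser(HttpUser):
    # Each simulated user waits 1-3 seconds between actions.
    wait_time = between(1, 3)

    @task(3)  # weighted: hit three times as often as reports
    def view_dashboard(self):
        self.client.get("/dashboard")  # hypothetical page

    @task(1)
    def view_reports(self):
        self.client.get("/reports")  # hypothetical page
```

Run it with `locust -f loadtest.py --host https://staging.example.com` and ramp up the simulated user count until response times start to buckle; that's your capacity ceiling, found on your schedule instead of in production.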

In the end, yes, it takes time to write these cases, but you should take pride in writing them. They are your way of proving, beyond a shadow of a doubt, that your code works and that it works with all the other features out there. When QA comes to you and says there is a bug, you fix it, then add a test for it to your suite to show that it's fixed and to make sure it never happens again.
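Such a regression test pinned to a QA ticket might look like the sketch below (the ticket number, endpoints, and discount rule are all hypothetical):

```python
import pytest
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical base URL

def test_qa_1234_discount_not_applied_twice():
    # Regression test for hypothetical QA ticket #1234: submitting the same
    # discount code twice used to double the discount. This pins the fix in place.
    cart = requests.post(f"{BASE_URL}/carts",
                         json={"items": ["SKU-9"]}, timeout=5).json()
    for _ in range(2):
        requests.post(f"{BASE_URL}/carts/{cart['id']}/discounts",
                      json={"code": "SAVE10"}, timeout=5)

    final = requests.get(f"{BASE_URL}/carts/{cart['id']}", timeout=5).json()
    # The discount should be applied exactly once (10%), not twice.
    assert final["discount_total"] == pytest.approx(final["subtotal"] * 0.10)
```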

It is your safety net. When someone goes in and hijacks a stored procedure, making a small change so it'll work with their code, you'll catch that the change has broken three other features in the process. You'll catch it that night, not the night before the deadline!

As for writing functional tests only for system-critical functions: that won't give you the whole picture, and it will allow bugs to sneak through. All it takes is one little feature that isn't system critical but interacts indirectly with a system-critical function, and you have the potential for a bug to be introduced.