C# – How far to go with unit tests

asp.net-mvc · c# · .net

A question asked many times before, but with a specific slant towards MVC development.

I've been a very good boy and have been coding all my controller actions with corresponding unit tests, which has been great (if a little [read: a LOT] repetitive at times). To be honest, I actually created a little T4 template to write most of the bare bones of the initial unit tests, then tweaked them as appropriate. I'll admit I'm not quite sure how to handle tests for views that contain partial views – but that's a story for another question.

Now, the difficult part for me to decide upon is just how deep the coverage should be in my service layer. The reason is that some of my service methods (for better or worse) actually perform a variety of LINQ queries which then supply discrete information to subsequent logic within the method. I know I could (should??) break these methods down so that each LINQ statement becomes its own callable unit and is then composed within the method. However, in many instances there is never any reuse of these LINQ 'functions', so it feels like this would refactor the code out a level too far.
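To make the trade-off concrete, here's a sketch of the two shapes being described – all type and member names are invented for illustration, not taken from any real codebase. Shape 1 inlines the LINQ query, so a test can only assert the final result; Shape 2 extracts the query into its own internal member purely for testability, even though it's never reused:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical service – the entities and names are illustrative only.
public class OrderService
{
    private readonly IEnumerable<Order> _orders;
    public OrderService(IEnumerable<Order> orders) { _orders = orders; }

    // Shape 1: the LINQ query is inlined; a test can only assert the final sum.
    public decimal OverdueBalance(int customerId) =>
        _orders.Where(o => o.CustomerId == customerId && o.DueDate < DateTime.Today)
               .Sum(o => o.Amount);

    // Shape 2: the query is extracted purely for testability, despite no reuse.
    internal IEnumerable<Order> OverdueOrdersFor(int customerId) =>
        _orders.Where(o => o.CustomerId == customerId && o.DueDate < DateTime.Today);

    public decimal OverdueBalanceComposed(int customerId) =>
        OverdueOrdersFor(customerId).Sum(o => o.Amount);
}

public class Order
{
    public int CustomerId { get; set; }
    public DateTime DueDate { get; set; }
    public decimal Amount { get; set; }
}
```

With Shape 2 (plus `InternalsVisibleTo` for the test assembly), each query can get its own focused test – at the cost of exactly the extra indirection being worried about here.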

What I'm asking is: with complex logic occurring within a method, is it 'good enough' to have a test method that simply asserts the required result and/or expected error, or should every logical step be simulated and tested too? The way I'm seeing it, to do the testing correctly, the method logic (line by line) should be getting some sort of coverage too. That, however (in my naive opinion), could lead to a never-ending cycle of trying to keep the test and the implemented method so closely aligned (which I know they should be) as to create a cottage industry in the tests themselves.

I know my question may offend a few of the TDD devotees who will see this as a no-brainer. Not being in the TDD camp, this is a 'yes-brainer' for me, hence the question.

BTW – I had checked this out for ideas:

http://dotnetslackers.com/articles/aspnet/Built-in-Unit-Test-for-ASP-NET-MVC-3-in-Visual-Studio-2010-Part-1.aspx

looking fwd to the steady downvotes now 🙂

[edit] – for the benefit of the single (well, at the moment, single!!) 'close' voter: this question is not subjective. I'm looking for consensus on a very focused subject. I'm not attempting to stir up negative passions, and I'm not looking to expose flaws in the technology – I'm a HUGE fan. So please, drop a polite comment for my benefit if voting to close, as it may help me to restructure the question if there's ambiguity or misinformation. This question could benefit a large chunk of the MVC population.

thank you!!

jim

Best Answer

First off, what you are talking about doesn't sound quite like TDD. TDD implies a test-first approach, which is about driving the design of your system by following the pattern of Test -> Code -> Refactor. So perhaps your first problem is the purpose of your tests: are you writing them as you code? If so, I would expect that pretty much all of the logic in your production code relates back to some unit test. High code coverage is therefore an indirect result of applying TDD.

When you are doing TDD, you write just enough test code to motivate the code you want to write, and you ensure the test fails first. Basically, ask yourself: what is something this method needs to do? Then you code it, but just enough to make the test pass. If it isn't yet what you are looking for, you write additional tests and then refactor the method.
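As a sketch of that cycle – hypothetical names throughout, and NUnit-style attributes assumed for the test framework:

```csharp
using NUnit.Framework;

// Step 1: write a failing test that motivates the behaviour you want.
[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void Orders_of_100_or_more_get_a_ten_percent_discount()
    {
        var calc = new PriceCalculator();
        Assert.AreEqual(90m, calc.DiscountedTotal(100m));
    }
}

// Step 2: write just enough production code to make the test pass,
// then refactor with the test as a safety net.
public class PriceCalculator
{
    public decimal DiscountedTotal(decimal total) =>
        total >= 100m ? total * 0.9m : total;
}
```

The test name doubles as documentation of the rule it pins down, which is why naming matters so much.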

Code coverage after the fact is not an effective measure of your adherence to TDD, though you will typically find very high code coverage in code written using TDD, due to the fact that all of the code should have been motivated by some test.

TDD tests serve both to drive the design and to document and explain it to others in plain language (so how you name your tests is very important).

However, none of this rambling really answers your question directly, so I'll just say: you should aim for pretty high code coverage of service (non-UI) code, especially wherever there is non-trivial logic – and even better if the tests are written first ;-). The fact is (though some may disagree) that more tests are generally better. Many high-quality open source projects have far more test code than production code.

Additionally, tests should be written whenever:

  1. You are writing new code. Tests should drive and document your design and explain your assumptions about what the code should do. They should be written before you code.

  2. You found a bug. A failing test should demonstrate the bug; when the bug is fixed, the test should pass.

  3. You change code in a way that changes the nature of what a method or class does (though if a lot of tests fail when one area of the code changes, this could indicate brittle tests). This keeps the tests documenting the code correctly.
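For point 2, the pattern might look like this – a hypothetical bug and fix, with invented names and NUnit-style attributes assumed:

```csharp
using System.Text.RegularExpressions;
using NUnit.Framework;

[TestFixture]
public class SlugGeneratorTests
{
    // Reported bug: titles containing repeated spaces produced "hello--world".
    // This test fails against the buggy code and passes once it is fixed.
    [Test]
    public void Slug_collapses_repeated_whitespace()
    {
        Assert.AreEqual("hello-world", SlugGenerator.ToSlug("hello   world"));
    }
}

public static class SlugGenerator
{
    // The fix: replace each run of whitespace (not each character) with one hyphen.
    public static string ToSlug(string title) =>
        Regex.Replace(title.Trim().ToLowerInvariant(), @"\s+", "-");
}
```

The test stays in the suite afterwards as a permanent regression guard for that bug.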

Personally, I have found learning TDD an interesting challenge, and it takes time to develop a good "gut feeling" for it. Practice, practice, practice has been the best way to learn for me – that, and reading test code from open source projects, and now also contributing to them by writing new tests with my changes.