How many tests per method?
Well, the theoretical and highly impractical maximum is the N-Path complexity (assuming the tests all cover different paths through the code ;)). The minimum is ONE. Per public method, that is: we don't test implementation details, only the external behavior of a class (return values & calls to other objects).
You quote:
*And the thought of testing each of your methods with its own test method (in a 1-1 relationship) will be laughable.*
and then ask:
So if creating a test for each method is 'laughable', how/when do you choose what you write tests for?
But I think you misunderstood the author here:
The idea of having one test method per method in the class under test is what the author calls "laughable".
(For me at least) it's not about 'less', it's about 'more'.
So let me rephrase it as I understood him:
And the thought of testing each of your methods with ONLY ONE METHOD (its own test method in a 1-1 relationship) will be laughable.
To quote your quote again:
When you realize that it's all about specifying behaviour and not writing tests, your point of view shifts.
When you practice TDD you don't think:
I have a method calculateX($a, $b); and it needs a test testCalculateX that tests EVERYTHING about the method.
What TDD tells you is to think about what your code SHOULD DO like:
I need to calculate the bigger of two values (first test case!) but if $a is smaller than zero then it should produce an error (second test case!) and if $b is smaller than zero it should .... (third test case!) and so on.
You want to test behaviors, not just single methods without context.
That way you get a test suite that is documentation for your code and REALLY explains what it is expected to do, maybe even why :)
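As a rough sketch of what those behavior-driven cases might look like (using Python's unittest here, since, as noted below, the idea transfers directly between xUnit frameworks; `calculate_x` and its error rules are invented for illustration):

```python
import unittest

# Hypothetical function under test: returns the bigger of two values,
# rejecting negative inputs (assumed behavior, mirroring the prose above).
def calculate_x(a, b):
    if a < 0:
        raise ValueError("a must not be negative")
    if b < 0:
        raise ValueError("b must not be negative")
    return a if a > b else b

class CalculateXTest(unittest.TestCase):
    # One test method per BEHAVIOR, not one per method under test.
    def test_returns_the_bigger_of_two_values(self):
        self.assertEqual(calculate_x(3, 7), 7)

    def test_rejects_negative_first_argument(self):
        with self.assertRaises(ValueError):
            calculate_x(-1, 5)

    def test_rejects_negative_second_argument(self):
        with self.assertRaises(ValueError):
            calculate_x(5, -1)
```

Run it with `python -m unittest`. The test names alone already read like a specification of the method.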
How do you go about deciding which piece of your code you create unit tests for?
Well, everything that ends up in the repository or anywhere near production needs a test. I don't think the author of your quotes would disagree with that, as I tried to show above.
If you don't have a test for it, it gets much harder (more expensive) to change the code, especially if it's not you making the change.
TDD is a way to ensure that you have tests for EVERYTHING, but as long as you WRITE the tests it's fine. Usually writing them on the same day helps, since you are not going to do it later, are you? :)
Response to comments:
a decent amount of methods can't be tested within a particular context because they either depend or are dependent upon other methods
Well, there are three things those methods can call:
Public methods of other classes
We can mock out other classes so we have a defined state there. We are in control of the context, so that's not a problem.
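For example (a sketch using Python's unittest.mock as the stand-in mocking tool; the Mailer/Greeter names are made up):

```python
from unittest import mock

# Hypothetical collaborator: in production this might talk to an SMTP server.
class Mailer:
    def send(self, to, body):
        raise RuntimeError("don't talk to the network in a unit test")

# Class under test: its external behavior is
# "greet() asks the mailer to send a greeting".
class Greeter:
    def __init__(self, mailer):
        self.mailer = mailer

    def greet(self, name):
        self.mailer.send(to=name, body="Hello " + name)

# Replace the real collaborator with a mock: defined state, no side effects.
mailer = mock.Mock(spec=Mailer)
Greeter(mailer).greet("alice")
mailer.send.assert_called_once_with(to="alice", body="Hello alice")
```

The test only pins down the interaction with the collaborator, which is exactly the "calling other objects" part of the external behavior mentioned at the top.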
Protected or private methods on the same class
Anything that isn't part of the public API of a class doesn't get tested directly, usually.
You want to test behavior, not implementation. Whether a class does all its work in one big public method or in many smaller protected methods that get called is implementation. You want to be able to CHANGE those protected methods WITHOUT touching your tests, because your tests will break if your changes change behavior! That's what your tests are there for: to tell you when you break something :)
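A small illustration of why that matters (Python sketch; the classes and the 20% tax rule are invented): both versions below satisfy the same behavioral check, so refactoring the helper never forces a test change.

```python
# Version 1: all the work in one big public method.
class PriceCalculator:
    def total(self, prices):
        return round(sum(prices) * 1.2, 2)  # 20% tax, assumed for illustration

# Version 2: same behavior, refactored into a "private" helper.
class PriceCalculatorRefactored:
    def total(self, prices):
        return self._with_tax(sum(prices))

    def _with_tax(self, net):  # implementation detail, never tested directly
        return round(net * 1.2, 2)

# One behavioral check covers BOTH implementations.
def check_total(calculator):
    assert calculator.total([10.0, 5.0]) == 18.0

check_total(PriceCalculator())
check_total(PriceCalculatorRefactored())
```

If the check broke after swapping versions, that would mean the behavior changed, which is precisely what you want your tests to tell you.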
Public methods on the same class
That doesn't happen very often, does it? And if it does, like in the following example, there are a few ways of handling it:
$stuff = new Stuff();
$stuff->setBla(12);
$stuff->setFoo(14);
$stuff->execute();
That the setters exist and are not part of the execute method signature is another topic ;)
What we can test here is whether execute blows up when we set the wrong values. That setBla throws an exception when you pass a string can be tested separately, but if we want to test that those two allowed values (12 & 14) don't work TOGETHER (for whatever reason), then that's one test case.
If you want a "good" test suite you can, in PHP, maybe(!) add a @covers Stuff::execute annotation to make sure you only generate code coverage for this method; the other stuff that is just setup needs to be tested separately (again, if you want that).
So the point is: maybe you need to create some of the surrounding world first, but you should be able to write meaningful test cases that usually only span one or maybe two real functions (setters don't count here). The rest can be either mocked away or tested first and then relied upon (see @depends).
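Sketched in Python terms to mirror the PHP snippet above (the Stuff class and its rules are invented): the setter validation gets its own test case, and the "these values don't work together" rule is one further test case.

```python
import unittest

# Invented stand-in for the PHP example: set_bla rejects non-integers,
# and execute() rejects the specific combination 12 & 14.
class Stuff:
    def __init__(self):
        self.bla = None
        self.foo = None

    def set_bla(self, value):
        if not isinstance(value, int):
            raise TypeError("bla must be an integer")
        self.bla = value

    def set_foo(self, value):
        self.foo = value

    def execute(self):
        if (self.bla, self.foo) == (12, 14):
            raise ValueError("12 and 14 don't work together")
        return "ok"

class StuffTest(unittest.TestCase):
    # The setter's own contract, tested separately.
    def test_set_bla_rejects_strings(self):
        with self.assertRaises(TypeError):
            Stuff().set_bla("not a number")

    # The combined rule: one test case spanning the setup plus execute().
    def test_execute_rejects_12_and_14_together(self):
        stuff = Stuff()
        stuff.set_bla(12)
        stuff.set_foo(14)
        with self.assertRaises(ValueError):
            stuff.execute()
```

The setters are just setup here; the real function under test in the second case is execute().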
*Note: The question was migrated from SO and was initially about PHP/PHPUnit; that's why the sample code and references are from the PHP world. I think this is also applicable to other languages, as phpunit doesn't differ that much from other xUnit testing frameworks.*
Short Answer: Absolutely positively.
Long Answer: Unit tests are one of the most important practices I try to influence at my place of work (a large bank, FX trading). Yes, they are extra work, but it's work that pays back again and again. Automated unit tests not only help you actually execute the code you're writing and verify your expectations, but they also act as a kind of watchdog for future changes that you or someone else might make: test breakage will result when someone changes the code in undesirable ways.

I think the relative value of unit tests declines in correlation with the level of expected change and growth in a code base, but the initial verification of what the code does makes them worthwhile even where the expected change is low. Unit test value also depends on the cost of defects. If the cost (where cost is loss of time/money/reputation/future effort) of a defect is zero, then the relative value of a test is also zero; however, this is almost never the case in a commercial environment.
We generally don't hire people anymore who don't routinely create unit tests as part of their work - it's just something we expect, like turning up every day. I've not seen a pure cost benefit analysis of having unit tests (someone feel free to point me to one), however I can say from experience that in a commercial environment, being able to prove code works in a large important system is worthwhile. It also lets me sleep better at night knowing that the code I've written provably works (to a certain level), and if it changes someone will be alerted to any unexpected side effects by a broken build.
Test-driven development, to my mind, is not a testing approach. It's actually a design approach/practice whose output is the working system and a set of unit tests. I'm less religious about this practice, as it's a skill that is quite difficult to develop and perfect. Personally, if I'm building a system and I don't have a clear idea of how it will work, I will employ TDD to help me find my way in the dark. However, if I'm applying an existing pattern/solution, I typically won't.
In the absence of mathematical proof to you that it makes sense to write unit tests, I encourage you to try it over an extended period and experience the benefits yourself.
Best Answer
Note that if you're perfectly happy with the style of descriptions you used to write in Python, there is no reason to change that. Put the name of the tested block within the `describe`, and put your description in the `it`. If it's clear to you, it doesn't matter what the designers of the test framework had in mind.

If you're for some reason unhappy with porting your style to JavaScript, then stick with `it('should do this or that')`, and just below the `it`, include the complete description as a comment.

When reading a test, for instance in the context where a change in the code broke it, either you'll understand its purpose just by looking at its name, or you'll read the long comment. This style may even be preferable to having only the long description: if the test is recent or if it concerns a part you were working on recently, chances are you won't need the whole description, and having a short name will save you time.