How many tests per method?
Well, the theoretical and highly impractical maximum is the N-path complexity (assuming the tests all cover different ways through the code ;)). The minimum is ONE! Per public method, that is; we don't test implementation details, only the external behaviors of a class (return values & calls to other objects).
You quote:
*And the thought of testing each of your methods with its own test method (in a 1-1 relationship) will be laughable.*
and then ask:
So if creating a test for each method is 'laughable', how/when do you choose what you write tests for?
But I think you misunderstood the author here:
The idea of having one test method
per one method in the class to test
is what the author calls "laughable".
(For me at least) it's not about 'less', it's about 'more'.
So let me rephrase it as I understood him:
And the thought of testing each of your methods with ONLY ONE METHOD (its own test method in a 1-1 relationship) will be laughable.
To quote your quote again:
When you realize that it's all about specifying behaviour and not writing tests, your point of view shifts.
When you practice TDD you don't think:
I have a method calculateX($a, $b);
and it needs a test testCalculateX
that tests EVERYTHING about the method.
What TDD tells you is to think about what your code SHOULD DO like:
I need to calculate the bigger of two values (first test case!) but if $a is smaller than zero then it should produce an error (second test case!) and if $b is smaller than zero it should .... (third test case!) and so on.
You want to test behaviors, not just single methods without context.
That way you get a test suite that is documentation for your code and REALLY explains what it is expected to do, maybe even why :)
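To make the behaviour-first mindset concrete, here is a sketch in Python (the footnote below notes the ideas carry over between xUnit frameworks, so a testCalculateX equivalent translates directly). The function calculate_x and its rules are hypothetical, taken from the prose above:

```python
def calculate_x(a, b):
    if a < 0:
        raise ValueError("a must not be smaller than zero")
    if b < 0:
        raise ValueError("b must not be smaller than zero")
    return max(a, b)

# One test case per BEHAVIOUR, not one test method per method:

def test_returns_the_bigger_of_the_two_values():
    assert calculate_x(3, 7) == 7

def test_produces_an_error_when_a_is_negative():
    try:
        calculate_x(-1, 5)
        assert False, "expected an error"
    except ValueError:
        pass

def test_produces_an_error_when_b_is_negative():
    try:
        calculate_x(5, -1)
        assert False, "expected an error"
    except ValueError:
        pass
```

Note how the test names read like a specification of what the code should do.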
How do you go about deciding which piece of your code you create unit tests for?
Well, everything that ends up in the repository or anywhere near production needs a test. I don't think the author of your quotes would disagree with that, as I tried to state above.
If you don't have a test for it, it gets way harder (more expensive) to change the code, especially if it's not you making the change.
TDD is a way to ensure that you have tests for EVERYTHING, but as long as you WRITE the tests it's fine. Usually writing them on the same day helps, since you are not going to do it later, are you? :)
Response to comments:
a decent amount of methods can't be tested within a particular context because they either depend or are dependent upon other methods
Well, there are three things those methods can call:
Public methods of other classes
We can mock out other classes, so we have defined state there. We are in control of the context, so that's not a problem.
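A small sketch of what "mocking out other classes" looks like, in Python here since the mechanism is the same across frameworks. ReportGenerator and PriceCalculator are made-up names for illustration:

```python
from unittest.mock import Mock

class ReportGenerator:
    def __init__(self, calculator):
        self._calculator = calculator

    def total(self, items):
        # delegates the price lookup to the collaborator
        return sum(self._calculator.price_of(item) for item in items)

def test_total_sums_the_prices_of_all_items():
    calculator = Mock()
    calculator.price_of.return_value = 5  # defined state, no real price logic
    generator = ReportGenerator(calculator)
    assert generator.total(["a", "b", "c"]) == 15
    assert calculator.price_of.call_count == 3
```

The test controls exactly what the collaborator returns, so the behaviour of ReportGenerator is tested in isolation.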
*Protected or private methods on the same class*
Anything that isn't part of the public API of a class doesn't get tested directly, usually.
You want to test behavior, not implementation. Whether a class does all its work in one big public method or in many smaller protected methods that get called is implementation. You want to be able to CHANGE those protected methods WITHOUT touching your tests, because your tests will break if your code changes change behavior! That's what your tests are there for: to tell you when you break something :)
Public methods on the same class
That doesn't happen very often, does it? And if it does, like in the following example, there are a few ways of handling it:
$stuff = new Stuff();
$stuff->setBla(12);
$stuff->setFoo(14);
$stuff->execute();
That the setters exist and are not part of the execute method's signature is another topic ;)
What we can test here is whether execute() blows up when we set the wrong values. That setBla throws an exception when you pass a string can be tested separately, but if we want to test that those two individually allowed values (12 & 14) don't work TOGETHER (for whatever reason), then that's one test case.
If you want a "good" test suite you can, in PHP, maybe(!) add a @covers Stuff::execute annotation to make sure you only generate code coverage for this method; the other stuff that is just setup needs to be tested separately (again, if you want that).
So the point is: maybe you need to create some of the surrounding world first, but you should be able to write meaningful test cases that usually only span one or maybe two real functions (setters don't count here). The rest can be either mocked away or tested first and then relied upon (see @depends).
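The two test cases described above could look like this, sketched in Python rather than PHPUnit; the validation rules inside Stuff are invented to make the point concrete:

```python
class Stuff:
    def set_bla(self, value):
        if not isinstance(value, int):
            raise TypeError("bla must be an integer")
        self._bla = value

    def set_foo(self, value):
        self._foo = value

    def execute(self):
        # 12 and 14 are each valid on their own, but (hypothetically) not together
        if self._bla == 12 and self._foo == 14:
            raise RuntimeError("bla=12 and foo=14 must not be combined")
        return self._bla + self._foo

# The setter's own validation is one separate test case ...
def test_set_bla_rejects_strings():
    try:
        Stuff().set_bla("twelve")
        assert False, "expected a TypeError"
    except TypeError:
        pass

# ... and the combination check in execute() is another.
def test_execute_rejects_the_combination_of_12_and_14():
    stuff = Stuff()
    stuff.set_bla(12)
    stuff.set_foo(14)
    try:
        stuff.execute()
        assert False, "expected a RuntimeError"
    except RuntimeError:
        pass
```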
*Note: The question was migrated from SO and was initially about PHP/PHPUnit; that's why the sample code and references are from the PHP world. I think this is also applicable to other languages, as PHPUnit doesn't differ that much from other xUnit testing frameworks.*
Arguably they are supported in virtually every programming language.
What you need are "assertions".
These are easily coded as "if" statements:
if (!assertion) then AssertionFailure();
With this, you can write contracts by placing such assertions at the top of your code for input constraints; those at the return points are output constraints. You can even add invariants throughout your code (although these aren't really part of "design by contract").
So I argue they aren't widespread because programmers are too lazy to code them, not because you can't do it.
You can make these a little more efficient in most languages by defining a compile-time boolean constant "checking" and revising the statements a bit:
if (checking && !assertion) then AssertionFailure();
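Spelled out in Python (the names CHECKING and assertion_failure are made up; a real codebase might use the built-in `__debug__` flag instead), the pattern with input and output constraints looks like this:

```python
CHECKING = True  # set to False to disable all contract checks

def assertion_failure(message):
    raise AssertionError(message)

def floor_sqrt(n):
    # input constraint (precondition) at the top of the function
    if CHECKING and not n >= 0:
        assertion_failure("precondition violated: n >= 0")
    result = 0
    while (result + 1) * (result + 1) <= n:
        result += 1
    # output constraint (postcondition) at the return point
    if CHECKING and not result * result <= n < (result + 1) * (result + 1):
        assertion_failure("postcondition violated")
    return result
```

With CHECKING set to False the conditions short-circuit, so the checks cost almost nothing.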
If you don't like the syntax, you can resort to various language abstraction techniques such as macros.
Some modern languages give you nice syntax for this, and that's what I think you mean by "modern language support". That's support, but it's pretty thin.
What most of even the modern languages don't give you is "temporal" assertions (over arbitrary previous or following states, e.g. the temporal operator "eventually"), which you need if you want to write really interesting contracts. IF statements won't help you here.
Best Answer
DbC isn't necessarily realized with "assertions which are deactivated in production". Assuming there are no insane performance requirements, contract checks should stay in the code when it is operated in production. That way, they will help to detect bugs which have slipped through the net of unit tests and may become apparent for the first time with production data. A violated contract will make the program "crash early" instead of sweeping the issue under the rug, which could otherwise lead to nasty subsequent faults with a much harder-to-find root cause.
But even if one uses only debug-level assertions for DbC, they can be helpful to find bugs not covered by unit tests, specifically during integration tests and end-to-end tests. The latter kind of tests will usually not point you to the specific unit which has a defect. When an end-to-end test fails in a sufficiently complex system, you may know that something went wrong, but have a really hard time finding the root cause.
The situation changes a lot when your code contains assertions or contract checks: those make it more likely that a problem shows up much earlier, ideally near the code section which is responsible for it (it might not be directly the buggy code itself, but I know from first-hand experience that such assertions can make it a lot easier to spot the heart of such an issue).
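A minimal sketch of contract checks that stay active in production, in Python; BankAccount and its rules are invented for illustration. A violated contract raises immediately ("crash early") instead of letting a bad balance propagate into later, harder-to-diagnose faults:

```python
class ContractViolation(Exception):
    pass

class BankAccount:
    def __init__(self, balance):
        if balance < 0:
            raise ContractViolation("precondition: initial balance >= 0")
        self._balance = balance

    def withdraw(self, amount):
        # precondition: checked even in production builds
        if not 0 < amount <= self._balance:
            raise ContractViolation("precondition: 0 < amount <= balance")
        self._balance -= amount
        # class invariant: must hold after every public operation
        if self._balance < 0:
            raise ContractViolation("invariant: balance >= 0")
        return self._balance
```

If production data ever triggers a withdrawal the unit tests never anticipated, the failure surfaces right here, in the responsible class, rather than as a corrupted balance somewhere downstream.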