However, it implies that large parts of the application's code are not covered by tests. Why? Because if you have units (and you need a lot of units to get your unit tests right), you need code that wires the units together. This code, IMHO, will get complicated enough that it deserves to be tested at a more granular level than integration tests, while it probably falls into "Dirty Hybris":
Your assumption is faulty because you are neglecting a layer of testing - acceptance testing.
Your unit tests cover individual units - the classes and methods that compose them. This enables you to test methods and classes in isolation to ensure that they are behaving as expected. Above these lie your integration tests, which test the collaboration between classes and ensure that larger modules (packages and even inter-package collaboration) work as expected. Finally, your acceptance tests are used to verify and validate your entire system, as assembled, against the user requirements.
Assuming that you have the appropriate unit and integration tests that correspond to requirements and well-defined acceptance criteria and acceptance test plans, then everything in your system is tested. Other aspects of testing (smoke tests, regression tests, and so forth) are simply an appropriate subsampling of the unit, integration, and acceptance tests.
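To make the layering concrete, here is a minimal sketch; the class names and the use of xUnit are my own assumptions for illustration, not anything from the question:

```csharp
using Xunit;

// Hypothetical classes, invented only to show the widening scope of each layer.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal percent) =>
        price - price * percent / 100m;
}

public class OrderService
{
    private readonly PriceCalculator _calculator = new PriceCalculator();

    public decimal Total(decimal price, decimal discountPercent) =>
        _calculator.ApplyDiscount(price, discountPercent);
}

public class LayeredTests
{
    [Fact] // Unit test: one class, in isolation.
    public void ApplyDiscount_ReducesPriceByPercentage() =>
        Assert.Equal(90m, new PriceCalculator().ApplyDiscount(100m, 10m));

    [Fact] // Integration test: collaboration between classes.
    public void Total_DelegatesToTheCalculator() =>
        Assert.Equal(90m, new OrderService().Total(100m, 10m));

    // An acceptance test would exercise the assembled system (through its UI or API)
    // against a user-facing requirement, typically from outside the process.
}
```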
TDD is a robust way of designing software components (“units”) interactively so that their behaviour is specified through unit tests
That particular quote is also missing something. As I was taught, TDD isn't just about unit tests, but about developing all tests first. That includes not only unit tests, but the necessary acceptance and integration tests as well.
You never said what Clamp() is supposed to do, so I'm assuming that it returns value, unless it is outside of the range, in which case it returns one of the two bounds.
I don't see any reason to think that -1, 0, or 1 are corner cases. They may often be corner cases, but there's no reason they'd act strangely in this function. If you want a 'normal' value, 42 or -63 works, but there is no need for both of them, unless you suspect that > and < don't work properly on negative numbers in C#. (I don't think you need to worry about that.)
So we could just use −2147483648, 'a normal value', and 2147483647. (We could even say that testing with the max/min integer values isn't really necessary. Presumably, C#'s > and < work up to the minimum and maximum; there isn't any danger of integer overflow.)
There are 6 permutations of 3 values, so we're down to 6 test cases. That's not many, and we could easily just write them down and use them, but we don't know for certain that we've selected test cases that cover everything (all we've done so far is reduce the original set of test cases to something smaller).
If we want to be sure we've caught all the cases that matter, we could reduce the massively large set of input values (4 billion cubed) by partitioning them into equivalence classes. Then we only need 1 test per equivalence class, since the equivalence class would be defined as a set of inputs that all act alike.
The value of Clamp(a, b, c) depends on whether a is in the range, above it, or below it. There should be 3 equivalence classes: [a < b and a < c], [a > b and a > c], and otherwise. The return value will be b, c, or a, respectively. This tells us not only what the tests should be, but how to write the code.
(There is one little thing that we haven't run into: what if the lower bound is higher than the upper bound? What I said in the previous paragraph applies if the assumption I made at the top is right, but not if it isn't. It can be fixed easily, though, by swapping b and c, or by returning Clamp(a, c, b) if b > c.)
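Putting the three equivalence classes and the bound-swap together, a minimal implementation sketch could look like the following; this is my reading of the intended behaviour, not the asker's actual code:

```csharp
public static int Clamp(int value, int lower, int upper)
{
    // Tolerate swapped bounds instead of failing on them.
    if (lower > upper)
        return Clamp(value, upper, lower);

    if (value < lower) return lower; // below the range
    if (value > upper) return upper; // above the range
    return value;                    // inside the range
}
```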
Best Answer
Regression testing
It's all about regression testing.
Imagine the next developer looking at your method and noticing that you are using magic numbers. He was told that magic numbers are evil, so he creates two constants, one for the number two, the other for the number three. There is nothing wrong with this change; it's not like he was modifying your already correct implementation.
Being distracted, he inverts the two constants.
He commits the code, and everything seems to work fine, because there are no regression tests running after each commit.
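To make the scenario concrete, here is a hedged illustration; the class, the formula, and the constant names are invented for the example and have nothing to do with the asker's code:

```csharp
using Xunit;

public static class Shipping
{
    // Extracted from the magic numbers 2 and 3... but assigned the wrong way round.
    private const decimal BaseFee = 3m;   // should have been 2
    private const decimal PerKgRate = 2m; // should have been 3

    public static decimal Cost(decimal weightKg) => BaseFee + PerKgRate * weightKg;
}

public class ShippingRegressionTests
{
    [Fact]
    public void Cost_OfTwoKilograms_IsEight()
    {
        // Written against the original behaviour: 2 + 3 * 2 = 8.
        // With the inverted constants the method returns 3 + 2 * 2 = 7,
        // so this test fails on the very commit that introduced the bug.
        Assert.Equal(8m, Shipping.Cost(2m));
    }
}
```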
One day (it could be weeks later), something breaks elsewhere. And by elsewhere, I mean in the completely opposite location of the code base, which seems to have nothing to do with your polynomial function. Hours of painful debugging lead to the culprit. During this time, the application continues to fail in production, causing a lot of issues for your customers.
Keeping the original tests you wrote could prevent such pain. The distracted developer would commit the code and nearly immediately see that he broke something; such code won't even reach production. Unit tests will additionally be very precise about the location of the error, so solving it wouldn't be difficult.
A side effect...
Actually, most refactoring is heavily based on regression testing. Make a small change. Test. If it passes, everything is fine.
The side effect is that if you don't have tests, then practically any refactoring becomes a huge risk of breaking the code. Given that in many cases it's already difficult to explain to management that refactoring should be done, it would be even harder to do so after your previous refactoring attempts introduced multiple bugs.
By having a complete suite of tests, you encourage refactoring, and thus better, cleaner code. Being risk-free, it becomes very tempting to refactor more, on a regular basis.
Changes in requirements
Another essential aspect is that requirements change. You may be asked to handle complex numbers, and suddenly, you need to search your version control log to find the previous tests, restore them, and start adding new tests.
Why all this hassle? Why remove tests only to add them back later? You could have kept them in the first place.