I am not sure I understand why you want to do this (apart from the intellectual challenge). This kind of code, where you work directly with a system-level API, is very hard to unit test, TDD-style or not, and frankly, I don't find it very valuable to attempt in a real project.
Most of the task you describe is calling the right low-level API method with the right parameters, and the rest of the code may be so trivial that it wouldn't necessarily warrant introducing a dedicated interface, mock objects, and so on. I would be content with an integration test verifying at a higher level that the whole thing works. But this is just my 2 cents.
Unit testing for me is not about following rulebooks or strict definitions. In real life, I don't really care whether my tests are "real" unit tests or not; as long as my code is covered by automated, repeatable tests, I am fine. I prefer the pragmatic approach.
You never said what Clamp() is supposed to do, so I'm assuming that it returns value, unless value is outside of the range, in which case it returns one of the two bounds.
I don't see any reason to think that -1, 0, or 1 are corner cases. They may often be corner cases, but there's no reason they'd act strangely in this function. If you want a 'normal' value, 42 or -63 works, but there is no need for both of them, unless you suspect that > and < don't work properly on negative numbers in C#. (I don't think you need to worry about that.)
So we could just use -2147483648, 'a normal value', and 2147483647. (We could even say that testing with the max/min integer values isn't really necessary. Presumably, C#'s > and < work all the way up to the minimum and maximum; there is no danger of integer overflow.)
There are 6 permutations of 3 values, so we're down to 6 testcases. 6 testcases is not much, and we can easily just write them down and use them, but we don't know for certain that we've selected test cases that cover everything (all we've done so far is reduce the original set of test cases to something smaller).
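Those 6 testcases can be written down directly. Here is a hedged sketch in Java (the discussion is about C#, but the logic is line-for-line identical); the clamp name and its bounds-swapping behavior are my assumptions, and the swap is needed because three of the six orderings hand the function reversed bounds, a case discussed at the end of this answer:

```java
public class ClampPermutations {
    // Assumed clamp(value, lo, hi): returns value limited to [lo, hi].
    // Reversed bounds are swapped first so all 6 orderings are well-defined.
    static int clamp(int value, int lo, int hi) {
        if (lo > hi) { int t = lo; lo = hi; hi = t; }
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    public static void main(String[] args) {
        int min = Integer.MIN_VALUE, mid = 42, max = Integer.MAX_VALUE;
        // All 6 ways of assigning the three chosen values to the three
        // parameters; with this particular triple, every call returns 42.
        int[][] cases = {
            {min, mid, max}, {mid, min, max}, {max, min, mid},
            {min, max, mid}, {mid, max, min}, {max, mid, min},
        };
        for (int[] c : cases) {
            if (clamp(c[0], c[1], c[2]) != mid) throw new AssertionError();
        }
        System.out.println("6 testcases pass");
    }
}
```

An amusing side effect of this triple: all six permutations are expected to return the middle value, which makes the test table easy to eyeball.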
If we want to be sure we've caught all the cases that matter, we could reduce the massively large set of input values (4 billion cubed) by partitioning them into equivalence classes. Then we only need 1 test per equivalence class, since the equivalence class would be defined as a set of inputs that all act alike.
The value of Clamp(a, b, c) depends on whether a is in the range, above it, or below it. There should be 3 equivalence classes: [a < b and a < c], [a > b and a > c], and otherwise. The return value will be b, c, or a, respectively. This tells us not only what the tests should be, but how to write the code.
(There is one little thing that we haven't run into: what if the lower bound is higher than the upper bound? What I said in the previous paragraph applies if the assumption I made at the top is right, but not if it isn't. It can be fixed easily, though, by swapping b and c, or by returning Clamp(a, c, b) if b > c.)
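The swap fix from that parenthetical can be sketched the same way (again in Java for illustration; the recursive-call approach is the "return Clamp(a, c, b)" idea from the text):

```java
public class ClampSafe {
    // Handles reversed bounds by recursing once with b and c swapped.
    static int clamp(int a, int b, int c) {
        if (b > c) return clamp(a, c, b); // reversed bounds: swap and retry
        if (a < b) return b;
        if (a > c) return c;
        return a;
    }

    public static void main(String[] args) {
        // Same inputs, bounds given in either order:
        if (clamp(5, 0, 10) != 5) throw new AssertionError();
        if (clamp(5, 10, 0) != 5) throw new AssertionError();
        if (clamp(-3, 10, 0) != 0) throw new AssertionError();
        System.out.println("reversed bounds handled");
    }
}
```

The recursion can fire at most once, since the swapped call always has b <= c; an explicit swap of two locals would work just as well.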
Best Answer
What you call "big design up front" I call "sensible planning of your class architecture."
You can't grow an architecture from unit tests. Even Uncle Bob says that.
https://hanselminutes.com/171/return-of-uncle-bob#
I think it would be more sensible to approach TDD from a perspective of validating your structural design. How do you know the design is incorrect if you don't test it? And how do you verify that your changes are correct without also changing the original tests?
Software is "soft" precisely because it is subject to change. If you are uncomfortable about the amount of change, continue to gain experience in architectural design, and the number of changes you will need to make to your application architectures will decrease over time.