Agile TDD – Practical Relevance of Test Driven Development Limitations

Tags: agile, development-methodologies, development-process, tdd

In Test Driven Development (TDD) you start with a suboptimal solution and then iteratively produce better ones by adding test cases and by refactoring. The steps are supposed to be small, meaning that each new solution will somehow be in the neighborhood of the previous one.
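
To make these small steps concrete, here is a hedged sketch of a typical TDD progression in Python (the to_roman example and every name in it are invented for illustration):

    import unittest

    # Step 1: a test for to_roman(1) was satisfied by the deliberately
    # suboptimal fake "return 'I'". Step 2: a test for to_roman(3) forced
    # a slightly more general solution, reached by refactoring the previous
    # one (a neighboring solution), not by starting over.
    def to_roman(n):
        return "I" * n  # only correct for the inputs tested so far (1..3)

    class ToRomanTests(unittest.TestCase):
        def test_one(self):
            self.assertEqual(to_roman(1), "I")

        def test_three(self):
            self.assertEqual(to_roman(3), "III")

    if __name__ == "__main__":
        unittest.main()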

This resembles mathematical local optimization methods like gradient descent or local search. A well-known limitation of such methods is that they do not guarantee to find the global optimum, or even an acceptable local optimum. If your starting point is separated from all acceptable solutions by a large region of bad solutions, it is impossible to get there and the method will fail.
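
To see this failure mode concretely, here is a minimal hill-climbing sketch; the objective f below is an assumed toy function, invented purely for illustration:

    # Hill climbing: repeatedly move to the best neighboring point.
    def hill_climb(f, x, step=0.1, iters=1000):
        for _ in range(iters):
            best = max((x - step, x, x + step), key=f)
            if best == x:
                break  # no neighbor improves f: stuck at a local optimum
            x = best
        return x

    # Toy objective (assumed for illustration): a local maximum near x = -2,
    # the global maximum near x = +2, and a deep valley between them.
    def f(x):
        return -(x * x - 4) ** 2 + x

    print(hill_climb(f, -3.0))  # ends near -2: trapped at the local optimum
    print(hill_climb(f, 1.0))   # started in the right basin, ends near +2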

To be more specific: I am thinking of a scenario where you have implemented a number of test cases and then find that the next test case would require a completely different approach. You will have to throw away your previous work and start over again.
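
A hypothetical code-level instance (the PriceTable example and all its names are invented): suppose the first twenty tests only ever demanded exact-key lookups, so a plain hash map was the natural small step. A later test asking for "the value at or before a given timestamp" needs ordering, which no amount of local refactoring around a hash map provides, so the core data structure has to be replaced:

    import bisect

    # The earlier, test-driven design was simply:
    #     class PriceTable:
    #         def __init__(self): self._prices = {}
    #         def set(self, ts, price): self._prices[ts] = price
    #         def get(self, ts): return self._prices[ts]
    # The new requirement forces a sorted structure instead:
    class PriceTable:
        def __init__(self):
            self._timestamps = []  # kept sorted for binary search
            self._prices = {}

        def set(self, ts, price):
            if ts not in self._prices:
                bisect.insort(self._timestamps, ts)
            self._prices[ts] = price

        def get_at_or_before(self, ts):
            i = bisect.bisect_right(self._timestamps, ts)
            if i == 0:
                raise KeyError(ts)
            return self._prices[self._timestamps[i - 1]]

    table = PriceTable()
    table.set(10, 1.0)
    table.set(20, 2.0)
    print(table.get_at_or_before(15))  # 1.0: needs order, not just hashing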

This thought can actually be applied to all agile methods that proceed in small steps, not only to TDD. Does this proposed analogy between TDD and local optimization have any serious flaws?

Best Answer

A well-known limitation of such methods is that they do not guarantee to find the global optimum, or even an acceptable local optimum.

To make your comparison more accurate: for some kinds of problems, iterative optimization algorithms are very likely to produce good local optima; for other kinds of problems, they can fail.

I am thinking of a scenario where you have implemented a number of test cases and then find that the next test case would require a completely different approach. You will have to throw away your previous work and start over again.

I can imagine a situation where this can happen in reality: when you pick the wrong architecture in a way that forces you to recreate all your existing tests from scratch. Let's say you start implementing your first 20 test cases in programming language X on operating system A. Unfortunately, requirement 21 demands that the whole program run on operating system B, where X is not available. You then need to throw away most of your work and reimplement it in language Y. (Of course, you would not throw the code away completely, but port it to the new language and system.)

This teaches us that, even when using TDD, it is a good idea to do some overall analysis and design beforehand. This, however, is also true for any other approach, so I don't see it as an inherent TDD problem. And for the majority of real-world programming tasks, you can just pick a standard architecture (say, programming language X, operating system Y, database system Z on hardware XYZ) and be relatively sure that an iterative or agile methodology like TDD won't lead you into a dead end.

To quote Robert Harvey: "You can't grow an architecture from unit tests." Or pdr: "TDD doesn't only help me come to the best final design, it helps me get there in fewer attempts."

So actually what you wrote

If your starting point is separated from all acceptable solutions by a large region of bad solutions, it is impossible to get there and the method will fail.

might become true: when you pick the wrong architecture, you will probably not be able to reach the required solution from there.

On the other hand, when you do some overall planning beforehand and pick the right architecture, using TDD should be like starting an iterative search algorithm in an area from which you can expect to reach the global optimum (or at least a good-enough local optimum) within a few cycles.