Test driven development: What if the bug is in the interface?

testing, workflow

I read the latest Coding Horror post, and one of the comments touched a nerve for me:

This is the type of situation that test driven design/refactoring are supposed to fix. If (big if) you have tests for the interfaces, rewriting the implementation is risk-free, because you will know whether you caught everything.

Now, in theory I like the idea of test driven development, but every time I've tried to make it work, it hasn't gone particularly well: I get out of the habit, and the next thing I know, the tests I originally wrote not only don't pass, they no longer reflect the design of the system.

It's all well and good if you've been handed a perfect design from on high, straight from the start (which in my experience never actually happens). But what if, halfway through building a system, you notice a critical flaw in the design? Then it's no longer a simple matter of diving in and fixing "the bug"; you also have to rewrite all the tests. A fundamental assumption was wrong, and now you have to change it. At that point test driven development is no longer a handy thing; it just means there's twice as much work to do.

I've tried to ask this question before, both of peers and online, but I've never heard a very satisfactory answer. … Oh wait, what was the question?

How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space? How do you make the TDD practice work for you instead of against you?

Update:
I still don't think I fully understand it all, so I can't really decide which answer to accept. Most of my leaps in understanding have happened in the comments, not in the answers. Here's a collection of my favorites so far:

"Anyone who uses terms like "risk-free"
in software development is indeed full
of shit. But don't write off TDD just
because some of its proponents are
hyper-susceptible to hype. I find it
helps me clarify my thinking before
writing a chunk of code, helps me to
reproduce bugs and fix them, and makes
me more confident about refactoring
things when they start to look ugly"

-Kristopher Johnson

"In that case, you rewrite the tests
for just the portions of the interface
that have changed, and consider
yourself lucky to have good test
coverage elsewhere that will tell you
what other objects depend on it."

-rcoder

"In TDD, the reason to write the tests
is to do design. The reason to make
the tests automated is so that you can
reuse them as the design and code
evolve. When a test breaks, it means
you've somehow violated an earlier
design decision. Maybe that's a
decision you want to change, but it's
good to get that feedback as soon as
possible."

-Kristopher Johnson

[about testing interfaces] "A test would insert some elements, check that the size corresponds to the number of elements inserted, check that contains() returns true for them but not for things that weren't inserted, check that remove() works, etc. All of these tests would be identical for all implementations, and of course you would run the same code for each implementation and not copy it. So when the interface changes, you'd only have to adjust the test code once, not once for each implementation."

– Michael Borgwardt
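To make that concrete, here is a minimal sketch of such a shared contract test in Java with JUnit 4. The choice of java.util.Set as the interface under test, the class names, and the createSet() factory method are illustrative assumptions, not anything from the quote; the point is that the @Test methods live in one abstract class, and each implementation only contributes a tiny subclass.

    import static org.junit.Assert.*;
    import org.junit.Test;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.TreeSet;

    // Contract tests written once, against the interface.
    public abstract class SetContractTest {

        // Hypothetical factory: each implementation's subclass supplies an instance.
        protected abstract Set<String> createSet();

        @Test
        public void sizeMatchesNumberOfInsertedElements() {
            Set<String> set = createSet();
            set.add("a");
            set.add("b");
            assertEquals(2, set.size());
        }

        @Test
        public void containsReturnsTrueOnlyForInsertedElements() {
            Set<String> set = createSet();
            set.add("a");
            assertTrue(set.contains("a"));
            assertFalse(set.contains("b"));
        }

        @Test
        public void removeDeletesAnElement() {
            Set<String> set = createSet();
            set.add("a");
            set.remove("a");
            assertFalse(set.contains("a"));
        }
    }

    // One small subclass per implementation; the inherited tests run unchanged.
    class HashSetContractTest extends SetContractTest {
        @Override protected Set<String> createSet() { return new HashSet<>(); }
    }

    class TreeSetContractTest extends SetContractTest {
        @Override protected Set<String> createSet() { return new TreeSet<>(); }
    }

If the interface changes, only SetContractTest needs editing; every implementation's subclass picks up the adjusted tests automatically, which is exactly the "adjust the test code once" property described above.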

Best Answer

One of the practices of TDD is Baby Steps (which can feel very tedious at the beginning): taking really small steps so that you come to understand your problem space and build a good, satisfactory solution to your problem.

If you already know the whole design of your application up front, you aren't doing TDD at all. The design should emerge while you write your tests.

So the suggestion I would give is to concentrate on baby steps in order to arrive at a proper, testable design.
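For a sense of what a baby step can look like, here is a sketch of a single red-green-refactor cycle in Java with JUnit 4. PriceCalculator is a made-up example class, not something from the question; the idea is to write one tiny failing test, write just enough code to pass it, clean up, and repeat.

    import static org.junit.Assert.*;
    import org.junit.Test;

    // Step 1 (red): the smallest failing test that moves the design forward.
    public class PriceCalculatorTest {

        @Test
        public void emptyCartCostsNothing() {
            assertEquals(0, new PriceCalculator().total());
        }

        @Test
        public void totalIsTheSumOfItemPrices() {
            PriceCalculator calc = new PriceCalculator();
            calc.addItem(300);
            calc.addItem(200);
            assertEquals(500, calc.total());
        }
    }

    // Step 2 (green): just enough production code to make those tests pass.
    class PriceCalculator {
        private int totalInCents = 0;

        void addItem(int priceInCents) {
            totalInCents += priceInCents;
        }

        int total() {
            return totalInCents;
        }
    }

    // Step 3 (refactor): tidy up, then write the next tiny test.
    // Each cycle surfaces one small design decision at a time, so a flawed
    // assumption is discovered while it is still cheap to change.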