TDD Mistakes – How to Correct a Mistake in the Test After Writing Implementation?

Tags: mistakes, tdd

What is the best course of action in TDD if, after implementing the logic correctly, the test still fails (because there is a mistake in the test)?

For example, suppose you would like to develop the following function:

int add(int a, int b) {
    return a + b;
}

Suppose we develop it in the following steps:

  1. Write test (no function yet):

    // test1
    Assert.assertEquals(5, add(2, 3));
    

    Results in compilation error.

  2. Write a dummy function implementation:

    int add(int a, int b) {
        return 5;
    }
    

    Result: test1 passes.

  3. Add another test case:

    // test2 -- notice the wrong expected value (should be 11)!
    Assert.assertEquals(12, add(5, 6));
    

    Result: test2 fails, test1 still passes.

  4. Write real implementation:

    int add(int a, int b) {
        return a + b;
    }
    

    Result: test1 still passes, test2 still fails (since 11 != 12).

In this particular case: would it be better to:

  1. correct test2 and see that it now passes, or
  2. delete the new portion of the implementation (i.e. go back to step #2 above), correct test2 and watch it fail, and then reintroduce the correct implementation (step #4 above)?

Or is there some other, cleverer way?

While I understand that the example problem is rather trivial, I'm interested in what to do in the generic case, which might be more complex than the addition of two numbers.

EDIT (In response to the answer of @Thomas Junk):

The focus of this question is what TDD suggests in such a case, not what is "the universal best practice" for achieving good code or tests (which might be different than the TDD-way).

Best Answer

The absolutely critical thing is that you see the test both pass and fail.

Whether you delete the code to make the test fail and then rewrite it, or sneak it off to the clipboard only to paste it back later, doesn't matter. TDD never said you had to retype anything. It wants to know that the test passes only when it should pass and fails only when it should fail.

Seeing the test both pass and fail is how you test the test. Never trust a test you've never seen do both.


Refactoring Against The Red Bar gives us formal steps for refactoring a working test:

  • Run the test; note the green bar
  • Break the code being tested
  • Run the test; note the red bar
  • Refactor the test
  • Run the test; note the red bar
  • Un-break the code being tested
  • Run the test; note the green bar
However, we aren't refactoring a working test; we have to transform a buggy one. One concern is code that was introduced while only this test covered it: such code should be rolled back and reintroduced once the test is fixed.

If that isn't the case, because other tests already cover the code, you can simply fix the test and introduce it as a green test.

Here too, code is rolled back, but only enough to make the test fail. If that isn't enough to cover all of the code that was introduced while only the buggy test covered it, we need a bigger rollback and more tests.

Introduce a green test

  • Run the test; note the green bar
  • Break the code being tested
  • Run the test; note the red bar
  • Un-break the code being tested
  • Run the test; note the green bar

Breaking the code can mean commenting it out or moving it elsewhere, only to paste it back later. This shows us the scope of the code the test covers.
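For the add example, "breaking" the code might look like the following sketch (class name hypothetical): the real body is parked in a comment, so the corrected test must go red before we trust it.

```java
// Hypothetical sketch of deliberately broken code under test.
public class BrokenAdd {
    static int add(int a, int b) {
        // return a + b; // real implementation, commented out for now
        return 0;        // deliberately wrong stub: enough to turn the test red
    }

    public static void main(String[] args) {
        // The corrected test2 expects 11; the broken code returns 0: red bar.
        System.out.println(add(5, 6));
    }
}
```

Once the red bar confirms the test actually exercises this code, the comment swap is reversed and the test should go green.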

For these last two runs you're right back in the normal red-green cycle. You're just pasting instead of typing to un-break the code and make the test pass. So be sure you're pasting only enough to make the test pass.

The overall pattern here is to see the color of the test change the way we expect. Note that this creates a situation where you briefly have an untrusted green test. Be careful about getting interrupted and forgetting where you are in these steps.

My thanks to RubberDuck for the Embracing the Red Bar link.