Is there a point at which the process gets in the way and becomes an end unto itself?
Heavy processes are common, unfortunately. Some people - especially management - religiously imagine that processes produce products. So they overdo the processes and forget that it's really a handful of hard-working, smart people who actually create the products. For upper management, it's frightening to even think that their business is in the hands of a few geeks, so they close their eyes to reality and think of their dear "process" instead, which gives them the illusion of control.
That's why agile startups with a handful of good engineers can beat big, established corporations whose workers spend 95% of their energy on process and reporting. Some examples of once-small startups that beat their competitors and/or created completely new markets:
- Apple (the Apple I was created by 1 engineer; there were 3 men at the company back then).
- Google (created originally by 2 programmers).
- Facebook (1-man effort originally).
- Microsoft (2-man company in 1975).
One could easily say that these are just outliers, extreme exceptions, and that to do something serious, you'd better be a big, established corporation. But the list goes on. And on. It's embarrassingly long. Almost every corporation that is major today started as a garage shop that did something unusual. Something weird. They were doing it wrong. Do you think they were doing it according to the process?
It looks to me like you're experiencing cognitive dissonance, trying to believe two contradictory ideas and accept both as valid. The way to resolve it is to understand that one (or possibly both) must be incorrect, and find out which it is. In this case, the problem is that those edicts are based on a false premise, which Uncle Bob repeats several times a few lines further down:
However, think about what would happen if you walked in a room full of people working this way. Pick any random person at any random time. A minute ago, all their code worked.
Let me repeat that: A minute ago all their code worked! And it doesn't matter who you pick, and it doesn't matter when you pick. A minute ago all their code worked!
That's the shining promise of TDD: test everything, make it so all your tests pass, and all your code will work.
Problem is, that's a blatant falsehood.
Test everything, make it so all your tests pass, and all your tests will pass, nothing more, nothing less. That doesn't mean anything particularly useful; it only means that none of the error conditions that you thought to test for exist in the codebase. (But if you thought to test for them, then you were paying enough attention to that possibility to write the code carefully enough to get it right in the first place, so that's less helpful than it might be.)
It doesn't mean that any error you didn't think of is not present in the codebase. It also doesn't mean that your tests--which are also code written by you--are bug-free. (Take that concept to its logical conclusion and you end up caught in infinite recursion. It's tests all the way down.)
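To make the point concrete, here is a minimal sketch (a hypothetical function and test, not taken from any real project) of a fully green suite sitting on top of an obvious bug that simply wasn't in anyone's list of things to test:

```python
def days_in_month(month: int, year: int) -> int:
    """Return the number of days in a month (buggy: leap years are ignored)."""
    lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return lengths[month - 1]

def test_days_in_month():
    # Only the cases the author thought of, and all of them pass.
    assert days_in_month(1, 2021) == 31
    assert days_in_month(4, 2021) == 30
    assert days_in_month(2, 2021) == 28

test_days_in_month()  # green, yet days_in_month(2, 2020) is still wrong
```

Every test passes, so "a minute ago all their code worked", except for February in leap years, which nobody thought to test.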
To give an example, there's an open-source scripting library that I use whose author boasts of over 90% unit test coverage and 100% coverage of all core functionality. But the issue tracker is almost up to 300 bugs now and they keep coming. I think I found five in the first few days of using it in real-world tasks. (To his credit, the author got them fixed very quickly, and it's a good-quality library overall. But that doesn't change the fact that his "100%" unit tests didn't find these issues, which showed up almost immediately under actual usage.)
The other major problem is that as you go on,
every hour you are producing several tests. Every day dozens of tests. Every month hundreds of tests. Over the course of a year you will write thousands of tests.
...and then your requirements change. You have to implement a new feature, or change an existing one, and then 10% of your unit tests break, and you need to manually go over all of them to discern which ones are broken because you made a mistake, and which are broken because the tests themselves are no longer testing for correct behavior. And 10% of thousands of tests is a lot of unnecessary extra work. (Especially if you're doing it one test at a time, as the three edicts demand!)
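As a rough illustration (hypothetical names, nobody's real code), this is what the triage looks like after a requirements change: one red test is just a stale expectation, another still guards real behavior, and only a human reading both can tell them apart:

```python
def format_price(amount: float) -> str:
    # New requirement: prices are shown with a currency code, not a "$" sign.
    return f"{amount:.2f} USD"

def test_format_price_old_requirement():
    # Written against the old spec. It now fails even though the code
    # does exactly what the new requirement demands; the test is stale.
    assert format_price(10) == "$10.00"

def test_format_price_rounding():
    # Still a valid test: if this one fails, it is a genuine bug.
    assert format_price(9.999) == "10.00 USD"
```

Run these under a test runner such as pytest and you get one failure, but the failure report alone cannot tell you whether it's the code or the test that needs fixing.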
When you think about it, this makes unit testing a lot like global variables, or several other bad design "patterns": it may seem helpful and save you some time and effort, but you don't notice the costs until your project becomes big enough that their overall effect is disastrous, and by that time it's too late.
It is now two decades since it was pointed out that program testing may convincingly demonstrate the presence of bugs, but can never demonstrate their absence. After quoting this well-publicized remark devoutly, the software engineer returns to the order of the day and continues to refine his testing strategies, just like the alchemist of yore, who continued to refine his chrysocosmic purifications.
-- Edsger W. Dijkstra. (Written in 1988, so it's now closer to
4.5 decades.)
Best Answer
Of course not. You should finish both the test and the class. Committing something [1] that doesn't even compile makes no sense, and will certainly make people working on the same project angry if you do it regularly.
No, do not commit a failing test. LeBlanc's Law states:
Later equals never.
and your test might keep failing for a long time. It is better to fix the problem as soon as it is detected.
Also, the TDD development style tells you to follow the red-green-refactor cycle: write a failing test, write just enough code to make it pass, then refactor.
If you check in a failing test, that means you didn't complete the cycle.
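As a small sketch (hypothetical names) of what "finish both the test and the class" means in practice, the test and the code it exercises are completed together, verified green locally, and only then committed as one unit:

```python
class ShoppingCart:
    """Minimal cart: just enough code to make the test below pass."""

    def __init__(self):
        self._items = []

    def add(self, price: float) -> None:
        self._items.append(price)

    def total(self) -> float:
        return sum(self._items)

def test_cart_total():
    cart = ShoppingCart()
    cart.add(3.50)
    cart.add(1.25)
    assert cart.total() == 4.75

test_cart_total()  # green locally, so the test and the class go into the same commit
```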
[1] When I said commit, I meant really commit to the trunk (for git users, push your changes so other developers get them).