A lot of the examples I've seen of tests get down to the minutiae, covering all facets of the code.
So? You don't have to test everything. Just the relevant things.
Since I'm the only developer and I'm very close to the code in the entire project, it is much more efficient to follow a write-then-manually-test pattern.
That's actually false. It's not more efficient. It's really just a habit.
What other solo developers do is write a sketch or outline, write the test cases and then fill in the outline with final code.
That's very, very efficient.
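A minimal sketch of that pattern in Python, using the standard library's `unittest` (the `slugify` function here is a hypothetical example, not anything from this discussion): write the outline, write the cases against the outline, then fill in the body until they pass.

```python
import re
import unittest

def slugify(title):
    """Turn a title into a URL slug (body filled in after the tests were written)."""
    # Lowercase, collapse runs of non-alphanumerics into a single hyphen,
    # and strip hyphens from both ends.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class SlugifyTest(unittest.TestCase):
    # These cases were written against the outline, before the body existed.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_punctuation_is_dropped(self):
        self.assertEqual(slugify("C++ & Rust!"), "c-rust")
```

Run with `python -m unittest <file>`. The point is that the test cases pin the behaviour down before the regex is written, so "filling in the outline" has a finish line.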
I also find requirements and features change frequently enough that maintaining tests would add a considerable amount of drag on a project.
That's false, also. The tests are not the drag. The requirements changes are the drag.
You have to fix the tests to reflect the requirements, whether they're minutiae or high-level, written first or written last.
The code's not done until the tests pass. That's the one universal truth of software.
You can have a limited "here it is" acceptance test.
Or you can have some unit tests.
Or you can have both.
But no matter what you do, there's always a test to demonstrate that the software works.
I'd suggest that a little bit of formality and a nice unit-test tool suite make that test a lot more useful.
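To make the contrast concrete, here is a hedged sketch in Python (the `parse_price` function is hypothetical): the informal "here it is" check, followed by the same check formalized with the standard library's `unittest` so it can be re-run forever.

```python
import unittest

def parse_price(text):
    """Hypothetical function under test: turn '$1,299.50' into 1299.50."""
    return float(text.replace("$", "").replace(",", ""))

# The informal "here it is" acceptance test: run it once, eyeball the output.
print(parse_price("$1,299.50"))

# The formal version: the same checks, but repeatable and self-verifying.
class ParsePriceTest(unittest.TestCase):
    def test_dollar_amount_with_thousands_separator(self):
        self.assertEqual(parse_price("$1,299.50"), 1299.50)

    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)
```

Run with `python -m unittest <file>`. The eyeball check proves it once; the unit tests prove it every time the code changes.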
Let me begin by thanking you for sharing your experience and voicing your concerns... which, I have to say, are not uncommon.
- Time/Productivity: Writing tests is slower than not writing tests. If you scope it to just that, I'd agree. However, if you ran a parallel effort with a non-TDD approach, chances are that the time you'd spend in break-detect-debug-and-fix cycles on existing code would put you in the net negative. For me, TDD is the fastest I can go without compromising on my code-confidence. If you find things in your method that are not adding value, eliminate them.
- Number of tests: If you code up N things, you need to test N things. To paraphrase one of Kent Beck's lines: "Test only if you would want it to work."
- Getting stuck for hours: I do too (sometimes, though not for more than 20 minutes before I stop the line). It's just your code telling you that the design needs some work. A test is just another client of your SUT class. If a test finds it difficult to use your type, chances are your production clients will too.
- Similar-tests tedium: This needs some more context for me to write up a counterargument. That said, stop and think about the similarity. Can you data-drive those tests somehow? Is it possible to write tests against a base type? Then you just need to run the same set of tests against each derivation. Listen to your tests. Be the right kind of lazy and see if you can figure out a way to avoid the tedium.
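As one hedged example of data-driving similar tests in Python's `unittest` (the leap-year function is a hypothetical SUT), `subTest` turns N near-identical methods into one table of cases:

```python
import unittest

def is_leap_year(year):
    """Hypothetical SUT: the Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_known_years(self):
        # One table of cases instead of one near-identical method per case.
        cases = [
            (2000, True),   # divisible by 400
            (1900, False),  # divisible by 100 but not 400
            (2024, True),   # divisible by 4
            (2023, False),  # not divisible by 4
        ]
        for year, expected in cases:
            with self.subTest(year=year):
                self.assertEqual(is_leap_year(year), expected)
```

Each `subTest` reports failures independently, so one bad row in the table doesn't hide the others.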
- Stopping to think about what you need to do next (the test/spec) isn't a bad thing. On the contrary, it's recommended, so that you build "the right thing". Usually, if I can't think of how to test something, I can't think of the implementation either. It's a good idea to blank out implementation ideas until you get there; a simpler solution may be overshadowed by a YAGNI-ish pre-emptive design.
And that brings me to the final query: how do I get better? My (or an) answer is: read, reflect, and practice.
E.g., of late I keep tabs on:
- whether my rhythm reflects RG[Ref]RG[Ref]RG[Ref] or RRRRGRRef;
- the % of time spent in the Red / compile-error state;
- whether I'm stuck in a Red / broken-build state.
Best Answer
There are different ways to count coverage. Many tools check which lines are executed by tests, but they don't check whether the tests actually verify anything. So in theory you could have 100% coverage without a single assert in your tests, in effect testing nothing.
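A small Python sketch of that failure mode (the `divide` function is hypothetical): this test executes both branches, so a line-coverage tool reports 100%, yet it contains no assertions and would pass even if `divide` returned garbage.

```python
import unittest

def divide(a, b):
    """Hypothetical code under 'test'."""
    if b == 0:
        return None
    return a / b

class DivideTest(unittest.TestCase):
    def test_exercises_everything_verifies_nothing(self):
        # Both branches execute, so line coverage is 100%...
        divide(10, 2)
        divide(10, 0)
        # ...but there is no assert, so this test can never fail.
```

Running a coverage tool over this suite shows every line green, which is exactly why coverage alone says nothing about test quality.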
However, you say you have 70% coverage but don't catch 70% of the problems. That is not uncommon, since quite often a small subset of your code contains most of the problems. I think Code Complete says something along the lines of 20% of the code causing 80% of the bugs. The problematic area in your code is usually complex, and that is why it's usually untested. So you might have good code coverage, yet the part that is not covered may be the part that needs tests most.
In this article, Martin Fowler says test coverage is not a good measurement of the quality of your code, but it is a good way to find problematic areas in it. So in your case, assuming your existing tests are good tests, you should now be able to pinpoint the problem area in your code: the remaining 30%.