Let me begin by thanking you for sharing your experience and voicing your concerns... which, I have to say, are not uncommon.
- Time/Productivity: Writing tests is slower than not writing tests. If you scope it to just that, I'd agree. However, if you ran a parallel effort with a non-TDD approach, chances are that the time spent on the break-detect-debug-and-fix cycle for existing code would put you in the net negative. For me, TDD is the fastest I can go without compromising my confidence in the code. If you find things in your method that are not adding value, eliminate them.
- Number of tests: If you code up N things, you need to test N things. To paraphrase one of Kent Beck's lines: "Test only if you would want it to work."
- Getting stuck for hours: I do too (sometimes, and never more than 20 minutes before I stop the line). It's just your code telling you that the design needs some work. A test is just another client of your SUT class. If a test finds it difficult to use your type, chances are your production clients will too.
- Similar-tests tedium: This needs more context for me to write up a counterargument. That said, stop and think about the similarity. Can you data-drive those tests somehow? Is it possible to write tests against a base type? Then you just need to run the same set of tests against each derivation. Listen to your tests. Be the right kind of lazy and see if you can figure out a way to avoid the tedium.
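To illustrate the data-driving idea, here is a minimal Python sketch (the function and the cases are hypothetical, not from the original question): one table of cases replaces a pile of near-identical test methods.

```python
# Hypothetical SUT: the function under test.
def add(a, b):
    return a + b

# Data-driven: one row per case instead of one copy-pasted test per case.
CASES = [
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
]

def test_add_cases():
    for a, b, expected in CASES:
        assert add(a, b) == expected, f"add({a}, {b}) != {expected}"

test_add_cases()
print("all cases passed")
```

Most test frameworks have first-class support for this (parameterized tests), so the loop above is usually a one-line decorator in practice.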
- Stopping to think about what you need to do next (the test/spec) isn't a bad thing. On the contrary, it's recommended, so that you build "the right thing". If I can't think of how to test something, I usually can't think of the implementation either. It's a good idea to hold off on implementation ideas until you get there; a simpler solution may be overshadowed by a YAGNI-ish pre-emptive design.
And that brings me to the final query: how do I get better? My (or an) answer is: read, reflect, and practice.
For example, of late I keep tabs on:
- whether my rhythm reflects RG[Ref]RG[Ref]RG[Ref] or RRRRGRRef;
- the percentage of time spent in the Red / compile-error state;
- whether I'm stuck in a Red / broken-builds state.
What you actually want to test here, I assume, is that given a specific set of results from the randomiser, the rest of your method performs correctly.
If that's what you're looking for, then mock out the randomiser to make it deterministic within the scope of the test.
I generally have mock objects for all kinds of non-deterministic or unpredictable (at the time of writing the test) data, including GUID generators and DateTime.Now.
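A minimal sketch of that injection pattern, in Python with hypothetical names (the same idea carries over to GUID generators and clocks): the SUT takes the randomiser as a dependency, so a test can substitute a deterministic stand-in.

```python
import random

# Hypothetical SUT: picks a winner from entrants using an injected RNG.
# Because the randomiser is a parameter rather than a hard-coded call,
# a test can replace it with a deterministic double.
def pick_winner(entrants, rng=random):
    return rng.choice(entrants)

# Test double: always "chooses" a fixed element.
class FixedRng:
    def __init__(self, index):
        self.index = index

    def choice(self, seq):
        return seq[self.index]

# Production call: genuinely random.
entrants = ["alice", "bob", "carol"]
pick_winner(entrants)

# Test call: fully deterministic.
assert pick_winner(entrants, rng=FixedRng(1)) == "bob"
print("deterministic pick verified")
```

In .NET the equivalent is usually an interface (e.g. wrapping `Random`, `Guid.NewGuid`, or `DateTime.Now`) injected through the constructor.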
Edit, from comments: you have to mock the PRNG (that term escaped me last night) at the lowest level possible, i.e. when it generates the array of bytes, not after you turn those into Int64s. Or even at both levels, so you can test that your conversion to an array of Int64s works as intended, and then test separately that your conversion to an array of DateTimes works as intended. As Jonathon said, you could do that by giving it a set seed, or you can give it the array of bytes to return.
I prefer the latter because it won't break if the framework implementation of a PRNG changes. However, one advantage to giving it the seed is that if you find a case in production that didn't work as intended, you only need to have logged one number to be able to replicate it, as opposed to the whole array.
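The two options can be sketched as follows (Python, using `random.Random` as a stand-in PRNG; `bytes_to_int64s` is a hypothetical conversion function, not from the original code):

```python
import random
import struct

# Hypothetical conversion under test: raw bytes -> signed 64-bit ints.
def bytes_to_int64s(raw):
    count = len(raw) // 8
    return list(struct.unpack("<%dq" % count, raw))

# Option 1: a set seed makes the whole byte stream reproducible.
# (Random.randbytes requires Python 3.9+.)
raw_a = random.Random(42).randbytes(16)
raw_b = random.Random(42).randbytes(16)
assert raw_a == raw_b  # same seed, same bytes, same test run every time

# Option 2: stub the byte source itself, so the test keeps passing
# even if the framework's PRNG algorithm changes.
class StubByteSource:
    def __init__(self, fixed):
        self.fixed = fixed

    def randbytes(self, n):
        return self.fixed[:n]

stub = StubByteSource(bytes(range(16)))
ints = bytes_to_int64s(stub.randbytes(16))
assert len(ints) == 2  # 16 bytes -> two Int64s
print("both strategies verified")
```

Option 1 gives you the one-number-to-replicate advantage mentioned above; option 2 decouples the test from the PRNG implementation entirely.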
All this said, you must remember that it's called a Pseudo Random Number Generator for a reason. There may be some bias even at that level.
It varies with the complexity of the bug or feature. I recall one project that had a 1.5-week development time estimate... and a 3-month testing estimate. The code change was small, a handful of lines here and there, but it impacted a number of components of an insurance system in a number of ways, so it had to be tested very thoroughly. Another time there was a bug that involved a parenthesis in the wrong place. It took 2 hours to find, 2 seconds to fix, but about a week to test the dozens of scenarios that might have been affected by the change in logic.
In general, I don't worry about the ratio of time spent coding to time spent testing, because there's just no way to estimate it accurately. I find that on some projects a characteristic ratio emerges that becomes more or less standard for that project, but even then it can change later.
Spend as much time as is needed to say with confidence that the code works properly.