Time Difference Between Developing with Unit Tests vs No Tests

productivity, solo-development, unit-testing

I'm a solo developer in a fairly time-constrained work environment where development time usually ranges from 1 to 4 weeks per project, depending on requirements, urgency, or both. At any given time I handle around 3-4 projects, some with timelines that overlap.

Unsurprisingly, code quality suffers. I also have no formal testing; it usually comes down to walking through the system until something breaks. As a result, a considerable number of bugs escape to production, which I then have to fix, and that in turn sets back my other projects.

This is where unit testing comes in. When done right, it should keep bugs to a minimum, especially the ones that escape to production. On the other hand, writing tests can take a considerable amount of time, which doesn't sound good for time-constrained projects such as mine.

The question is: how much of a time difference does writing unit-tested code make over writing untested code, and how does that difference scale as project scope widens?

Best Answer

The later you test, the more it costs to write tests.

The longer a bug lives, the more expensive it is to fix.

The law of diminishing returns means you can test yourself into oblivion trying to guarantee there are no bugs.

Buddha taught the wisdom of the middle path. Tests are good. There is such a thing as too much of a good thing. The key is being able to tell when you are out of balance.

Every line of code you write without tests will cost significantly more to add tests to later than if you had written the tests before writing the code.

Every line of code without tests will be significantly more difficult to debug or rewrite.

Every test you write will take time.

Every bug will take time to fix.

The faithful will tell you not to write a single line of code without first writing a failing test. The test ensures you're getting the behavior you expect. It allows you to change the code quickly without worrying about affecting the rest of the system since the test proves the behavior is the same.
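To make that concrete, here's a minimal sketch of the test-first loop in Python, using the built-in unittest module and a hypothetical parse_price helper (all the names here are made up for illustration):

```python
import unittest

# Step 1 (red): write the test first, against a parse_price that
# doesn't exist yet, run it, and watch it fail.
class TestParsePrice(unittest.TestCase):
    def test_strips_currency_symbol(self):
        self.assertEqual(parse_price("$19.99"), 19.99)

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_price("not a price")

# Step 2 (green): write just enough code to make both tests pass.
def parse_price(text: str) -> float:
    return float(text.lstrip("$"))

# Step 3 (refactor): clean up freely; the tests catch regressions.
if __name__ == "__main__":
    unittest.main()
```

The point isn't the trivial function; it's the order. The failing test comes first, pins down the behavior you expect, and then stays behind as a regression guard when you change the code later.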

You must weigh all that against the fact that tests don't add features. Production code adds features. And features are what pay the bills.

Pragmatically speaking, I add all the tests I can get away with. I ignore comments in favor of tests. I don't even trust code to do what I think it does; I trust tests. But I've been known to throw the occasional Hail Mary and get lucky.

However, many successful coders don't do TDD. That doesn't mean they don't test. They just don't obsessively insist that every line of code have an automated test against it. Even Uncle Bob admits he doesn't test his UI. He also insists you move all logic out of the UI.
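To illustrate what "move all logic out of the UI" can look like in practice, here's a small sketch under the same assumption of hypothetical names (discounted_total, on_checkout_clicked, and the form object are invented for the example):

```python
# Pure business rule: no UI imports, no framework, trivially unit-testable.
def discounted_total(subtotal: float, is_member: bool) -> float:
    """Members get 10% off; everyone else pays the subtotal."""
    return round(subtotal * 0.9, 2) if is_member else subtotal

# Thin UI handler: it only gathers input and displays output.
# There's so little left here that skipping automated tests for it
# is a defensible trade, which is the concession being made above.
def on_checkout_clicked(form) -> None:
    total = discounted_total(float(form.subtotal), form.is_member)
    form.show_total(f"${total:.2f}")
```

All the behavior worth testing lives in discounted_total; the handler becomes a shim you can verify by eye.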

As a football metaphor (that's American football), TDD is a good ground game. Manual-only testing, where you write a pile of code and hope it works, is a passing game. You can be good at either. Your career isn't going to make the playoffs unless you can do both. It won't make the Super Bowl until you learn when to pick each one. But if you need a nudge in a particular direction: the officials' calls go against me more often when I'm passing.

If you want to give TDD a try, I highly recommend you practice before trying to do it at work. TDD done halfway, half-hearted, and half-assed is a big reason some don't respect it. It's like pouring one glass of water into another. If you don't commit and do it quickly and completely, you end up dribbling water all over the table.