Unit Testing – Implementing Unit Testing at a Company That Doesn’t Do It

Tags: tdd, unit-testing

My company's head of software development just "resigned" (i.e., was fired), and we are now looking into improving the development practices at our company. We want to implement unit testing in all software created from here on out.

Feedback from the developers is this:

  • We know testing is valuable
  • But, you are always changing the specs so it'd be a waste of time
  • And, your deadlines are so tight we don't have enough time to test anyway

Feedback from the CEO is this:

  • I would like our company to have automated testing, but I don't know how to make it happen
  • We don't have time to write large specification documents

How do developers get the specs now? By word of mouth or PowerPoint slides. Obviously, that's a big problem. My suggestion is this:

  • Let's also give the developers a set of test data and unit tests
  • That's the spec (a rough sketch of what this could look like follows this list). It's up to management to be clear and quantitative about what it wants.
  • The developers can put in whatever other functionality they feel is needed, and it need not be covered by tests.
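As a rough illustration of "the test data is the spec" (the pricing function, module name, and VAT rates below are purely hypothetical), the analysts would own the table of cases and the developers would wire it into a parametrized test:

    # test_pricing_spec.py -- hypothetical sketch of "the test data is the spec"
    import pytest

    from pricing import gross_price  # hypothetical module and function under test

    # This table is supplied by management / analysts, not by the developers.
    SPEC_CASES = [
        # (net_price, vat_rate, expected_gross)
        (100.00, 0.20, 120.00),
        (50.00, 0.00, 50.00),
        (19.99, 0.19, 23.79),
    ]

    @pytest.mark.parametrize("net, rate, expected", SPEC_CASES)
    def test_gross_price_matches_spec(net, rate, expected):
        assert gross_price(net, rate) == pytest.approx(expected, abs=0.01)

Anything beyond these agreed cases would be left to the developers' judgement, as the last bullet says.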

Well, if you've ever been in a company that was in this situation, how did you solve the problem? Does this approach seem reasonable?

Best Answer

It seems that you are mixing up two different kinds of tests: unit tests and system / acceptance tests. The former operate at a lower level, testing small pieces of code (typically individual methods) that usually reside deep inside the program, not directly visible to the users. The latter test the whole program as its users see it, at a much coarser level of granularity. Thus, only the latter can be based on any form of system specification.
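To make the distinction concrete, here is a minimal sketch in Python with pytest; the module, function, and script names are invented for illustration. The first test exercises one internal method in isolation, the second drives the whole program the way a user would:

    # Unit test: checks one small internal function directly.
    from order_totals import apply_discount  # hypothetical helper deep inside the code

    def test_apply_discount_never_goes_negative():
        # A discount larger than the price must not produce a negative total.
        assert apply_discount(price=10.0, discount=15.0) == 0.0

    # Acceptance test: runs the whole program through its public entry point
    # and checks behaviour the user can actually observe.
    import subprocess

    def test_cli_prints_order_total():
        result = subprocess.run(
            ["python", "order_app.py", "--items", "3", "--unit-price", "2.50"],
            capture_output=True, text=True,
        )
        assert result.returncode == 0
        assert "7.50" in result.stdout

Only the second kind of test maps directly onto a written (or spoken) specification; the first simply pins down what each small piece of code is supposed to do.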

Separating the two issues makes it easier to start moving towards an improved development process. Start writing unit tests as soon as possible, regardless of how the software is (not) specified at a high level. Whenever a developer creates or changes a method, that method does something concrete, which can (and should) be unit tested. In my experience, even changes to high-level requirements do not typically affect these lower-level code blocks dramatically: the code mostly needs to be rearranged, rather than thrown away or rewritten completely. Consequently, most of the existing unit tests will keep running fine.
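For example (the function and behaviour below are hypothetical), a unit test written against a method's observable contract rather than its internals keeps passing even when the implementation is later rearranged:

    # test_slug.py -- the tests pin down observable behaviour only.
    from text_utils import slugify  # hypothetical function under test

    def test_slugify_lowercases_and_joins_with_hyphens():
        assert slugify("Hello World") == "hello-world"

    def test_slugify_strips_surrounding_whitespace():
        assert slugify("  Trim me  ") == "trim-me"

    # Whether slugify() stays a single function or is split into three helpers
    # next month is irrelevant here; these tests only change if the agreed
    # behaviour changes.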

One important precondition for unit testing is that deadlines should not be dictated by management up front, but should instead be based on work estimates from the developers (who, in turn, should include in their estimates the time needed to write proper unit tests). Or, if the deadline is fixed, the scope of delivery should be negotiable. No amount of (mis)management can change the fundamental truth that a given number of developers can only deliver a certain amount of quality work in a given amount of time.

In parallel, start discussing the best way to clarify and document requirements and turn them into high-level acceptance tests. This is a longer process of successive refinement, which can easily take years to reach a better, stable state across a whole organization. One thing seems fairly sure from your description: trying to fix constantly changing requirements by writing large specification documents up front is just not going to work. Instead, move towards a more agile approach, with frequent software releases and demonstrations to users, and lots of discussion about what they actually want. The user has the right to change his/her mind about the requirements at any time; however, each change has a cost (in time and money). The developers can estimate the cost of each change request, which in turn enables the user / product owner to make informed decisions: "Surely, this feature change would be nice... but if it delays the release of this other crucial feature, and costs this much, let's put it into the backlog for now."

Getting users to define acceptance test cases and create test data is a great way to involve them more, and to build mutual confidence between users and developers. It forces both parties to focus on concrete, measurable, testable acceptance criteria, and to think use cases through in far more detail than is typical. As a result, users get to check the current status of development first-hand with each release, and developers get more concrete, tangible feedback about the state of the project. Note, though, that this requires greater commitment from users, and new ways of working, which may be tough to accept and learn.
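One lightweight way to let users own the acceptance test data (the file layout, path, and shipping example below are assumptions for illustration) is a plain CSV of cases, maintained by the users and read by the test suite:

    # test_acceptance_shipping.py -- data-driven acceptance check; the CSV file
    # is owned and edited by the users, not the developers (hypothetical example).
    import csv
    import pytest

    from shipping import quote  # hypothetical function behind a user-visible feature

    def load_user_cases(path="acceptance/shipping_cases.csv"):
        with open(path, newline="") as f:
            return [
                (row["destination"], float(row["weight_kg"]), float(row["expected_cost"]))
                for row in csv.DictReader(f)
            ]

    @pytest.mark.parametrize("destination, weight, expected", load_user_cases())
    def test_shipping_quote_matches_user_expectations(destination, weight, expected):
        assert quote(destination, weight) == pytest.approx(expected, rel=1e-3)

Before every release, the users can add rows for the behaviour they care about, and a failing test tells both sides exactly which expectation is not being met.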
