Productivity – Dealing with High Story Points and Bug Rates

Tags: bug, code-quality, productivity

So it's generally accepted that top-tier programmers can produce an order of magnitude more (and better) code than their more average peers.

It's also generally accepted that the rate at which programmers introduce errors into code is relatively constant across programmers.

Instead, it tends to be affected by the processes used while the code is being written and after it's written. (As I understand it) humans make mistakes at a fairly constant rate – better programmers just notice more of them and fix them more quickly.

  • Note that both of the above assertions come from Code Complete by Steve McConnell – so it's not a matter of differing perspectives.

So I've started to see this recently in my own code. I can hammer out about 4-5x as much code as many of my peers (measured by the story points estimated by the team), with higher quality (based on performance metrics and the number of changes made after check-in). But I still make mistakes. Between better unit tests, a better understanding of what the code is doing, and a better eye for issues during code reviews, I'm not producing 4-5x the number of bugs.

But I'm still producing about twice as many QA-found bugs as the other developers on my team. As you might imagine, this causes some problems with the non-technical folks who track these metrics (read: my boss).

I've tried to point out that I'm producing bugs at half the rate of my peers (and fixing twice as many), but it's a hard sell when there are graphs saying I produce twice as many bugs.
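To make that arithmetic concrete, here's a rough sketch with made-up numbers – only the 4x story points and 2x bugs ratios come from my situation above; the absolute figures are hypothetical:

    # Hypothetical numbers for one release cycle
    peer_story_points = 10
    peer_bugs_from_qa = 3

    my_story_points = 4 * peer_story_points   # ~4x the throughput
    my_bugs_from_qa = 2 * peer_bugs_from_qa   # ...but 2x the raw bug count

    peer_rate = peer_bugs_from_qa / peer_story_points   # 0.30 bugs per story point
    my_rate = my_bugs_from_qa / my_story_points          # 0.15 bugs per story point

    print(f"peer: {peer_rate:.2f} bugs/point, me: {my_rate:.2f} bugs/point")
    # The raw-count graph shows 6 bugs vs 3; the rate shows half as many bugs per point of work.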

So, how do I deal with the fact that increased productivity leads to an increased number of bugs?

Best Answer

I think you're mixing your concerns. And there's nothing on your side that you need to change.

Productivity is a hint at how quickly a project will be completed. Project managers and everybody else like to know when the project will deliver. Higher or faster productivity means we'll see the project deliver sooner.

Rate of bugs isn't tied to productivity but rather to the size of the project. For example, you may have N bugs per Y lines of code. There is nothing within that metric that says (or cares!) how quickly those lines of code are written.
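As a rough sketch of that idea (the density figure below is purely illustrative, not a real benchmark):

    # Hypothetical defect density: bugs injected per 1,000 lines of code
    defects_per_kloc = 15

    project_size_kloc = 50    # size of the project, in thousands of lines
    weeks_to_write_it = 10    # how fast it was written -- note it never appears in the math

    expected_defects = defects_per_kloc * project_size_kloc
    print(expected_defects)   # 750 expected bugs, whether it took 10 weeks or 40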

To tie that together, if you have higher productivity, yes, you'll "see" the bugs being written more quickly. But you were going to have that number of bugs anyway since it's tied to the size of the project.

If anything, higher productivity means you'll have more time at the end of the project to hunt those bugs down, or that the more productive developer will be faster at finding the bugs they created. [1]


To address the more personal aspects of your question.

If your boss is looking strictly at the number of bugs you produce, as opposed to the rate at which you produce them, an educational session is in order. The number of bugs created is meaningless without a backing rate.

To take that example to the extreme, please tell your boss I want double your salary. Why? I have created absolutely no bugs on your project, and I am therefore a far superior programmer to you. What? He has a problem with the fact that I haven't produced a single line of code to benefit your project? Ah. Now we have an understanding of why rate is important.

It sounds like your team has the metrics to evaluate bugs per story point. If nothing else, that's better than being measured by the raw number of bugs created. Your best developers should be expected to create more bugs in total, because they're writing more code. Have your boss throw out that graph, or at least put another series behind it showing how many story points (or whatever measure of business value you use) were delivered alongside the number of bugs. That graph will tell a more accurate story.
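As a minimal sketch of what that combined view might look like (developer names and numbers are invented):

    # Raw bug counts alone vs. bugs normalized by delivered story points
    developers = {
        # name: (story_points_delivered, bugs_found_by_qa)
        "dev_a": (40, 6),   # the highly productive developer
        "dev_b": (10, 3),
        "dev_c": (12, 4),
    }

    for name, (points, bugs) in developers.items():
        print(f"{name}: {bugs} bugs total, {bugs / points:.2f} bugs per story point")

    # Ranked by raw bug count, dev_a looks worst; ranked by bugs per story point, dev_a looks best.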


[1] This particular comment has attracted far more attention than it was intended to. So let's be a bit pedantic (surprise, I know) and refocus on the question.

The root of this question is a manager looking at the wrong thing(s). They are looking at raw bug totals when they should be looking at the bug generation rate relative to the number of tasks completed. Let's not obsess over whether to measure against "lines of code" or story points or complexity or whatever. That's not the question at hand, and those worries distract us from the more important question.

As laid out in the references the OP cites, you can predict a certain number of bugs in a project purely from the size of the project. Yes, you can reduce that number through different development and testing techniques. Again, that wasn't the point of this question. To understand this question, we need to accept that for a project of a given size and development methodology, we'll see a given number of bugs once development is "complete."

So let's finally get back to the comment that a few people completely misunderstood. If you assign comparably sized tasks to two developers, the developer with the higher rate of productivity will complete their task before the other. The more productive developer will therefore have more time available at the end of the development window. That "extra time" (as compared to the other developer) can be used for other tasks, such as working on the defects that will percolate through a standard development process.

We have to take the OP at their word that they are more productive than the other developers. Nothing in those claims implies that the OP, or other highly productive developers, are being slipshod in their work. Pointing out that there would be fewer bugs if they spent more time on the feature, or suggesting that debugging isn't part of this development time, misses what has been asked. Some developers are faster than others while producing comparable or better quality work. Again, see the references the OP lays out in the question.
