I would strongly suggest that management reconsider trying to track things in this level of detail. It's going to be inherently subject to gaming.
I've seen clients attempt to do something similar but at a group level rather than at an individual level. What inevitably happened was that each manager had a strong incentive to get their group's bugs classified as low priority or reclassified as enhancements, and people became very defensive whenever anyone suggested there was a bug in their code. From a metrics standpoint, it looked like code quality was up tremendously month over month, but that was only because anything short of a total system failure was being tagged as an enhancement or a low-priority bug.
What is management trying to achieve by tracking developer effectiveness? If they want to improve overall code quality, it probably makes sense to have a feedback loop from the bug tracking system that tries to determine why a bug made it to post-production and what should be done in the future to prevent similar bugs. It may be that the requirements were unclear or inconsistent. It may be that the developer was sloppy. It may be that the QA department needs to more thoroughly test certain data conditions. It may be that management made a calculated decision to rush some functionality to hit an external deadline.
But if the intention is to improve code quality, this feedback loop has to be reasonably safe. That is, people have to have reason to trust that admitting to reasonable mistakes isn't going to cause problems for them down the line at review time. For example, if the QA department missed a bug because they're supposed to do dozens of poorly documented manual steps to test something and someone innocently missed a step, they have to feel safe in admitting the mistake so that management can identify the fact that they need to allocate time for someone to automate more of the QA process. If the problem is that the project manager made a last-minute change to the requirements which caused the developer to rush a change in and for QA to skimp on the testing, everyone needs to feel safe enough to discuss how they might have handled that situation differently in the future. If the folks that are most willing to admit to making mistakes in order to improve the process are the ones that are getting lower ratings during reviews because everyone else is pointing fingers and denying responsibility, you're not going to have a positive effect on code quality.
If you are going to report some sort of numeric KPI, the most meaningful numbers will have to come from something that the development staff cannot reasonably game, and will have to come from a very coarse level of granularity. The set of numeric indicators that the development team cannot game tends to be very application- and organization-dependent. For example, you may be able to drive some metrics by parsing the application logs to look for certain types of errors (e.g. how many times users hit an error page because of an internal error). You may be able to drive metrics based on things like how quickly the software allowed a user to accomplish a particular task.
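As a rough illustration of the log-parsing idea, here is a minimal sketch that counts internal-error page hits per day. The log line format (date, time, level, message) is an assumption; any real implementation would have to match your own logger's output:

```python
# Hypothetical sketch: derive a coarse quality metric from application logs.
# Assumes each line looks like "2024-01-15 12:03:44 ERROR InternalServerError ..."
# -- the timestamp/level layout is an assumption, adjust to your logger.
from collections import Counter

def internal_errors_per_day(log_lines):
    """Count lines whose log level is ERROR, grouped by date."""
    counts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] == "ERROR":
            counts[parts[0]] += 1  # parts[0] is the date portion
    return dict(counts)

sample = [
    "2024-01-15 12:03:44 ERROR InternalServerError /customers/lookup",
    "2024-01-15 12:07:01 INFO Request completed in 120ms",
    "2024-01-16 09:15:22 ERROR InternalServerError /orders/submit",
]
print(internal_errors_per_day(sample))
```

A trend line from something like this is hard for any one developer to game, which is exactly why it only makes sense at the whole-application level.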
The set of things that the development team cannot game, however, is likely to result in metrics that apply to large swaths of the development organization. Performance-based metrics (e.g. our logistics software has improved inventory turn times 10% this year) are going to require that the entire team is working together, from the developers to the DBAs to the hardware group. So they're not going to be meaningful for tracking how productive an individual or even a group is. But they are going to be the sorts of metrics that you actually want senior management to manage to. Senior management shouldn't care whether Jimmy the Developer is writing buggy code (though Jimmy's immediate manager should be aware). But they should be aware if Jimmy's buggy code is causing the call center's customer lookup operation to waste 10 hours of call center rep time every day, or if some cool-looking new feature is chewing up 50% of the available CPU and slowing the rest of the system down.
Lower-level managers can participate in the QA feedback loop and will interact regularly with the various development teams. It should be clear to them which individual developers are particularly strong and which are particularly weak. It should be clear where the recurring pain points are, whether those pain points are communication or politics or developer strength. Having numeric KPIs at these low levels is going to be exceptionally difficult: they are going to be too easy to game, and they are going to create some perverse incentives. A developer's manager should understand whether a developer who is being assigned a lot of bugs is a weak developer who needs mentoring, a strong developer who is being exceptionally productive, or an unlucky developer who has responsibility for a legacy module that is known to be exceptionally complex or buggy.
The most effective way to get users to write decent and useful bug reports is
- to let them see their reports online...
[System] Thanks for reporting, you can find status of your request here: ...
- ...along with the evaluation and comments from assigned engineer...
[Engineer] Request rejected; the following details are missing: ...
- ...with an option to edit / improve their report.
[User] Requested details are added, please re-evaluate: ...
I would go as far as to claim that it's the only effective way.
Let's face it, the skill of writing effective bug reports comes only with experience. To gain experience, one needs to learn, and learning involves practicing, getting feedback, and improving.
User-editable online bug reports are the most efficient way to teach users to improve.
- The alternatives to the above are 1) to arrange face-to-face learning sessions with users (yeah sure, especially when there are thousands of them spread across the globe). Or 2) to explain things to them over the phone ("look, if you could only see the crap you wrote at line 225..."). What else? Oh, 3) by email, sure: "in the mail you sent us two months ago, you mentioned... no, not that email, you sent us five emails that day, three of them with the subject Re: blue button click, look at the second one, the one with the 10Mb screenshot attached to it... what? you can't find it?"
In our organization we use a bug template that requires the following information when a bug is submitted:
This is the minimum information required. We also ask for screenshots and application log files as appropriate for the bug in question.
We try to make our bug reporters report bugs from the users' perspective as much as possible. That makes it easier to assess a bug's criticality quickly so we can get it prioritized.