I would strongly suggest that management reconsider trying to track things at this level of detail. It's going to be inherently subject to gaming.
I've seen clients attempt something similar, but at a group level rather than an individual level. What inevitably happened was that each manager had a strong incentive to get their group's bugs classified as low priority or as enhancements, and people became very defensive whenever there was a suggestion that there was a bug in their code. From a metric standpoint, it looked like code quality was up tremendously month over month, but that was only because anything that didn't cause a total system failure was being tagged as an enhancement or a low-priority bug.
What is management trying to achieve by tracking developer effectiveness? If they want to improve overall code quality, it probably makes sense to have a feedback loop from the bug tracking system that tries to determine why a bug made it into production and what should be done in the future to prevent similar bugs. It may be that the requirements were unclear or inconsistent. It may be that the developer was sloppy. It may be that the QA department needs to test certain data conditions more thoroughly. It may be that management made a calculated decision to rush some functionality to hit an external deadline.
But if the intention is to improve code quality, this feedback loop has to be reasonably safe. That is, people have to have reason to trust that admitting to honest mistakes isn't going to cause problems for them down the line at review time. For example, if the QA department missed a bug because testing something requires dozens of poorly documented manual steps and someone innocently missed a step, they have to feel safe admitting the mistake so that management can recognize that it needs to allocate time for someone to automate more of the QA process. If the problem is that the project manager made a last-minute change to the requirements, which caused the developer to rush a change in and QA to skimp on the testing, everyone needs to feel safe enough to discuss how they might handle that situation differently in the future. If the folks who are most willing to admit to mistakes in order to improve the process are the ones getting lower ratings during reviews because everyone else is pointing fingers and denying responsibility, you're not going to have a positive effect on code quality.
If you are going to report some sort of numeric KPI, the most meaningful numbers will have to come from something that the development staff cannot reasonably game, and at a very coarse level of granularity. The set of numeric indicators that the development team cannot game tends to be very application- and organization-dependent. For example, you may be able to drive some metrics by parsing the application logs for certain types of errors (e.g. how many times a user landed on an error page because of an internal error); a sketch of that approach follows below. You may be able to drive metrics from things like how quickly the software allowed a user to accomplish a particular task.
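For concreteness, here is a minimal sketch of a log-driven metric. The log file name and line format are assumptions made up for illustration; adapt the regular expression to whatever your application actually writes:

```python
import re
from collections import Counter

# Assumed log format: "YYYY-MM-DD HH:MM:SS ERROR Internal Server Error ..."
# Adjust the pattern to match your application's real log lines.
ERROR_LINE = re.compile(r"^(?P<date>\d{4}-\d{2}-\d{2}) \S+ ERROR Internal Server Error")

def internal_errors_per_day(log_path):
    """Count how many times users hit an internal error, bucketed by day."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = ERROR_LINE.match(line)
            if match:
                counts[match.group("date")] += 1
    return counts

if __name__ == "__main__":
    for day, count in sorted(internal_errors_per_day("app.log").items()):
        print(day, count)
```

Because a metric like this is driven by what actually happened in production rather than by how tickets were triaged, it is much harder for any one team to game.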
The set of things that the development team cannot game, however, is likely to result in metrics that apply to large swaths of the development organization. Performance-based metrics (e.g. our logistics software has improved inventory turn times 10% this year) require the entire team to work together, from the developers to the DBAs to the hardware group. So they're not going to be meaningful for tracking how productive an individual or even a group is. But they are the sorts of metrics that you actually want senior management to manage to. Senior management shouldn't care whether Jimmy the Developer is writing buggy code (though Jimmy's immediate manager should be aware). But they should be aware if Jimmy's buggy code is causing the call center's customer lookup operation to waste 10 hours of call center rep time every day, or if some cool-looking new feature is chewing up 50% of the available CPU and slowing the rest of the system down.
Lower-level managers can participate in the QA feedback loop and will interact regularly with the various development teams. It should be clear to them which individual developers are particularly strong and which are particularly weak. It should be clear where the recurring pain points are, whether those pain points are communication, politics, or developer skill. Having numeric KPIs at these low levels is going to be exceptionally difficult: they are too easy to game and they create perverse incentives. A developer's manager should understand whether a developer who is being assigned a lot of bugs is a weak developer who needs mentoring, a strong developer who is being exceptionally productive, or an unlucky developer who is responsible for a legacy module known to be exceptionally complex or buggy.
I'm writing this from the perspective of Redmine. I haven't used Trac and have only skimmed its source with my eyes rather than actually reading it. At a high level it appears to be similar to Redmine, so much of what I say may apply to it too.
The first question you need to look at is the integration between the bug tracking (and code change) side and the "customer facing" issue tracking. Some environments allow a degree of openness that is not acceptable in others.
- A closed-source software product certainly won't let the code changes get anywhere near the customer-facing bug tracking system. The risk of some code getting out is a worry for many such companies.
- An open-source software product will often host the source code and bug tracking right alongside the customer issue tracking (they are one and the same). This can be seen in both Redmine and Trac.
- An academic setting... well, it's probably somewhere in between.
That question of openness is key - is it a problem if the source gets out? Or, for that matter, is it a problem if people see the discussion between the product's developers in the notes of an issue?
Redmine has the ability to restrict what certain users can read. You can mark individual notes as private, or set rules such as "only people with the programmer role can see the repository."
If this is acceptable, the easiest approach is to integrate the end-user issue tracking and the bug tracking in the same system. You will likely want to separate "bugs" and "issues" from each other, though note that there is a relationship between them: an end-user issue may be caused by a bug, or it may be something the end user doesn't understand.
Issues and bugs have different workflows: an issue may go through something like "new -> confirmed -> waiting for fix -> resolved", while a bug may go through "new -> assigned -> working -> code review -> working -> code review -> deployed -> closed", or something like that. There are likely as many workflows as there are installations. Having a "blocked by" relationship from the issue to the bug allows you to easily relate the two sides; a sketch of creating that relation follows.
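If both sides live in Redmine, the relation can be created programmatically through its REST API. This is only a sketch: the base URL, API key, and issue IDs are placeholders, and it assumes the REST API is enabled in your instance:

```python
import requests

REDMINE_URL = "https://redmine.example.com"  # placeholder
API_KEY = "your-api-key"                     # placeholder

def mark_issue_blocked_by_bug(issue_id, bug_id):
    """Record that a customer-facing issue is blocked by a developer bug."""
    response = requests.post(
        f"{REDMINE_URL}/issues/{issue_id}/relations.json",
        headers={"X-Redmine-API-Key": API_KEY},
        # "blocked" is Redmine's relation type for "blocked by".
        json={"relation": {"issue_to_id": bug_id, "relation_type": "blocked"}},
    )
    response.raise_for_status()

mark_issue_blocked_by_bug(issue_id=101, bug_id=202)  # placeholder IDs
```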
All of that assumed it was acceptable to have one system host both. If it isn't, you will find yourself with two different systems: one that is customer facing, one that is developer facing. This involves a bit more work - somehow, someone needs to bridge the gap between the two, because developers need to be aware of the bugs that end users report in their issue tracking system.
I haven't found a modern bug tracking system that plays nicely with others. Everything tries to be self contained, and when commercial products include bug tracking, they make it easy to migrate to their system and do everything within their own product line. You may find an API that lets you do some things, but for the most part - nope.
The question then becomes: what is the workload, and how do you want to integrate the two (picture two spoiled children sitting with their backs to each other, not wanting to play)? Is it enough just to have a link from one system to the other? Is the volume of new issues low enough that this can be done by hand? Or will you need to explore some way to poll one system and update the other?
Make sure that when you integrate the two systems you use the API rather than going behind the application's back and tinkering with the database directly. There are dangers to that: the table structure of a modern bug tracker is quite large, and it is managed by the application. Going behind the application's back may cause issues with data integrity. So instead of updating and inserting at the database level, use the application's public API (Redmine's API for issues, for example).
The extent of the business logic and the integration points need to be examined before jumping in and doing it. Which way does the data flow (both ways gets very interesting at times), and what data flows? Are you copying issues into bugs once they reach a certain state? How do you make sure you don't create the same bug again and again (you may need to extend the data model of the bug tracking system)? What happens when a bug is marked as closed in the bug tracking system - does that flow back? A sketch of a minimal one-way bridge follows.
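As an illustration of what such a bridge might look like, here is a sketch of a one-way flow that copies a customer ticket into Redmine and avoids duplicates by remembering the source ticket's ID in a custom field. The customer tracker's ticket shape, the custom field ID, the project identifier, and the credentials are all assumptions; the Redmine calls themselves use its public REST API:

```python
import requests

REDMINE_URL = "https://redmine.example.com"  # placeholder
API_KEY = "your-api-key"                     # placeholder
SOURCE_ID_FIELD = 5  # assumed: a custom field added to hold the source ticket ID

def already_bridged(source_id):
    """Check whether a bug for this customer ticket already exists."""
    response = requests.get(
        f"{REDMINE_URL}/issues.json",
        headers={"X-Redmine-API-Key": API_KEY},
        # Filter on the custom field; status_id="*" includes closed bugs
        # so a re-reported ticket doesn't get re-created.
        params={f"cf_{SOURCE_ID_FIELD}": str(source_id), "status_id": "*"},
    )
    response.raise_for_status()
    return response.json()["total_count"] > 0

def bridge_ticket(ticket):
    """Copy one customer ticket (a dict with id/title/body) into Redmine."""
    if already_bridged(ticket["id"]):
        return
    response = requests.post(
        f"{REDMINE_URL}/issues.json",
        headers={"X-Redmine-API-Key": API_KEY},
        json={"issue": {
            "project_id": "internal-bugs",  # assumed project identifier
            "subject": ticket["title"],
            "description": ticket["body"],
            "custom_fields": [
                {"id": SOURCE_ID_FIELD, "value": str(ticket["id"])},
            ],
        }},
    )
    response.raise_for_status()
```

Flowing data back the other way (e.g. updating the customer ticket when the bug closes) is where this gets interesting, and that is exactly the business logic that needs to be thought through up front.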
All in all, if it is not critically bad for some bit of the developers' internal knowledge to leak out, it's probably simplest to use the same system for both and to manage the permissions / customize it so that it works the way you want it to.
Nuances like that matter if you consider the issue tracker a means of communicating the status of problems that were reported in the project. For that purpose, it makes sense to invest some effort into ensuring that a bug report is easy to read and understand.
This situation gets much less confusing if you look at it from the perspective of a tester. If your team doesn't have a tester, imagine one (or better yet, hire one).
Okay, so there was a bug once upon a time; a tester can reproduce it using older releases of your application. (Side note: in the unlikely case that you don't keep copies of older releases, you've got much, much harder problems in your team than obsolete bugs.) The tester can see it and can tell what's wrong, what it is that makes it a bug.
Now you say, "layout has already changed and it is no longer relevant" - the high-brow "no longer relevant" turns, in the tester's mind, into a much simpler statement: the problem is gone.
From a black-box perspective, your situation is pretty simple. There was a problem, it's still reproducible in an older release, and now you claim that the newer release no longer has the problem. For a tester, this boils down to a claim that the bug is fixed and, accordingly, to the need to verify whether that claim is true.
A professional tester would take your older release, confirm how the problem presents there, then take the newer release and check whether it is gone or still there.
Given the above, the most accurate way to handle bugs like the ones you describe would be to close them as resolved: fixed. Of course, it wouldn't hurt to clarify in the comments that the fix occurred as an unintended side effect of the layout change.
One of the customized JIRA instances I worked with on a past project had a resolution "Fixed By Design", used to communicate rather profound changes with lots of consequences, some intentional, some not. For a case like the one you describe, that could also be considered instead of plain "Fixed", since it hints to the ticket reader that the change was more of a side effect than an intentional fix.