I would strongly suggest that management reconsider trying to track things in this level of detail. It's going to be inherently subject to gaming.
I've seen clients attempt something similar, but at a group level rather than at an individual level. What inevitably happened was that each manager had a strong incentive to get their group's bugs classified as low-priority or as enhancements, and people became very defensive whenever there was a suggestion that there was a bug in their code. From a metric standpoint, it looked like code quality was up tremendously month over month, but that was only because anything that didn't cause a total systems failure was being tagged as an enhancement or a low-priority bug.
What is management trying to achieve by tracking developer effectiveness? If they want to improve overall code quality, it probably makes sense to have a feedback loop from the bug tracking system that tries to determine why a bug made it into production and what should be done in the future to prevent similar bugs. It may be that the requirements were unclear or inconsistent. It may be that the developer was sloppy. It may be that the QA department needs to more thoroughly test certain data conditions. It may be that management made a calculated decision to rush some functionality to hit an external deadline.
But if the intention is to improve code quality, this feedback loop has to be reasonably safe. That is, people have to have reason to trust that admitting to reasonable mistakes isn't going to cause problems for them down the line at review time. For example, if the QA department missed a bug because they're supposed to do dozens of poorly documented manual steps to test something and someone innocently missed a step, they have to feel safe in admitting the mistake so that management can identify the fact that they need to allocate time for someone to automate more of the QA process. If the problem is that the project manager made a last-minute change to the requirements which caused the developer to rush a change in and for QA to skimp on the testing, everyone needs to feel safe enough to discuss how they might have handled that situation differently in the future. If the folks that are most willing to admit to making mistakes in order to improve the process are the ones that are getting lower ratings during reviews because everyone else is pointing fingers and denying responsibility, you're not going to have a positive effect on code quality.
If you are going to report some sort of numeric KPI, the most meaningful numbers will have to come from something that the development staff cannot reasonably game, measured at a very coarse level of granularity. The set of numeric indicators that the development team cannot game tends to be very application- and organization-dependent. For example, you may be able to drive some metrics by parsing the application logs to look for certain types of errors (e.g. how many times did a user go to an error page because of an internal error). You may be able to drive metrics based on things like how quickly the software allowed a user to accomplish a particular task.
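To make the log-parsing idea concrete, here's a minimal sketch; the log format and the ERROR_PAGE marker are invented for illustration, so you'd adapt the regex to whatever your application actually writes.

```python
import re
from collections import Counter

# Hypothetical log format for illustration:
#   2024-01-15 10:32:01 ERROR_PAGE /orders/lookup internal_error
ERROR_LINE = re.compile(r"ERROR_PAGE\s+(\S+)\s+internal_error")

def count_internal_errors(log_lines):
    """Count how often users hit an error page, grouped by URL path."""
    counts = Counter()
    for line in log_lines:
        match = ERROR_LINE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample = [
    "2024-01-15 10:32:01 ERROR_PAGE /orders/lookup internal_error",
    "2024-01-15 10:33:10 OK /orders/lookup",
    "2024-01-15 10:35:42 ERROR_PAGE /orders/lookup internal_error",
]
print(count_internal_errors(sample))  # Counter({'/orders/lookup': 2})
```

The point is that the number comes straight out of what the system did in production, so nobody can improve it by reclassifying bugs.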
The set of things that the development team cannot game, however, is likely to result in metrics that apply to large swaths of the development organization. Performance-based metrics (e.g. our logistics software has improved inventory turn times 10% this year) are going to require that the entire team is working together, from the developers to the DBAs to the hardware group. So they're not going to be meaningful for tracking how productive an individual or even a group is. But they are going to be the sorts of metrics that you actually want senior management to manage to. Senior management shouldn't care whether Jimmy the Developer is writing buggy code (though Jimmy's immediate manager should be aware). But they should be aware if Jimmy's buggy code is causing the call center's customer lookup operation to waste 10 hours of call center rep time every day, or if some cool-looking new feature is chewing up 50% of the available CPU and slowing the rest of the system down.
Lower-level managers can participate in the QA feedback loop and will interact regularly with the various development teams. It should be clear to them which individual developers are particularly strong and which are particularly weak. It should be clear where the recurring pain points are, whether those pain points are communication or politics or developer strength. Having numeric KPIs at these low levels is going to be exceptionally difficult: they are going to be too easy to game and they are going to create some perverse incentives. A developer's manager should understand whether a developer that is being assigned a lot of bugs is a weak developer that needs mentoring, a strong developer that is being exceptionally productive, or an unlucky developer that has responsibility for a legacy module that is known to be exceptionally complex or buggy.
I don't know of an automatic feature in Bugzilla for doing this (the Bugzilla API is powerful and may provide some hooks for scripting, but I'm not familiar enough with it to be certain).
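For what it's worth, modern Bugzilla does ship a REST API, and its GET /rest/bug search endpoint is documented. A minimal sketch of querying it; the server URL and product name are placeholders for your installation:

```python
import urllib.parse

def build_bug_query(base_url, product, status="CONFIRMED"):
    """Build the URL for Bugzilla's documented GET /rest/bug search endpoint."""
    params = urllib.parse.urlencode({"product": product, "status": status})
    return f"{base_url}/rest/bug?{params}"

# Actually fetching the matching bugs would then be something like:
#   import requests
#   bugs = requests.get(url).json()["bugs"]
url = build_bug_query("https://bugzilla.example.com", "CoreComponent")
print(url)
```

So even if there's no built-in feature, a small script along these lines could pull the bug lists your workflow needs.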
But I agree that the problem seems to be workflow-related. If you think about it, there is an almost cyclic dependency: to finish the release of the core component we require validation from the client product, but to release the client product the core component must already be released.
Instead of worrying about every client validating a fix, what if you had feature/regression tests for the component itself (updated as needed)? The issue with the core component can block the release of the core component, which in turn can block bugs for the release of each client project.
When the fix is finished and passes its feature tests, the issue bug can be resolved. When all issues have been resolved, and the component has been regression tested, the component could be released and the issue bugs can be closed. When testing is complete for the client project (including validating all of the included changes), the client release bug can be resolved.
Now as far as validation for the client projects go, this is where continuous integration would pay off immensely. The development build of each client product could be taking development builds of the core component and validating that things work, and discovered issues can be logged against the core component while development is ongoing. If an issue is found after the core component is released, then we start the process over with a new issue bug and a new release bug for the core component, blocking the client releases.
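The "CI files an issue against the core component that blocks the client releases" step could itself be scripted against Bugzilla's REST API. A sketch of building the POST /rest/bug payload; the product/component names here are placeholders, and whether the blocks field is accepted at creation time depends on your Bugzilla version (it can always be set afterwards with PUT /rest/bug/{id}):

```python
def build_issue_payload(summary, release_bug_ids, product="CoreComponent",
                        component="General", version="unspecified"):
    """Payload for POST /rest/bug: a new core-component issue that blocks
    the client projects' release bugs.

    Field names follow Bugzilla's REST API; product/component/version are
    placeholder values to be replaced with ones from your installation.
    """
    return {
        "product": product,
        "component": component,
        "version": version,
        "summary": summary,
        "blocks": list(release_bug_ids),  # the client release bugs to block
    }

payload = build_issue_payload("CI found regression in core parser", [101, 102])
```

That way, a failed integration build can automatically open the blocking issue instead of someone cloning bugs into each client project by hand.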
It almost feels like a release early, release often mantra if that makes sense.
Yes, this is a manual process, but maybe it'll reduce the cloning of bugs. I'm not sure how it would jibe with your requirements/regulations, though.
I don't think the API is going to help you in this case. Because the project is private, only someone who is logged in AND has access to the project can do anything with it, including creating tickets.
If you use the GitHub API, you'll have to include credentials (nowadays an access token rather than a username and password) for an account that is a collaborator on the project. Probably not a great idea.
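If you did go that route anyway, the call would look something like this sketch against GitHub's documented POST /repos/{owner}/{repo}/issues endpoint. The owner, repo, and token values are placeholders; note that the token belongs to a collaborator, which is exactly the exposure problem:

```python
import json

def build_issue_request(owner, repo, token, title, body=""):
    """URL, headers, and JSON body for GitHub's documented
    POST /repos/{owner}/{repo}/issues endpoint (token auth)."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues"
    headers = {
        "Authorization": f"Bearer {token}",  # a collaborator's access token
        "Accept": "application/vnd.github+json",
    }
    payload = json.dumps({"title": title, "body": body})
    return url, headers, payload

# Sending it would be, e.g.:
#   import requests
#   requests.post(url, headers=headers, data=payload)
url, headers, payload = build_issue_request(
    "acme", "private-repo", "TOKEN", "Customer-reported bug")
```

Anyone who can read your tool's source or traffic can read that token, which is why the public tracking project below is the safer option.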
Your next option would be to create a public project with a similar name, but without the code. Then you can use that project to track the external customer bugs.