A good measure of testing/tester efficiency

efficiency, metrics, qa, testing

I am about to take part in a discussion with management about measuring our testing efficiency as a QA organization. The main driver is that half of our team is contracted out, and the business would like some metrics on how effective/efficient we are, so that we have baseline data for negotiating the contract parameters in our contractors' service agreement.

I have poked around a little, and most of the opinions I have found on this subject revolve around developer efficiency: lines of code, story points delivered, defects introduced, and so on.

But what about testers? Our testing is mostly requirements-based and a mix of manual, semi-automated, and automated testing (not because we haven't gotten around to automating everything, but because some things simply cannot be automated in our test system).

Best Answer

The number of tests written is useless, and a high number of bugs found can be a sign of poor development rather than of efficient QA.

Automation measures (code coverage, feature coverage, ...) can be good, but I think they help development (as a developer, will I know if I accidentally break something?) more than they help customers (I want to do that and it doesn't work).

Quality is good when customers don't encounter problems, so a good measure of the effectiveness (not the efficiency) of a QA team and its process is the number of bugs found by customers that were not found by QA.
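
This is often tracked as a "defect escape rate". A minimal sketch of the arithmetic, with made-up numbers purely for illustration:

```python
# Hypothetical sketch of the measure described above: bugs found by
# customers divided by all bugs found for a given release.
# The numbers below are invented for illustration only.

def escape_rate(found_by_qa: int, found_by_customers: int) -> float:
    """Share of defects that slipped past QA and reached customers."""
    total = found_by_qa + found_by_customers
    if total == 0:
        return 0.0
    return found_by_customers / total

# Example: QA logged 120 defects before release, customers reported 15 after.
print(f"Escape rate: {escape_rate(120, 15):.1%}")  # -> Escape rate: 11.1%
```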

The main problem with that metric is that there can be a considerable delay between the work being done and the point where you start getting meaningful numbers.