Testing – How Many Regression Bugs from Refactoring is Too Many

refactoring, testing

Recent QA testing has found some regression bugs in our code. My team lead blames recent refactoring efforts for the regressions.

My team lead's stance is "refactor, but don't break too many things", but he wouldn't tell me how many is "too many". My stance is that it's QA's job to find the bugs we can't find ourselves, and that refactoring usually introduces breaking changes. So sure, I can be careful, but I don't knowingly release buggy code to QA; the bugs get through because I don't see them.

If the refactoring was necessary, how many regression bugs should be considered too many?

Best Answer

You are right that refactoring is important: it prevents code rot and keeps the codebase clean.

But good code is not only clean code; it is also correct code, which by definition contains as few bugs as possible (ideally none). The first goal of your code is to produce its expected result, so if your refactoring is introducing bugs, you should consider the net effect of those refactorings.

You should refactor code that is tested. If it isn't, add tests first and then refactor. That way you know you haven't broken anything, and it helps prevent a similar situation in the future.
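For instance, here is a minimal sketch (in Python, runnable with pytest; the function name and the pricing rule are made up for illustration) of pinning down the current behavior with characterization tests before touching the code:

```python
# Hypothetical legacy function about to be refactored; the name and the
# pricing rule are invented for illustration, not taken from the question.
def calculate_total(price, tax_rate, discount):
    return round(price * (1 - discount) * (1 + tax_rate), 2)

# Characterization tests: pin down today's behavior so the refactored
# version can be checked against it.
def test_discount_is_applied_before_tax():
    assert calculate_total(100.0, 0.2, 0.1) == 108.0

def test_zero_price_gives_zero_total():
    assert calculate_total(0.0, 0.2, 0.1) == 0.0
```

If these tests still pass after the restructuring, the refactoring has preserved the behavior they cover; if one fails, you catch the regression before QA does.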

As for refactoring introducing bugs, refactoring should not alter the behavior of a program. I will quote from Wikipedia:

disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior

Sadly, nowadays "refactoring" has come to mean anything from this strict definition to a total rewrite.
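To illustrate the distinction, here is a small sketch (again in Python, with invented names) of a refactoring in the strict sense: the internal structure changes, the external behavior does not.

```python
# Before: the calculation is inlined in the reporting function.
def report_before(orders):
    total = 0.0
    for order in orders:
        total += order["price"] * order["quantity"]
    return f"Total: {total:.2f}"

# After: the same logic extracted into a named helper. Internal structure
# changes, external behavior does not.
def order_value(order):
    return order["price"] * order["quantity"]

def report_after(orders):
    total = sum(order_value(order) for order in orders)
    return f"Total: {total:.2f}"

# The two versions are interchangeable from the caller's point of view.
orders = [{"price": 2.5, "quantity": 4}, {"price": 1.0, "quantity": 3}]
assert report_before(orders) == report_after(orders) == "Total: 13.00"
```

Anything that changes what callers observe (different output, a different API, a rewrite from scratch) is not refactoring in this sense and should be planned and tested as a behavior change.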

I would alter what your team lead said from:

refactor but don't break too many things

To:

refactor and don't break anything

As for finding bugs being QA's job: quality isn't someone else's problem. The goal should be that QA finds nothing; realistically, that they find as little as possible.