Manual testing in the deployment pipeline

Tags: continuous-integration, deployment

Our company still relies heavily on manual testing of software (and this will not change in the coming years). We are trying to improve our build and deployment process with a deployment pipeline that handles both the automated and the manual tests.

The idea is to present the tester with a list of manual tests and have them confirm that each one was executed. The deployment pipeline waits until this approval has been given.
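As a rough illustration of that gate, here is a minimal sketch in Python. The function name, the checklist entries, and the callback-based sign-off are all assumptions for the example, not any specific CI product's API; in a real pipeline the confirmation would come from a web form or CLI prompt rather than a lambda.

```python
def manual_test_gate(tests, confirm):
    """Block the pipeline stage until every manual test is approved.

    `confirm(test_name)` is a callback that returns True once a tester
    has signed off on that test (in practice: a form, prompt, or ticket).
    """
    approved = []
    for test in tests:
        if not confirm(test):
            # Abort the pipeline: a required manual test lacks sign-off.
            raise RuntimeError(f"manual test not approved: {test}")
        approved.append(test)
    return approved

# Hypothetical checklist; `confirm` is stubbed out for the example.
checklist = ["login flow", "order checkout", "invoice PDF layout"]
print(manual_test_gate(checklist, confirm=lambda t: True))
```

The point of the callback is that the pipeline itself stays generic: only the source of the approval (human input) differs from an automated test stage.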

The main problem with this sort of formalised deployment pipeline is that it is not feasible to redo all manual tests after a bugfix. In a fully automated pipeline, you would fix the code, build a new version of the artifact, and send it through all the tests again. We do not have the resources to repeat every manual test; instead, we would only manually test those parts of the software that are "probably" affected by the bugfix.

What would be a good way to handle this in a deployment pipeline?

Best Answer

I can't give you an answer in terms of "whose fault it would be if problems arose in production", because there is only one best practice for minimising errors, and that is simply to retest everything anyway.

Testing the areas "probably" affected by the bugfix is a reasonable compromise, though only the person who made the fix knows which areas those are. You could associate tags with each test and then say, for example, "test everything tagged 'dao' and 'orders'". This gives you some flexibility: you can compartmentalise your program, and therefore your tests, so that when a bugfix is made you can quickly and fairly accurately test the affected areas. It might also be worth writing a short technical document for your fellow developers clarifying which areas of the program correspond to which tags.

Hope that helps!
