Testing Communication – How to Encourage Programmers to Stop Overly Relying on Testers

communication, outsourcing, testing

A friend of mine works at a 200-employee company. The company's business has nothing to do with IT, but it does have an IT department that works, among other things, on the customer-facing website.

The website started with a core principle: programmers have to test the application themselves using automated testing. This quickly became problematic, as programmers were spending too much time writing functional tests with Selenium (and later Cypress.io), either wrestling with complicated interactions, such as drag and drop or file uploads, or trying to figure out why the tests failed randomly. For a while, more than 25% of their time was spent on those tests; moreover, most programmers were pissed off by them, as they wanted to produce actual value, not chase randomly failing tests.
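For a sense of what those tests looked like, here is a minimal Cypress sketch of a file-upload check. It is purely illustrative: the route, selectors, and fixture file are invented, not taken from the actual project, but the final assertion shows the usual source of random failures, a race against an asynchronous request:

    // cypress/e2e/upload.cy.ts — illustrative sketch only; the route,
    // selectors, and fixture file are hypothetical
    describe('document upload', () => {
      it('uploads a file and lists it afterwards', () => {
        cy.visit('/documents');

        // selectFile is built into Cypress since v9.3; older suites needed
        // the cypress-file-upload plugin for the same job
        cy.get('input[type="file"]').selectFile('cypress/fixtures/report.pdf');
        cy.contains('button', 'Upload').click();

        // A classic source of flakiness: the row only appears after an
        // asynchronous request, so the assertion races against the server
        cy.contains('li', 'report.pdf', { timeout: 10000 }).should('be.visible');
      });
    });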

Two years ago, it was decided to pay a company from Bulgaria to do the functional, interface-level tests manually. Things went well, as such testing was pretty inexpensive. Overall, programmers were delivering features faster, with fewer regressions, and everyone was happy.

However, over time, programmers became overconfident. They would write fewer integration or even unit tests, and would sometimes mark features as done without even checking whether they worked in a browser: since the testers will catch the mistakes, why bother? This created two problems: (1) it takes more time to solve an issue when it is discovered by testers a few days later (compared to when it is discovered within minutes by the programmers themselves), and (2) the overall cost of the outsourced testers keeps growing.

Recently, the team lead has tried to change this behavior by:

  • Measuring, per person, how many tickets are reopened by the testers (and sharing the results with the whole team).

  • Congratulating those who performed best, i.e. those with the fewest reopened tickets.

  • Spending time pair programming with those who performed worst, trying to understand why they are so reluctant to test their code, and showing them that it's not that difficult.

  • Explaining that it's much faster to solve a problem right now than to wait several days until the feature gets tested.

  • Explaining that the testers do system tests only, and that the lack of unit tests makes it difficult to pinpoint the exact location of a problem (see the sketch below).
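To make that last point concrete, here is a hedged unit-test sketch in the Jest style; the applyDiscount function and the discount rule are invented for illustration and do not come from the actual codebase:

    // pricing.test.ts — hypothetical example; applyDiscount and the
    // 10% rule are invented, not taken from the real project
    import { applyDiscount } from './pricing';

    test('orders of 100 or more get a 10% discount', () => {
      // A failure here points directly at applyDiscount, whereas a
      // reopened ticket only says "the checkout total is wrong"
      expect(applyDiscount(200)).toBe(180);
    });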

However, it doesn't work:

  • The metrics are not always relevant. One person may work on an unclear or complex ticket which gets reopened several times by the testers because of edge cases, while a colleague works on a ticket so straightforward that there is practically no chance of introducing a regression.

  • Programmers are reluctant to test code because (1) they find it plain boring, and (2) if they don't test, it looks like they deliver features faster.

  • They also don't see why fixing a bug days after developing a feature is an issue. They understand the theory, but they don't feel it in practice. They also believe that even if fixing takes a bit longer this way, it's still cheaper for the company to pay inexpensive outsourced testers than to spend programmers' time on tests. Telling them repeatedly that this is not the case has no effect.

  • As for system vs. unit testing, programmers reply that they don't spend that much time finding the exact location of a problem reported by a tester anyway (which actually seems to be true).

What else can be done to encourage programmers to stop relying so heavily on testers?

Best Answer

It seems to me there is a contradiction in policy here.

On the one hand, the firm has outsourced testing because it consumed programmers' time excessively, and could be done more cheaply by others.

Now, they complain that the programmers are relying on the testers, and should be doing more testing themselves up front.

I can understand from a management point of view that there is perceived to be a happy medium, but in reality the programmers are not engaging in a close analysis, on a case-by-case basis, of how much testing they do themselves and how much they outsource.

To attempt to do so would consume too much time and intellectual effort, and likely without producing accurate results. How would a programmer go about estimating how many bugs a particular piece of code has, and then weighing up the economic benefit of spending his own time searching for them versus letting the testers search for them? It's an absurdity.

Instead programmers are following rules of thumb. Previously the rule was to test extensively. Now the rule is to save precious programmer time, get more code out the door, and leave testing to testers (who are thought to be ten-a-penny).

It's no answer to seek a happy medium, because in practice what will happen is that the anal-retentives will return to spending 25% of their time testing, the cowboys will continue throwing low-quality code out the door, and personality traits like conscientiousness and attention to detail (or the lack thereof) will predominate over economic judgment. If management try to harass both types into conforming more closely to an average which is perceived to be economically ideal, both will probably just end up feeling harassed.

I would also remark in passing that the 25% of time spent on testing to begin with does not strike me as excessive.
