Should developers be involved in testing phases?

development-process, testing

We are using a classical V-shaped development process: requirements, architecture, design, implementation, integration tests, system tests and acceptance.
Testers prepare test cases during the first phases of the project. The issue is that, due to resource issues (*), the test phases run too long and are often cut short because of time constraints (you know project managers… ;)). Developers are doing their unit tests as they should.

So my question is simple: should developers be involved in the test phases, and isn't that too 'dangerous'? I'm afraid it will give the project managers a false sense of better quality because the work has been done, but would the added man-days be of any value? I'm not really confident in developers doing tests (no offense here, but we all know it's quite hard to break in a few clicks what you have built over several days).

Thanks for sharing your thoughts.

(*) For obscure reasons, increasing the number of testers is not an option as of today.

(Just upfront: this is not a duplicate of Should programmers help testers in designing tests?, which is about test preparation rather than test execution, where developer involvement is usually avoided.)

Best Answer

Looking at your question very literally ("involved in"), my only answer is an absolute, unequivocal

YES

Devs should never have the final say on their own code.

But devs should be involved in testing the work of other devs. It does two things:

  1. It brings a developer's insight to testing. This comes from the general case of knowing which APIs are probably being used at a given point, what exceptions may come from those APIs, and how they should be handled. It also helps on a specific project, because devs get far more exposure than QA typically does to the discussions about why something is being done, so they may spot edge cases that QA wouldn't (see the sketch after this list). Bugs spotted by a dev are also likely to be cheaper to fix, because a dev will usually provide more information and much more insight into how to fix them right away.
  2. It gives the dev exposure to parts of the application they might not otherwise see. This will make them better developers for that app in the long run. When I know how my API is consumed, I am much better at anticipating the next thing I should do than if I'm just driving off a spec. Most importantly, if I know the application and its use, I can tell when the spec is wrong before I start coding.
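To make point 1 concrete, here is a minimal sketch of the kind of edge-case test a developer might write because they know an API's failure modes. It is not from the original answer, and every name in it (checkout, GatewayTimeoutError, the mocked gateway) is hypothetical:

```python
# Minimal sketch using Python's built-in unittest.
# All names (checkout, GatewayTimeoutError, the gateway mock) are hypothetical,
# invented purely to illustrate a dev-insight edge case.
import unittest
from unittest.mock import Mock


class GatewayTimeoutError(Exception):
    """Hypothetical exception raised by a payment API on timeout."""


def checkout(gateway, order_total):
    """Hypothetical checkout routine: charge the card, fall back gracefully."""
    try:
        return gateway.charge(order_total)
    except GatewayTimeoutError:
        # A dev knows this API can time out and that the order should be
        # queued for retry rather than dropped.
        return "queued-for-retry"


class CheckoutEdgeCaseTest(unittest.TestCase):
    def test_gateway_timeout_is_handled_not_raised(self):
        gateway = Mock()
        gateway.charge.side_effect = GatewayTimeoutError()
        # Clicking through the UI rarely triggers this path; a developer who
        # knows the API's failure modes can exercise it directly.
        self.assertEqual(checkout(gateway, 42.00), "queued-for-retry")


if __name__ == "__main__":
    unittest.main()
```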

Finally, why wouldn't you use as many eyes as possible? You can rarely afford to go through the hiring and on-boarding process to bring additional QA people on board for crunch time. So, where do you find the extra eyes you need? Or do you try to get through crunch time with the same number of QA people you had all along? Even if the devs spend 20% of their time testing and 80% fixing whatever bugs come up, that's still more eyes on the app than you had before. Automated testing only gives you a certain level of assurance, and it will never be 100%.

http://haacked.com/archive/2010/12/20/not-really-interested-in-lean.aspx
