Can Testers Peer Review Developers’ Design and Code in Agile Scrum?

agile cmmi code-reviews scrum

I am a junior developer at a small business using Scrum / agile development. A long-term goal of ours is to be appraised at CMMI Level 2. We have a team of 3 senior developers who implement user stories and a handful of junior developers in support.

We are moving towards a "three amigos" methodology, especially in regard to separating the duties of development and testing (the third amigo being the product owner / business stakeholders). This way our senior developers can focus on implementation and our junior developers can serve as impartial testers.

We are using peer reviews as a specific practice for verifying work products, such as source code. This paper, http://repository.cmu.edu/cgi/viewcontent.cgi?article=1208&context=sei, describes on p. 19 an optimal peer review process in which reviews of design and code consume 50-65% of the time spent designing and coding.

My question is this: is it appropriate for our testers to peer review the developers' design and code? An advantage would be that the developers can spend more time implementing user stories. A disadvantage would be that the testers may sacrifice their objective/impartial view of the system, since peer reviewing creates a shared ownership of the code.

Best Answer

You raise several issues in your question, and each deserves some thought, particularly if the CMMI appraisers might be coming to your cubicle or office to ask you about them. Been there, done that; it can go well if you are prepared, but... well, you want to be prepared. Issues from your description include:

  1. Eligibility: Are junior developers assigned to support as testers eligible to be reviewers?
  2. Certifiability: Agile + CMMI + code reviews - how do you stay agile and meet CMMI guidelines?
  3. Objectivity: Can code reviewers continue to be objective testers if they have seen the code?

Eligibility

  • Do you know the user story? Has it been elaborated into a use case diagram and into individual use cases with alternate flows? If not, the senior developers may not have completed enough documentation to provide adequate reference material for the inspection to begin.

  • If the reviewed work product is code, do you know the language, and the coding standard?

  • If you know these, you have some great tools to give feedback.

As testers,

  • If you are not involved in reviews of the user stories and use cases, how do you make your test plans for black box and other functional testing?

  • Similarly, if you are involved in glass-box testing for statement and/or decision coverage, how do you write your test cases without reading the code?

    • Agile to me is not very agile if it does not include some degree of TDD (Test-Driven Development) and unit testing. Do you have a role in running unit tests, or perhaps even in writing them? Potentially, this involves not only reading the code but modifying it as well.
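To make the unit-test role concrete, here is a minimal TDD-style sketch; the `add` function and its test are my own toy example, and I assume plain `assert`-based tests (pytest will collect a `test_` function like this automatically):

```python
def add(a, b):
    """Toy function under test (hypothetical example)."""
    return a + b

def test_add():
    # In TDD these assertions are written first, then the code is
    # made to pass them.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

test_add()
print("all tests passed")
```

A tester who writes or maintains tests like this is necessarily reading, and sometimes modifying, the production code.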

Ultimately, my bottom line on your participating in code reviews is that they are a great way for experienced developers to share their judgment and domain knowledge with smart but less experienced members of the team, and for less experienced but perhaps more recently trained developers to share what they learned in school or at other companies. A win-win, provided the developers respect each other.

Certifiability

CMMI and Scrum/agile may sometimes be at odds with one another, particularly in the recommendation that a high proportion (50-65%) of time be spent in design and code reviews. This sounds like a very process-heavy approach (i.e., not very agile), and many teams would look for automation, tools, and ways to do it faster.

Scrum-based teams I have been on used tool-facilitated code reviews in which the author identified the commits and the tool showed us the code. Our tool chain included svn and the Atlassian tools Jira, Fisheye, and Crucible. Crucible tracked the time spent in review and permitted in-line annotations, either free-form notes or classified defects. The author could respond to the notes and make changes that would also show up in the inspection. To close the inspection, the moderator had to confirm that all review issues were resolved. Sometimes there was a face-to-face meeting with the inspectors, but not always. The tool did a great job of streamlining the mechanics of the inspection.

Pair programming covers some of the same ground as peer reviews. Two pairs of eyes can catch many issues as they occur, and pairing can increase velocity and transfer knowledge between developers. There may be some groupthink, where one developer talks the other into ideas that are wrong, but mostly the interaction produces alternatives superior to what either would discover alone. Working on your own, you can easily make an interface or other design choice that proves unfortunate later.

Objectivity

Testers are not the only ones who need to stay objective about the product for verification and validation. Agile folks who use Scrum tend to convey a lot of information verbally, because they value "working software over comprehensive documentation". Will you get your test criteria from customers? From story cards?

Bottom line: you will probably get at least some of your test criteria from developers. If it makes you feel better, gather all you can from non-code assets first (including the draft user's manual, which had better be in progress in parallel with development), then turn to the code.

Example of a CMMI-Approvable Peer Review Process

Older methodologies like Fagan Inspections did indeed require the large time commitments mentioned earlier. For the duration of the inspection at least, Fagan teams organized into four roles: moderator, author, reader, and tester.

  • The moderator supervises the inspection to ensure it follows the methodology and the coding or documentation standards, records defects for follow-up with the author, and provides metrics to the QA team for quantitative analysis aimed at improving future defect detection.
  • The author wrote the code or document being inspected. During the inspection, his or her job was to listen very closely. After the inspection, the author fixed defects and followed up with the moderator.
  • The reader paraphrased the document or code to the team during the inspection.
  • The tester was usually someone who would receive the work product as a hand-off. Often, if a system engineer was the author, a developer would be the tester; for a developer-author, the inspection-tester would be a software tester; and for a tester who wrote a test plan, the inspection-tester would sometimes be a system engineer.

Advantages of this approach are that it establishes significant communication between key participants as work products are handed off across the software development life-cycle. I think it did a good job of preventing the tester (and the other inspectors) from being drawn into groupthink.

  • Each inspector prepared individually.
  • The reader, not the author, presented the work product.
  • If the author had something in their head that was not in the work product, they had a clear chance to confront the problem in front of three inspectors.
  • The moderator had the authority to encourage thinking and to protect against any battle of egos, and, armed with the coding standards, the system documentation, and experience, could serve as the conscience of the team.

Not enough time for code reviews / inspections?

To the extent you are involved in defining the code review process, you might take a look at this article. I do not endorse the product the author sells, but I like his description of the problem and of some alternatives.

http://www.methodsandtools.com/archive/archive.php?id=66

His conclusion seems to be that Fagan inspections are the gold standard, but because they are so time consuming, his company ended up only inspecting about 1% of their code.

I have an ad-hoc technique that I call "quick-hit code reviews". It requires the commitment of only one person: the software technical lead simply reads every code change committed to source control when he comes in first thing in the morning. This works well if the rest of the team is on a different continent, or if the lead is an earlier or later riser than the rest of the team. The review rate is 400-500 lines of code per hour (vs. roughly 100 lines per hour for a Fagan inspection), so it is less than thorough and not much is recorded.
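At those rates, the time difference is easy to quantify; this is just back-of-the-envelope arithmetic, and the function name is mine:

```python
def review_hours(lines_of_code, lines_per_hour):
    """Hours needed to review a change set at a given reading rate."""
    return lines_of_code / lines_per_hour

# Reviewing a 1,000-line change set:
quick_hit = review_hours(1000, 450)  # mid-range quick-hit pace: ~2.2 hours
fagan = review_hours(1000, 100)      # Fagan inspection pace: 10 hours
```

The roughly 4-5x difference in throughput is what makes the quick-hit review affordable as a daily habit.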

When a fault is found, the action is an email to the author, either about a fix they should make or, if they are in a different time zone or away for the day, about a fix the lead has made himself, verified with a private build and smoke test. I don't believe this is a CMMI-certifiable methodology. It depends heavily on the ability of the lead and their knowledge of the code, and it is little help in catching subtle problems (unless they resemble your own past mistakes). I used it when I had a new team on some code that was hard to maintain, and I believe the time spent was paid back in reduced testing and reduced latency between work and rework.
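For what it's worth, the morning read-through can be scripted. A minimal sketch, assuming `git` is on the path and the repository is local (the function name `commits_since` is mine; teams on svn, as mine was, would use `svn log` instead):

```python
import subprocess

def commits_since(hours, repo="."):
    """Return one-line summaries of commits made in the last `hours` hours."""
    result = subprocess.run(
        ["git", "log", f"--since={hours}.hours", "--oneline"],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

# The lead walks this list each morning, reads each diff,
# and emails the author about any fault found.
```

The point is only to gather the overnight commits in one place; the actual review is still a human reading the diffs.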
