You raise several issues in your question, and each deserves some thought, particularly if the CMMI inspectors might be coming to your cubicle or office to ask you about them. Been there, done that; it can go well if you are prepared, but... well, you want to be prepared. Issues from your description include:
- Eligibility: Are junior developers assigned to support as testers eligible to be reviewers?
- Certifiability: Agile + CMMI + code reviews. How do you stay agile and meet CMMI guidelines?
- Objectivity: Can code reviewers continue to be objective testers if they have seen the code?
Eligibility
Do you know the user story? Has it been elaborated into a use case diagram and into individual use cases with alternate flows? If not, the senior developers might not have completed enough documentation to provide adequate reference material for the inspection to begin.
If the reviewed work product is code, do you know the language and the coding standard?
If you know these, you have some great tools to give feedback.
As testers:
- If you are not involved in reviews of the user stories and use cases, how do you make your test plans for black box and other functional testing?
- Similarly, if you are involved in glass box testing for statement and/or decision coverage, how do you write your test cases without reading the code?
- Agile to me is not very agile if it does not include some degree of TDD (Test Driven Development) and unit test. Do you have a role running unit tests or perhaps even writing them? Potentially, this involves not only reading the code, but modifying it as well.
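If you do pick up unit-test duties, the cycle is small: write a failing test, make it pass, refactor. Here is a minimal sketch using Python's built-in unittest; the function and its rules are purely illustrative, not from any particular project:

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent. The rules here are illustrative only."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # In TDD these tests are written first and fail until apply_discount exists.
    def test_basic_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

if __name__ == "__main__":
    unittest.main(exit=False)
```

A tester who writes or runs tests like these is necessarily reading (and sometimes touching) the code under test, which is exactly the point above.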
Ultimately, my bottom line on your joining code reviews is that they are a great way for experienced developers to share their judgement and domain knowledge with smart but less experienced members of the team, and for less experienced but more recently trained developers to share what they learned in school or from recent experience at other companies. A win-win, provided the developers respect each other.
Certifiability
CMMI and scrum/agile may sometimes be at odds with one another, particularly in the part that recommends a high proportion (55-60%) of time be spent in code reviews. That is a very process-heavy approach (i.e., not very Agile), and many teams would look for automation, tools, and ways to do it faster.
Scrum-based teams I have been on used tool-facilitated code reviews in which the author identified the commits and the tool showed us the code. Our tool chain included svn and the Atlassian tools Jira, Fisheye, and Crucible. Crucible tracked the time spent in review and permitted annotations in the code, either plain notes or notes carrying defect classifications. The author could respond to the notes and make changes that would also show up in the inspection. To close the inspection, the moderator needed to confirm that the review issues were resolved. Sometimes there was a face-to-face meeting with the inspectors, but not always. The tool did a great job of streamlining the mechanics of the inspection.
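The gating rule in that workflow (the moderator can only close the review once every flagged issue is resolved) is easy to model. This is a hypothetical sketch of the idea, not Crucible's actual data model or API:

```python
class Annotation:
    """A reviewer note on the code; may carry a defect classification."""
    def __init__(self, comment, defect_class=None):
        self.comment = comment
        self.defect_class = defect_class      # e.g. "logic" or "style"; None = plain note
        self.resolved = defect_class is None  # plain notes need no resolution

    def resolve(self):
        self.resolved = True

class Review:
    def __init__(self, commits):
        self.commits = commits    # commit ids identified by the author
        self.annotations = []
        self.closed = False

    def annotate(self, comment, defect_class=None):
        ann = Annotation(comment, defect_class)
        self.annotations.append(ann)
        return ann

    def close(self):
        """Moderator action: only succeeds once all defects are resolved."""
        if any(not a.resolved for a in self.annotations):
            raise RuntimeError("open review issues remain")
        self.closed = True
```

For example, `review = Review(["r1042"])` followed by `ann = review.annotate("off-by-one in loop bound", "logic")` leaves the review un-closable until `ann.resolve()` is called; the same check is what the tool enforced for the moderator.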
Pair programming covers some of the same ground as peer reviews. Two pairs of eyes can do a lot to catch issues as they occur, and pairing can increase velocity and transfer knowledge between developers. While there may be some group-think, where one developer talks the other into ideas that are wrong, mostly the interaction develops alternatives superior to what either developer would discover alone. Also, if you are on your own, you can pretty easily make an interface or other design choice that will prove unfortunate later.
Objectivity
Testers are not the only ones who need to stay objective about the product for verification and validation. Agile folks who use Scrum often convey a lot of information verbally, because they value "working software over comprehensive documentation". Will you get your test criteria from customers? From story cards?
The bottom line is that you will probably get at least some of your test criteria from developers. If it makes you feel better, gather all you can from non-code assets (including the draft user's manual that had better be in progress in parallel with development), then get the rest from the code.
Example of a CMMI-Approvable Peer Review Process
Older methodologies like Fagan inspections did indeed demand the large time commitments mentioned earlier. For the duration of the inspection at least, Fagan teams organized into four roles: moderator, author, reader, and tester.
- The moderator supervises the inspection to ensure it follows the methodology and the coding or documentation standards, records defects to be followed up with the author, and provides metrics to the QA team for quantitative analysis to improve future defect detection.
- The author wrote the code or document that was inspected. During the inspection, his/her job was to listen very closely. After the inspection, the author fixed defects and followed up with the moderator.
- The reader paraphrased the document or code to the team during the inspection.
- The tester was usually someone who would receive the work product as a hand-off. Often, if a system engineer was the author, a developer would be the tester. For a developer author, the inspection tester would be a software tester. For a tester who wrote a test plan, the inspection tester would sometimes be a system engineer.
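One property of the process is worth making explicit: the four roles are held by four different people, so the author never moderates, reads, or tests their own work. A tiny sketch of that constraint (the names and the helper are hypothetical, purely for illustration):

```python
def assign_roles(moderator, author, reader, tester):
    """Return a role map for a Fagan-style inspection, enforcing that the
    four roles are held by four different people. In particular the author
    must not moderate, read, or test their own work product."""
    assignment = {
        "moderator": moderator,
        "author": author,
        "reader": reader,
        "tester": tester,
    }
    people = list(assignment.values())
    if len(set(people)) != len(people):
        raise ValueError("each inspection role must be held by a different person")
    return assignment
```

So `assign_roles("Dana", "Avi", "Lee", "Kim")` succeeds, while asking the author Avi to also moderate raises an error, which is the separation the methodology relies on.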
Advantages of this approach are that it establishes significant communication between key participants as work products are handed off across the software development life-cycle. I think it did a good job preventing the tester (and other inspectors) from being drawn into group-think.
- Each inspector prepared individually.
- The reader, not the author, presented the work product.
- If the author had something in their head that was not in the work product, they had a great chance to confront the problem in front of three inspectors.
- The moderator had the authority to encourage thinking and to defuse any battle of egos, and, armed with the coding standards, usually the system documentation, and experience, could be the conscience of the team.
Not enough time for code reviews / inspections?
To the extent you are involved in defining the code review process, you might take a look at this article. I do not endorse the product the author sells, but I like his description of the problem and some of the alternatives:
http://www.methodsandtools.com/archive/archive.php?id=66
His conclusion seems to be that Fagan inspections are the gold standard, but because they are so time consuming, his company ended up only inspecting about 1% of their code.
I have an ad hoc technique that I call "quick-hit code reviews". It requires the commitment of one person: the software technical lead simply reads every code change committed to source control when he comes in first thing in the morning. This works well if part of the team is on a different continent, or if the lead is an early or late riser while the rest of the team is not. The review rate is 400-500 lines of code per hour (vs. roughly 100 lines per hour for Fagan), so it is less than thorough, and not much is recorded.
When faults are found, the action is an email to the author, either about a fix they should make or, if they are in a different time zone or away for the day, about a fix the lead makes, followed by a private build and smoke test. I don't believe this is a CMMI-certifiable methodology. It depends heavily on the ability of the lead and their knowledge of the code, and it is little help in catching subtle problems (unless they are like your own past mistakes). I used it when I had a new team on some code that was hard to maintain. I believe the time spent was paid back in reduced testing and reduced latency between work and rework.
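The rate difference is what makes the technique sustainable for one person. A quick back-of-the-envelope calculation, using the rates quoted above (roughly 100 lines/hour for Fagan, 400-500 for quick-hit) and an assumed, purely illustrative week's worth of commits:

```python
def review_hours(lines_of_code, loc_per_hour):
    """Hours needed to review a change set at a given review rate."""
    return lines_of_code / loc_per_hour

changed_loc = 2000  # assumed weekly change volume, illustrative only

fagan = review_hours(changed_loc, 100)       # thorough Fagan-style pace
quick_fast = review_hours(changed_loc, 500)  # quick-hit, upper rate
quick_slow = review_hours(changed_loc, 400)  # quick-hit, lower rate

print(f"Fagan: {fagan:.0f} h, quick-hit: {quick_fast:.0f}-{quick_slow:.0f} h")
# → Fagan: 20 h, quick-hit: 4-5 h
```

Half a working week versus an hour a morning; that gap is why, as in the article above, teams end up Fagan-inspecting only a small fraction of their code.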
Best Answer
Look at how Facebook does it with their own tool, called Phabricator: http://phabricator.org/
They basically commit on a per-issue basis, and for each issue the code is shown, to be reviewed by someone. The code doesn't go into their main repository until the reviewer says it's OK to do so.
I guess it makes it more fun.
Also, perhaps each piece of code should be assigned to two people: one who writes it and one who reviews it.
Then again, perhaps your teammates don't believe in this whole review thing.
Personally, lacking reviewers, I used unit tests for lower-level functions and "the janitor test" for all the rest. The janitor test is called that because even the janitor should be able to understand your code.
I usually removed some minor parts, like block or function scope brackets, visibility notations, sometimes even types, and showed the result to managers, domain experts, teammates, whoever requested the code: "is this what you want?"
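In Python there are no braces or visibility keywords to delete, but you can mechanically strip the type annotations the same way. A sketch of that idea using the standard library ast module (ast.unparse needs Python 3.9+; the sample function is made up):

```python
import ast

class StripAnnotations(ast.NodeTransformer):
    """Remove type annotations so only the 'janitor-readable' logic remains."""
    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        node.returns = None                       # drop the "-> T" return type
        for arg in node.args.args + node.args.kwonlyargs:
            arg.annotation = None                 # drop parameter types
        return node

    def visit_AnnAssign(self, node):
        # "x: int = 5" becomes "x = 5"; a bare "x: int" declaration is dropped.
        if node.value is None:
            return None
        return ast.copy_location(
            ast.Assign(targets=[node.target], value=node.value), node)

def janitor_view(source):
    tree = StripAnnotations().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))

sample = (
    "def total(prices: list[float]) -> float:\n"
    "    s: float = 0.0\n"
    "    for p in prices:\n"
    "        s += p\n"
    "    return s"
)
print(janitor_view(sample))
```

For the sample, this prints the same loop with `list[float]`, `-> float`, and `s: float` reduced to plain, annotation-free code you can put in front of a domain expert.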
Also, going there personally and not leaving until reviewing is done helps.
Or, in case you're not fine with the team, or they're not fine with you, well, "if you can't change the company, change companies"...