We used to do manual code reviews (i.e. no special tools) and this was the best method we found. Our development department doesn't have team foundation server, so I can't comment on that.
- All work is tied to a bug or an agile story that has a unique ID. When the work is checked in (or shelved, which is submission without actual check in), the description will always have this identifier.
- In a stand-up or over e-mail the reviewer would be notified that bug ### or story ### is ready for review.
- They do a source-control diff between the new and previous versions of the files and copy/paste the diff output into a Word document. We have a code review Word document template preset with two levels of headings: level 1 for the module, level 2 for the file name. It also defines a formatting style for actual code (8pt Consolas). This step was obviously the biggest pain, but with the Word template and a Ctrl-A, Ctrl-C, Ctrl-V sequence, it's not that many clicks.
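The diff-collection step can be sketched on the command line. This is only an illustration: the answer doesn't say which source control system the team used, so the sketch assumes Git, and the bug ID `BUG-1234` and file `module_a.c` are made-up names.

```shell
# Illustrative sketch of producing a per-file diff for the review document.
# Assumes Git; the original team's source control system is not specified.
set -e
repo=/tmp/review-doc-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q .
git config user.email reviewer@example.com
git config user.name "Demo"

# Baseline version of the file, then the change under review,
# checked in with the bug ID in the description as the process requires.
printf 'old line\n' > module_a.c
git add . && git commit -qm 'baseline'
printf 'new line\n' > module_a.c
git commit -qam 'BUG-1234: fix module_a'

# Per-file diff of the change, ready to paste under the
# module/file headings of the review template.
git diff HEAD~1 HEAD -- module_a.c > BUG-1234-module_a.diff
```

With one diff file per reviewed source file, populating the Word template really is just a copy/paste loop.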
- Now that the document is in Word, the reviewer can highlight the code in question and add comments using Word's review system.
- All documents are stored in a document repository and labeled with corresponding bug or story number.
- Once the document is done, the link to the repository is shared with the original developer. Then we'd either have a code review meeting, if the changes were significant enough to need discussion, or just leave it up to the original developer to go through the comments on his own time.
After trying numerous manual methods, this was the one we found to have the least amount of overhead while allowing us to actually review every change.
However, our engineering team just rolled out Review Board, and although I haven't used it much yet, so far I'm loving what I've seen. It has all the flexibility of our Word docs, but no more copy/pasting and manually fixing formatting. As an added bonus, it keeps a permanent archive of all comments, so you can go back years if you ever need to. It also allows you to do diffs of diffs, which is great when you want to review a code review. We found this part very difficult with manual procedures: you can't see what was changed in response to the first review, so you end up redoing the entire thing.
Although you did say you don't want to use tools, I'd strongly urge you to consider Review Board. It is open source and completely free, so you can roll it out for yourself and the five people you may be working with. The rest of your company doesn't have to use it if they don't want to, and you don't need to worry about getting any purchase approvals.
Update to comments from the question:
On my team, we have people in NY, CT, TX, Poland, and India. What makes things even more interesting is that an extremely high percentage of the team doesn't know the product or technology all that well, so very few of us do most of the review. So yeah, the senior devs are definitely busy. In the process I outlined, the primary reviewer does an initial walk-through of the code independently, on his own schedule. Afterwards, we schedule a meeting where the primary reviewer walks the coder through his comments. The meeting can include other reviewers, but they are considered secondary and are not obligated (though not discouraged) to review every file or make comments.
I agree with others' comments that having the final meeting in real time, even if you have to use web conferencing, is much better for knowledge transfer and for helping your new guys understand the code, so that they'll start producing things that make your senior devs even busier. But again, that depends on the volume and type of comments; sometimes (very infrequently) the comments are so minor that the meeting is skipped.
I can also relate when you say it's hard to roll out new tools. I work for a very large corporation, and typically there are so many people involved that, even setting aside the fact that no purchases are ever approved, there are so many interests/agendas that nothing is ever agreed on. What's nice about Review Board is that you can skip all that and just start using it with your small team, and if you have to (if it really comes to that), you can host the web services on your own dev machine.
You raise several issues in your question, and each deserves some thought, particularly if the CMMi inspectors might be coming to your cubicle or office to ask you about them. Been there, done that, it can be good if you are prepared, but... Well, you want to be prepared. Issues from your description include:
- Eligibility: Are junior developers assigned to support as testers eligible to be reviewers?
- Certifiability: Agile + CMMi + code reviews - how do you stay agile and meet CMMi guidelines?
- Objectivity: Can code reviewers continue to be objective testers if they have seen the code?
Eligibility
Do you know the user story? Has it been flowed into a use case diagram and into individual use cases with alternate flows? If not, the senior developers might not have completed their documentation enough to provide adequate reference material for the inspection to begin.
If the reviewed work product is code, do you know the language and the coding standard?
If you know these, you have some great tools to give feedback.
As testers, if you are not involved in reviews of the user stories and use cases, how do you make your test plans for black-box and other functional testing? Similarly, if you are involved in glass-box testing for statement and/or decision coverage, how do you write your test cases without reading the code?
- Agile to me is not very agile if it does not include some degree of TDD (Test Driven Development) and unit test. Do you have a role running unit tests or perhaps even writing them? Potentially, this involves not only reading the code, but modifying it as well.
Ultimately my bottom line for you being on code reviews is that it is a great way for experienced developers to share their judgement and domain knowledge with smart but less experienced members of the team, and for less experienced but potentially more recently trained developers to share what they learned in school or from recent experience at other companies. A win-win provided both developers respect each other.
Certifiability
CMMi and scrum/agile may sometimes be at odds with one another, particularly where guidelines recommend that a high proportion (55-60%) of time be spent in code reviews. This sounds like a very process-heavy approach (i.e., not very Agile), and many teams would look for automation, tools, and ways to do it faster.
Scrum-based teams I have been on used tool-facilitated code reviews, in which the author identified the commits and the tool showed us the code. Our tool chain included svn and the Atlassian tools Jira, Fisheye, and Crucible. Crucible tracked the time spent in review and permitted annotations in the code, either plain notes or comments with defect classifications. The author could respond to the notes and make changes that would also show up in the inspection. To close the inspection, the moderator needed to confirm that the review issues were resolved. Sometimes there was a face-to-face meeting with the inspectors, but not always. The tool did a great job of streamlining the mechanics of the inspection.
Pair programming covers some of the same ground as peer reviews. Two pairs of eyes can do a lot to catch issues as they occur, and can increase velocity or transfer knowledge between developers. While there may be some group think where one developer talks the other developer into ideas that are wrong, mostly the interaction develops alternatives that are superior to what one would discover. Also, if you are on your own, you can pretty easily make an interface or other design choice that will prove unfortunate later.
Objectivity
Testers are not the only ones who need to stay objective about the product for verification and validation. Agile folks who use Scrum generally convey a lot of information verbally, because they value "working software over comprehensive documentation". Will you get your test criteria from customers? From story cards?
The bottom line is that you will probably get at least some of your test criteria from developers. If it makes you feel better, gather all you can from non-code assets (including the draft user's manual, which had better be in progress in parallel with development), then get the rest from the code.
Example of a CMMi Approvable Peer Review Process
Older methodologies like Fagan inspections did indeed require the large time commitments mentioned earlier. For the duration of the inspection at least, Fagan teams organized into four roles: moderator, author, reader, and tester.
- The moderator supervises the inspection to ensure it follows the methodology and the coding or documentation standards, records defects for follow-up with the author, and provides metrics to the QA team for quantitative analysis toward future defect detection.
- The author wrote the code or document that was inspected. During the inspection, his/her job was to listen very closely. After the inspection, the author fixed defects and followed up with the moderator.
- The reader paraphrased the document or code to the team during the inspection.
- The tester was usually someone who would receive the work product as a hand-off. Often, if a system engineer was the author, a developer would be the tester. For a developer author, the inspection tester would be a software tester. For a tester who made a test plan, sometimes the inspection tester would be a system engineer.
Advantages of this approach are that it establishes significant communication between key participants as work products are handed off across the software development life-cycle. I think it did a good job preventing the tester (and other inspectors) from being drawn into group-think.
- Each inspector prepared individually.
- The reader, not the author, presented the work product.
- If the author had something in their head that was not in the work product, he had a great chance to confront the problem in front of three inspectors.
- The moderator had the authority to encourage thinking and protect against any battle of egos, and, armed with the coding standards, the system documentation, and experience, could be the conscience of the team.
Not enough time for code reviews / inspections?
To the extent you are involved in defining the code review process, you might take a look at this article. I do not endorse the product he sells, but I like his description of the problem and some alternatives.
http://www.methodsandtools.com/archive/archive.php?id=66
His conclusion seems to be that Fagan inspections are the gold standard, but because they are so time consuming, his company ended up only inspecting about 1% of their code.
I have an ad-hoc technique that I call "quick-hit code reviews". It requires the commitment of one person: the software technical lead simply reads every code change committed to source control when he comes in first thing in the morning. This works well if part of the team is on a different continent, or if the lead is an early or late riser while the rest of the team is not. The review rate is 400-500 lines of code per hour (vs. 100 lines per hour for Fagan), so it is less than thorough, and not much is recorded.
When faults are found, the action is to send an email to the author, either about a fix they should make or, if they are in a different time zone or away for the day, about a fix the lead makes himself, followed by a private build and smoke test. I don't believe this is a CMMi-certifiable methodology. It depends highly on the ability of the lead and their knowledge of the code, and it is little help in catching subtle problems (unless they resemble your own past mistakes). I used it when I had a new team on some code that was hard to maintain. I believe the time spent was paid back in reduced testing and reduced latency between work and rework.
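The morning pass itself is just "show me everything committed since yesterday". A minimal sketch, assuming Git (the repo location, file, and commit messages below are made up for illustration; an svn shop would use `svn log` and `svn diff` the same way):

```shell
# Hypothetical "quick-hit" morning review, assuming Git.
set -e
repo=/tmp/quick-hit-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name "Overnight Dev"

# Simulate two changes committed overnight by the remote team.
echo 'int x = 1;' > f.c && git add . && git commit -qm 'overnight change 1'
echo 'int y = 2;' >> f.c && git commit -qam 'overnight change 2'

# The lead's first task in the morning: skim the last day's commits,
# then read the full diffs at 400-500 lines/hour.
git log --since='1 day ago' --oneline > morning-summary.txt
git log --since='1 day ago' -p > morning-review.txt
```

Nothing here is recorded beyond the emails the lead sends, which is exactly why the technique is cheap and also why it is unlikely to satisfy a CMMi auditor.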
Types of reviews
There is no one true way to do peer reviews, and there are many ways to judge whether code is of sufficiently high quality. Clearly there is the question of whether it's buggy, or whether it has solutions that don't scale or that are brittle. Conformance to local standards and guidelines, while perhaps not as critical as some of the other criteria, is also part of what makes high-quality code.
Types of reviewers
Just as we have different criteria for judging software, the people doing the judging are also different. We all have our own skills and predilections. Some may think that adhering to local standards is highly important, just as others might be more concerned with memory usage, or code coverage of your tests, and so on. You want all of these types of reviews, because as a whole they will help you write better code.
A peer review is collaboration, not a game of tag
I'm not sure you have the right to tell them how to do their job. Unless you know otherwise with certainty, assume that this person is trying to contribute the way he or she sees fit. However, if you see room for improvement, or suspect maybe they don't understand what is expected in a peer review, talk to them.
The point of a peer review is to involve your peers. Involvement isn't throwing code over a wall and waiting for a response to be thrown back. Involvement is working together to make better code. Engage in a conversation with them.
Advice
Towards the end of your question you wrote:
Again, the answer is communication. Perhaps you can ask them "hey, I appreciate you catching these mistakes. It would help me tremendously if you could also focus on some deeper issues such as whether I'm structuring my code properly. I know it takes time, but it would really help."
On a more pragmatic note, I personally divide code review comments into two camps and phrase them accordingly: things that must be fixed, and things that are merely cosmetic. I would never prevent solid, working code from being checked in because there were too many blank lines at the end of a file. I will point it out, but with something like "our guidelines say to have a single blank line at the end, and you have 20. It's not a show-stopper, but if you get a chance you might want to fix it".
Here's something else to consider: it may be a pet peeve of yours that they do such a shallow review of your code. It may very well be that a pet peeve of theirs is that you (or some other teammate who gets a similar review) are sloppy with respect to your own organization's coding standards, and this is how they have chosen to communicate that with you.
What to do after the review
And lastly, a bit of advice after the review: When committing code after a review, you might want to consider taking care of all the cosmetic things in one commit, and the functional changes in another. Mixing the two can make it hard to differentiate significant changes from insignificant ones. Make all of the cosmetic changes and then commit with a message like "cosmetic; no functional changes".
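The two-commit split can be sketched as follows. This assumes Git purely for illustration (the answer doesn't name a VCS), and the file, bug, and messages are invented:

```shell
# Illustrative post-review workflow: cosmetic fixes in one commit,
# the functional fix in another, so reviewers can skip the noise.
set -e
repo=/tmp/split-commit-demo
rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name "Demo Dev"

# Original submission: has trailing blank lines (cosmetic issue)
# and an off-by-one (functional issue) flagged in review.
printf 'result = n + 2\n\n\n\n' > calc.py
git add . && git commit -qm 'initial submission'

# Commit 1: cosmetic only; the message says so, and the diff is
# trivially skippable by anyone reading history later.
printf 'result = n + 2\n' > calc.py
git commit -qam 'cosmetic; no functional changes'

# Commit 2: the substantive change the reviewer actually cares about.
printf 'result = n + 1\n' > calc.py
git commit -qam 'fix off-by-one found in review'
```

When the two kinds of change touch the same lines, an interactive staging tool (e.g. `git add -p`) lets you carve a single working-tree edit into these separate commits.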