Grading individuals in a review runs counter to most successful systems I've worked with, maybe all of them. The goal I've been chasing for more than 20 years is fewer bugs and increased productivity per engineer-hour. If grading individuals is a goal, I suppose reviews could be used for it, but I've never seen a situation where that was required, either as a worker or as a leader.
Some objective studies (Fagan's inspection work, for example) and a lot of popular wisdom suggest that peer relationships facilitate code reviews aimed at reducing bugs and increasing productivity. Working managers may participate as workers, but not as managers. Points of discussion are noted; changes to satisfy reviewers are generally a good thing, but not required. Hence the peer relationship.
Any automated tools that can be accepted without further analysis or judgment are good: lint in C, C++, and Java; regular compilation. Compilers are REALLY good at finding the bugs compilers can find. Documenting deviations from automated checks sounds like a subtle indictment of the automated checks. Code directives (like Java's) that allow deviations are pretty dangerous, IMHO. Great for debugging, where they let you get to the heart of the matter quickly; not so good to find in a poorly documented, 50,000 non-comment-line block of code you've become responsible for.
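As a sketch of that danger, here is a hypothetical Java example (the class and method names are my own illustration, not from any particular codebase): a suppression directive is invisible unless you're looking right at it, and it carries no explanation or expiration date.

```java
import java.util.List;

public class Directives {
    // The directive below silences the compiler's unchecked-cast warning
    // for this one method. Handy while debugging; in an inherited,
    // poorly documented codebase, nothing explains *why* it is safe.
    @SuppressWarnings("unchecked")
    static List<String> fromRaw(Object o) {
        return (List<String>) o;   // unchecked cast, warning suppressed
    }

    public static void main(String[] args) {
        List<String> xs = List.of("ok");
        System.out.println(fromRaw(xs).get(0)); // prints "ok"
    }
}
```

The cast happens to be safe here; the point is that the directive makes the compiler stop saying so, forever, for whoever inherits the file.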
Some rules are stupid but easy to enforce: defaults for every switch statement even when they're unreachable, for example. Then it's just a check box, and you don't have to spend time and money testing with values that don't match anything. If you have rules, you'll have foolishness; they are inextricably linked. Any rule's benefit should be worth the foolishness it costs, and that relationship should be checked at regular intervals.
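A minimal sketch of such a rule in Java (the enum and names are illustrative): every value of the enum is already handled, so the default is unreachable today, but it costs one clause and nothing to test.

```java
enum Light { RED, YELLOW, GREEN }

public class SwitchDefault {
    static String action(Light l) {
        switch (l) {
            case RED:    return "stop";
            case YELLOW: return "slow";
            case GREEN:  return "go";
            default:
                // Unreachable with today's enum; catches the day someone
                // adds a fourth value and forgets to update this switch.
                throw new IllegalStateException("unhandled: " + l);
        }
    }

    public static void main(String[] args) {
        System.out.println(action(Light.YELLOW)); // prints "slow"
    }
}
```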
On the other hand, "It runs" is no virtue before review, nor a defense in review. If development followed the waterfall model, you'd like to do the review when coding is 85% complete, before complicated errors have been found and worked out, because review is a cheaper way to find them. Since real life isn't the waterfall model, when to review is somewhat of an art and amounts to a social norm. People who will actually read your code and look for problems in it are solid gold. Management that supports this on an ongoing basis is a pearl beyond price. Reviews should be like check-ins: early and often.
I've found these things beneficial:
1) No style wars. Where open curly braces go should only be subject to a consistency check within a given file. All the same? Then that's fine. Ditto indentation depths and tab widths. Most organizations discover they need a common standard for tabs, which display as wide runs of spaces.
2) `Ragged
looking
text
that doesn't
line up is hard to read
for content.`
BTW, K&R indented five (FIVE) spaces, so appeals to authority are worthless. Just be consistent.
3) A line-numbered, unchanging, publicly available copy of the file to be reviewed should be published at least 72 hours before the review.
4) No design-on-the-fly. If there's a problem, or an issue, note its location, and keep moving.
5) Testing that goes through all paths in the development environment is a very, very, very good idea. Testing that requires massive external data, hardware resources, use of the customer's site, and so on is testing that costs a fortune and won't be thorough.
6) A non-ASCII file format is acceptable if creation, display, and edit tools exist or are created early in development. This is a personal bias of mine, but in a world where the dominant OS can't get out of its own way with less than a gigabyte of RAM, I can't understand why files smaller than, say, 10 megabytes should be anything other than ASCII or some other commercially supported format. There are standards for graphics, sound, movies, and executables, and tools that go with them. There is no excuse for a file containing a raw binary representation of some number of objects.
For maintenance, refactoring, or development of released code, one group of co-workers I had used review by one other person, sitting at a display and looking at a diff of old and new, as a gateway to branch check-in. I liked it: it was cheap, fast, and relatively easy to do. Walk-throughs for people who haven't read the code in advance can be educational for all, but they seldom improve the developer's code.
If you're geographically distributed, looking at diffs on a screen while talking with someone else looking at the same thing is relatively easy. That covers two people looking at changes. For a larger group who have read the code in question, multiple sites aren't much harder than everyone in one room. Multiple rooms linked by shared computer screens and squawk boxes work very well, IMHO. The more sites, the more meeting management is needed. A manager acting as facilitator can earn their keep here. Remember to keep polling the sites you're not at.
At one point, the same organization had automated unit testing that doubled as regression testing. That was really nice. Of course, we then changed platforms and the automated tests got left behind. Review is better; as the Agile Manifesto puts it, individuals and interactions matter more than processes and tools. But once you've got review, automated unit tests/regression tests are the next most important help in creating good software.
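A sketch of what that looks like in spirit (hypothetical Java, plain checks rather than any particular framework): each fixed bug leaves a test behind, so the unit suite accumulates into a regression net.

```java
public class PortParserTest {
    // Hypothetical utility under test.
    static int parsePort(String s) {
        int p = Integer.parseInt(s.trim());
        if (p < 1 || p > 65535)
            throw new IllegalArgumentException("port out of range: " + p);
        return p;
    }

    public static void main(String[] args) {
        // Unit test of the happy path...
        if (parsePort("8080") != 8080) throw new AssertionError("8080");
        // ...and a regression test: surrounding whitespace once broke this,
        // so the case that reproduced the bug stays in the suite forever.
        if (parsePort(" 443 ") != 443) throw new AssertionError("443");
        System.out.println("all regression checks passed");
    }
}
```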
If you can base the tests on requirements, well, like the lady says in "When Harry Met Sally", I'll have what she's having!
All reviews need to have a parking lot to capture requirements and design issues at the level above coding. Once something is recognized as belonging in the parking lot, discussion should stop in the review.
Sometimes I think code review should be like schematic reviews in hardware design: completely public, thorough, tutorial, the end of a process, a gateway after which the thing gets built and tested. But schematic reviews are heavyweight because changing physical objects is expensive. Architecture, interface, and documentation reviews for software should probably be heavyweight too. Code is more fluid; code review should be lighter weight.
In a lot of ways, I think technology is as much about culture and expectation as it is about a specific tool. Think of all the "Swiss Family Robinson"/Flintstones/MacGyver improvisations that delight the heart and challenge the mind. We want our stuff to work. There isn't a single path to that, any more than there was "intelligence" which could somehow be abstracted and automated by 1960s AI programs.
Well, I tend to make comments in several general areas, and each type might be handled differently.
Required changes. These are the kinds of comments where I point out that the code doesn't meet the functional requirements or doesn't work, and must be fixed before being pushed to production. I tend to be very straightforward in these comments: "The requirements say..., this does not do that," or "This will fail if the value sent in is null" (especially when you know that case will occur based on the data that will get sent in).
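A hypothetical Java illustration of that second kind of required-change comment (the names are mine): the code as submitted works on the happy path but fails on a null we know the upstream data will send.

```java
public class NullCheck {
    // As submitted: throws NullPointerException when name is null.
    static String normalizeBroken(String name) {
        return name.trim().toLowerCase();
    }

    // After the required change: the null case is handled explicitly,
    // with the chosen policy (empty string) visible in the code.
    static String normalize(String name) {
        if (name == null) return "";
        return name.trim().toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(normalize("  Bob "));  // prints "bob"
        System.out.println(normalize(null));      // empty string, no crash
    }
}
```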
Then there are the "this works, but here is a better way to accomplish it" comments. You have to be more gentle with these and do more of a sales pitch. I might say that I would do this instead because it is likely to perform better (I generally review SQL code, where performance is very important). I might add some details about why it is a better choice, just as I would in answering a question on Stack Overflow. I might point out that the change isn't required for this particular code, but is worth considering in future coding. Basically, with these types of comments I am educating people with less experience about what might work better.
Then there are the "this works, but we do things this way" comments. These will probably also be required changes. They include comments about company standards or the architecture we expect people to use. I would reference the standard or architecture document and ask them to conform to it. The comment would be straightforward but neutral: it is missing thus-and-so, or the variable names don't conform to our naming standard, or similar things. For instance, our architecture for SSIS packages requires the package to use our metadata database to store particular information about the package, and requires particular logging. The package would work without these things, but they are required for company reasons (we need to report on the success rate of imports, for instance, or analyze the types of errors we receive).
The one thing you don't want to do in code review comments is personally attack someone. It also helps if you find something they did well and point that out. Sometimes I learn something new from a code review, and when I do, I tell the person so.