In my company, a code review is an email discussion, sent by the person who wrote the code, describing which feature was implemented or which bug was fixed. The reviewer, who receives the mail, reviews the code and discusses its quality and how, in their opinion, it should be edited. What does a standard code review contain?
Related Solutions
Grading individuals in a review is counter to most successful systems I've worked with, maybe all. But the goal I've been trying to reach for more than 20 years is fewer bugs and increased productivity per-engineer-hour. If grading individuals is a goal, I suppose reviews could be used. I've never seen a situation where it was required, as a worker or as a leader.
Some objective studies (Fagan, etc.) and a lot of popular wisdom suggest that peer relationships facilitate code reviews aimed at reducing bugs and increasing productivity. Working managers may participate as workers, but not as managers. Points of discussion are noted; changes to satisfy reviewers are generally a good thing but not required. Hence the peer relationship.
Any automated tools that can be accepted without further analysis or judgment are good - lint in C, C++, Java. Regular compilation. Compilers are REALLY good at finding compiler bugs. Documenting deviations in automated checks sounds like a subtle indictment of the automated checks. Code directives (like Java has) that allow deviations are pretty dangerous, IMHO. Great for debugging, to let you get to the heart of the matter quickly. Not so good to find in a poorly documented, 50,000 non-comment-line block of code you've become responsible for.
Some rules are stupid but easy to enforce; defaults for every switch statement even when they're unreachable, for example. Then it's just a check box, and you don't have to spend time and money testing with values which don't match anything. If you have rules, you'll have foolishness, they are inextricably linked. Any rule's benefit should be worth the foolishness it costs, and that relationship should be checked at regular intervals.
On the other hand, "it runs" is no virtue before review, nor a defense in review. If development followed the waterfall model, you'd like to do the review when coding is 85% complete, before complicated errors have been found and worked out, because review is a cheaper way to find them. Since real life isn't the waterfall model, when to review is somewhat of an art and amounts to a social norm. People who will actually read your code and look for problems in it are solid gold. Management that supports this in an ongoing way is a pearl beyond price. Reviews should be like check-ins: early and often.
I've found these things beneficial:
1) No style wars. Where open curly braces go should only be subject to a consistency check in a given file. All the same. That's fine then. Ditto indentation depths and tab widths. Most organizations discover they need a common standard for tabs, which render as large spaces of configurable width.
2)
```
Ragged
    looking
  text that doesn't
      line up is hard to read
for content.
```
BTW, K&R indented five (FIVE) spaces, so appeals to authority are worthless. Just be consistent.
3) A line-numbered, unchanging, publicly available copy of the file to be reviewed should be pointed to for 72 hours or more before the review.
4) No design-on-the-fly. If there's a problem, or an issue, note its location, and keep moving.
5) Testing that goes through all paths in the development environment is a very, very, very, good idea. Testing that requires massive external data, hardware resources, use of the customer's site, etc, etc. is testing that costs a fortune and won't be thorough.
6) A non-ASCII file format is acceptable if creation, display, edit, etc., tools exist or are created early in development. This is a personal bias of mine, but in a world where the dominant OS can't get out of its own way with less than 1 gigabyte of RAM, I can't understand why files smaller than, say, 10 megabytes should be anything other than ASCII or some other commercially supported format. There are standards for graphics, sound, movies, and executables, and tools that go with them. There is no excuse for a file containing a binary representation of some number of objects.
For maintenance, refactoring or development of released code, one group of co-workers I had used review by one other person, sitting at a display and looking at a diff of old and new, as a gateway to branch check-in. I liked it; it was cheap, fast, and relatively easy to do. Walk-throughs for people who haven't read the code in advance can be educational for all, but seldom improve the developer's code.
If you're geographically distributed, looking at diffs on a screen while talking with someone else looking at the same would be relatively easy. That covers two people looking at changes. For a larger group who have read the code in question, multiple sites aren't a lot harder than all in one room. Multiple rooms linked by shared computer screens and squawk boxes work very well, IMHO. The more sites, the more meeting management is needed. A manager as facilitator can earn their keep here. Remember to keep polling the sites you're not at.
At one point, the same organization had automated unit testing which was used as regression testing. That was really nice. Of course, we then changed platforms and the automated tests got left behind. Review is better; as the Agile Manifesto notes, individuals and interactions matter more than processes and tools. But once you've got review, automated unit tests/regression tests are the next most important help in creating good software.
If you can base the tests on requirements, well, like the lady says in "When Harry Met Sally", I'll have what she's having!
All reviews need to have a parking lot to capture requirements and design issues at the level above coding. Once something is recognized as belonging in the parking lot, discussion should stop in the review.
Sometimes I think code review should be like schematic reviews in hardware design- completely public, thorough, tutorial, the end of a process, a gateway after which it gets built and tested. But schematic reviews are heavyweight because changing physical objects is expensive. Architecture, interface and documentation reviews for software should probably be heavyweight. Code is more fluid. Code review should be lighter weight.
In a lot of ways, I think technology is as much about culture and expectation as it is about a specific tool. Think of all the "Swiss Family Robinson"/Flintstones/McGyver improvisations that delight the heart and challenge the mind. We want our stuff to work. There isn't a single path to that, any more than there was "intelligence" which could somehow be abstracted and automated by 1960s AI programs.
We used to do manual code reviews (i.e. no special tools) and this was the best method we found. Our development department doesn't have Team Foundation Server, so I can't comment on that.
- All work is tied to a bug or an agile story that has a unique ID. When the work is checked in (or shelved, which is submission without an actual check-in), the description will always include this identifier.
- In a stand-up or over e-mail the reviewer would be notified that bug ### or story ### is ready for review.
- They do a source-control diff between the new versions of the files and the previous versions and copy/paste the output of the diff into a Word document. We have a code review Word document template which is preset with two levels of headings: level 1 - module, level 2 - file name. It also defines a formatting style for actual code (8pt Consolas). This step obviously was the biggest pain, but with the Word template and a Ctrl-A, Ctrl-C, Ctrl-V sequence, it's not that many clicks.
- Now that the document is in Word, the reviewer can highlight the code in question and add comments using Word's review system.
- All documents are stored in a document repository and labeled with corresponding bug or story number.
- Once the document is done, the link to the repository is shared with the original developer. Then we'd either have a code review meeting, if the changes were significant enough to need discussion, or just leave it up to the original developer to go through the comments on their own time.
After trying numerous manual methods, this was the one we found to have the least amount of overhead while allowing us to actually review every change.
However, our engineering team just rolled out Review Board, and although I haven't used it that much yet, so far I'm loving what I've seen. It has all the flexibility of our Word docs, but no more copy/pasting and manually fixing formatting. As an added bonus, it keeps a permanent archive of all comments, so you can go back years in time if you ever need to. It also allows you to do diffs of diffs, which is great when you want to review a code review. This part we found very difficult with manual procedures, because you can't see what was changed in response to the first code review; instead you end up redoing the entire thing.
Although you did say you don't want to use tools, I'd strongly urge you to consider Review Board. It is open source and completely free, so you can roll it out for yourself and the 5 people you may be working with. The rest of your company doesn't have to use this tool if they don't want to, and you don't need to worry about getting any purchase approvals.
Update to comments from question:
On my team, we have people in NY, CT, TX, Poland and India. What makes things even more interesting is that an extremely high percentage of the team doesn't know the product or technology all that well, so very few of us do most of the reviewing. So yeah, the senior devs are definitely busy. In the process I outlined, the primary reviewer would do an initial walk through the code independently, on his own schedule. Afterwards, we schedule a meeting and the primary reviewer walks the coder through his comments. The meeting can have other reviewers, but they are considered secondary and are not obligated (though not discouraged) to review every file or make comments.
I agree with other comments that having the final meeting in real time, even if you have to use web conferencing, is much better for knowledge transfer and for helping your new guys understand the code, so that they'll start producing things that make your senior devs even busier. But again, that depends on the volume and type of comments; sometimes (very infrequently) the comments are so minor that the meeting is skipped.
I can also relate to you when you say it's hard to roll out new tools. I work for a very large corporation, and typically there are so many people involved that, even setting aside the fact that no purchases are ever approved, there are so many interests/agendas that nothing is ever agreed on. What's nice about Review Board is that you can skip all that and just start using it with your small team, and if you have to (if it really comes to that), you can host the web services on your own dev machine.
Best Answer
In my experience, most formal code reviews devolve into style checking because it's easy. Even when you supply a checklist of things to look at, it's pretty easy for eyes to start glazing over.
I've found that unit test review provides more benefit. Most developers I've worked with don't really know how to unit test properly, and once they get that "Aha!" moment the rest of their code starts improving as well. Here's a hint: it's not a unit test if you require the user to inspect something, and it's not a unit test if you are just starting something up to run in a debugger.