In general, you must review everything. If a fresh application has 2,000 LOC, all 2,000 LOC must be reviewed.
That's why there is no best practice for choosing what to review.
If you're approaching an existing large codebase that has never been reviewed, it's the same situation as when you must rewrite an existing large codebase and have to choose where to start. It strongly depends on:
the codebase itself (a single monolithic application would be more difficult to rewrite/review than a set of separate components, etc.),
your context (can you stop everything you're working on and spend three months (three years?) doing nothing but the rewrite/review, or must you do it in small increments, only when you have free time?),
the type of review you do (do you have a checklist of things to review? Depending on the items on the checklist, you may want to review some parts first).
If I were you, I would:
follow the 80/20 principle, mentioned in the first comment of the second question you linked to.
take into account that 100%, being an ideal, may not be worth pursuing. It's like 100% code coverage for unit tests: mostly impossible, or extremely expensive, to achieve.
start with the parts of the code you use the most and which are the most important. If the codebase has a library which authenticates and registers new users on your corporate website, review it first, because you certainly want to find security holes before hackers do.
use existing metrics to determine what is more important to review. If a part of the codebase has no unit tests at all, while another, equally important part has 85% code coverage, start by reviewing the first part. If a part of the codebase was written by a developer who was known to be inexperienced and to introduce more bugs than any of his colleagues, review his code first. (A sketch of this kind of prioritization follows the list.)
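To illustrate that last point, here is a minimal sketch of metric-driven prioritization. The module names, importance weights, and coverage numbers are all hypothetical; the idea is simply to review the important, poorly tested code first:

```python
# Hypothetical data: (module, importance weight 1-10, unit test coverage %).
modules = [
    ("auth",      10,  0),   # critical and completely untested
    ("billing",    9, 85),   # equally important, but well covered
    ("reporting",  4, 40),
    ("admin_ui",   3, 10),
]

def review_priority(module):
    """Score = importance times the untested fraction of the module."""
    name, importance, coverage = module
    return importance * (100 - coverage) / 100

for module in sorted(modules, key=review_priority, reverse=True):
    print(f"{module[0]:10s} priority = {review_priority(module):.2f}")
```

Under this (deliberately crude) score, `auth` comes out far ahead of `billing` even though both are important, which matches the reasoning above.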
Types of reviews
There is no one true way to do peer reviews. There are many ways to judge whether code is of sufficiently high quality. Clearly there is the question of whether it's buggy, or whether it has solutions that don't scale or that are brittle. Conformance to local standards and guidelines, while perhaps not as critical as some of the other criteria, is also part of what contributes to high-quality code.
Types of reviewers
Just as we have different criteria for judging software, the people doing the judging are also different. We all have our own skills and predilections. Some may think that adhering to local standards is highly important, just as others might be more concerned with memory usage, or code coverage of your tests, and so on. You want all of these types of reviews, because as a whole they will help you write better code.
A peer review is collaboration, not a game of tag
I'm not sure you have the right to tell them how to do their job. Unless you know otherwise with certainty, assume that this person is trying to contribute the way he or she sees fit. However, if you see room for improvement, or suspect maybe they don't understand what is expected in a peer review, talk to them.
The point of a peer review is to involve your peers. Involvement isn't throwing code over a wall and waiting for a response to be thrown back. Involvement is working together to make better code. Engage in a conversation with them.
Advice
Towards the end of your question you wrote:
how would I go about encouraging colleagues to actually look for faults in the code in balance with glaring aesthetic errors?
Again, the answer is communication. Perhaps you can ask them "hey, I appreciate you catching these mistakes. It would help me tremendously if you could also focus on some deeper issues such as whether I'm structuring my code properly. I know it takes time, but it would really help."
On a more pragmatic note, I personally divide code review comments into two camps and phrase them appropriately: things that must be fixed, and things that are more cosmetic. I would never prevent solid, working code from being checked in just because there were too many blank lines at the end of a file. I will point it out, however, with something like "our guidelines say to have a single blank line at the end, and you have 20. It's not a show-stopper, but if you get a chance you might want to fix it".
Here's something else to consider: it may be a pet peeve of yours that they do such a shallow review of your code. It may very well be that a pet peeve of theirs is that you (or some other teammate who gets a similar review) are sloppy with respect to your own organization's coding standards, and this is how they have chosen to communicate that with you.
What to do after the review
And lastly, a bit of advice for after the review: when committing code, you might want to consider taking care of all the cosmetic things in one commit and the functional changes in another. Mixing the two can make it hard to differentiate significant changes from insignificant ones. Make all of the cosmetic changes and then commit with a message like "cosmetic; no functional changes".
Best Answer
You raise several issues in your question, and each deserves some thought, particularly if the CMMi inspectors might be coming to your cubicle or office to ask you about them. Been there, done that, it can be good if you are prepared, but... Well, you want to be prepared. Issues from your description include:
Eligibility
Do you know the user story? Has it been flowed down into a use case diagram and individual use cases with alternate flows? If not, the senior developers might not have completed enough of their documentation to provide adequate reference material for the inspection to begin.
If the reviewed work product is code, do you know the language, and the coding standard?
If you know these, you have some great tools to give feedback.
As testers,
If you are not involved in reviews of the user stories and use cases, how do you make your test plans for black box and other functional testing?
Similarly, if you are involved in glass box testing for statement and/or decision coverage, and you don't read the code, how do you write your test cases? (The sketch below illustrates the point.)
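To make the decision-coverage point concrete, here is a minimal sketch with a hypothetical function and tests (none of this comes from the question itself). Enumerating the decision outcomes the tests must hit is only possible by reading the code:

```python
def apply_discount(total, is_member):
    # One decision with two outcomes: take the discount branch, or fall through.
    if is_member and total >= 100:
        return total * 0.9
    return total

# Decision coverage needs at least one test per outcome; a user story alone
# would not reveal the >= 100 threshold or the short-circuit condition.
def test_member_over_threshold_gets_discount():
    assert apply_discount(200, True) == 180.0

def test_non_member_pays_full_price():
    assert apply_discount(200, False) == 200

def test_member_under_threshold_pays_full_price():
    assert apply_discount(50, True) == 50
```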
Ultimately, my bottom line on you taking part in code reviews is that they are a great way for experienced developers to share their judgement and domain knowledge with smart but less experienced members of the team, and for less experienced but perhaps more recently trained developers to share what they learned in school or from recent experience at other companies. A win-win, provided both developers respect each other.
Certifiability
CMMi and scrum/agile may sometimes be at odds with one another, particularly in the part that recommends a high proportion (55-60%) of time be spent in code reviews. This sounds like a very process-heavy approach (i.e. not very Agile), and many teams would look for automation, tools, and ways to do it faster.
Scrum-based teams I have been on used tool-facilitated code reviews in which the author identified the commits and the tool showed us the code. Our tool chain included svn and the Atlassian tools Jira, Fisheye, and Crucible. Crucible tracked the time spent in review and permitted annotations in the code that were notes or that could include defect classifications. The author could respond to the notes and make changes that would also show up in the inspection. To close the inspection, the moderator needed to confirm that review issues were resolved. Sometimes there was a face-to-face meeting with the inspectors, but not always. The tool did a great job of streamlining the mechanics of the inspection.
Pair programming covers some of the same ground as peer reviews. Two pairs of eyes can do a lot to catch issues as they occur, and can increase velocity or transfer knowledge between developers. While there may be some groupthink, where one developer talks the other into ideas that are wrong, mostly the interaction develops alternatives superior to what either would discover alone. Also, if you are on your own, you can pretty easily make an interface or other design choice that will prove unfortunate later.
Objectivity
Testers are not the only ones who need to stay objective about the product for verification and validation. Generally, Agile folks who use Scrum may convey a lot of information verbally, because they value "working software over comprehensive documentation". Will you get your test criteria from customers? From story cards?
Bottom line: I think you will probably get at least some of your test criteria from developers. If it makes you feel better, gather all you can from non-code assets (including the draft user's manual, which had better be in progress in parallel with development), then get the rest from the code.
Example of a CMMi Approvable Peer Review Process
Older methodologies like Fagan inspections did indeed demand the large time commitments mentioned earlier. For the duration of the inspection at least, Fagan teams organized into four roles: moderator, author, reader, and tester. The moderator runs the inspection, enforces the methodology and the coding or documentation standards, records defects for follow-up with the author, and provides metrics to the QA team for quantitative analysis against future defect detection. The reader paraphrases the work product aloud, the author answers questions about it, and the tester examines it from a testing perspective; sometimes the inspector-tester would be a system engineer.
An advantage of this approach is that it establishes significant communication between key participants as work products are handed off across the software development life-cycle. I think it also did a good job of preventing the tester (and the other inspectors) from being drawn into groupthink.
Not enough time for code reviews / inspections?
To the extent you are involved in defining the code review process, you might take a look at the article below. I do not endorse the product the author sells, but I like his description of the problem and some of the alternatives.
http://www.methodsandtools.com/archive/archive.php?id=66
His conclusion seems to be that Fagan inspections are the gold standard, but because they are so time consuming, his company ended up only inspecting about 1% of their code.
I have an ad-hoc technique that I call "quick-hit code reviews". It requires the commitment of one person: the software technical lead simply reads every code change committed to source control when he comes in first thing in the morning. This works well if the team is on a different continent, or if the lead is an early or late riser while the rest of the team is not. The review rate is 400-500 lines of code per hour (vs. 100 lines per hour for Fagan), so it is less than thorough and not much is recorded; a back-of-envelope comparison of the two rates follows.
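For a rough sense of the trade-off, here is a back-of-envelope sketch using the rates above; the 2,000-LOC change-set size is an assumption for illustration only:

```python
# Compare review effort at quick-hit vs. Fagan rates (per reviewer).
loc = 2000             # assumed size of the change set under review
quick_hit_rate = 450   # midpoint of the 400-500 LOC/hour quick-hit rate
fagan_rate = 100       # LOC/hour cited above for Fagan inspections

print(f"Quick-hit review: {loc / quick_hit_rate:.1f} hours")  # ~4.4 hours
print(f"Fagan inspection: {loc / fagan_rate:.1f} hours")      # 20.0 hours
```

And a Fagan inspection staffs four roles, so the person-hour gap is wider still; that difference in cost is exactly what the quick-hit approach trades away in thoroughness.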
When faults are found, the action is to send an email to the author, either about a fix they should make or, if they are in a different time zone or away for the day, about a fix the lead makes between the review and a private build and smoke test. I don't believe this is a CMMi-certifiable methodology. It depends highly on the ability of the lead and their knowledge of the code, and it is little help in catching subtle problems (unless they resemble your own past mistakes). I used it when I had a new team on some code that was hard to maintain. I believe the time spent was paid back in reduced testing and reduced latency between work and rework.