Postfix Increment – Why It Exists in C and C++


Disclaimer: I know perfectly well the semantics of prefix and postfix increment. So please don't explain to me how they work.

Reading questions on Stack Overflow, I cannot help but notice that programmers get confused by the postfix increment operator over and over again. This raises the following question: is there any use case where postfix increment provides a real benefit in terms of code quality?

Let me clarify my question with an example. Here is a super-terse implementation of strcpy:

while (*dst++ = *src++);
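For context, that one-liner is usually the entire body of the function. Here is a minimal sketch of how it typically appears (the name my_strcpy and the saved return pointer are my additions, not part of the original idiom):

char *my_strcpy(char *dst, const char *src)
{
    char *ret = dst;            /* strcpy returns the destination pointer */
    while (*dst++ = *src++)     /* copy each character, including the '\0' */
        ;                       /* empty body: all the work is in the condition */
    return ret;
}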

But that's not exactly the most self-documenting code in my book (and it draws two annoying warnings on sane compilers: one about the assignment being used as a condition, one about the empty loop body). So what's wrong with the following alternative?

while (*dst = *src)
{
    ++src;
    ++dst;
}

We can then get rid of the confusing assignment in the condition and get completely warning-free code:

while (*src != '\0')
{
    *dst = *src;
    ++src;
    ++dst;
}
*dst = '\0';

(Yes, I know: src and dst end up with different final values in these alternative versions, but since strcpy returns immediately after the loop, it does not matter in this case.)
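To make that concrete, here is a small standalone test of my own (the variable names and the test harness are mine, not from the question) that prints where src and dst end up after the terse version and after the last version above:

#include <stdio.h>

int main(void)
{
    const char *text = "hi";   /* two characters plus the terminating '\0' */
    char buf[8];

    /* Terse version: both pointers are incremented even on the
       iteration that copies the terminating '\0'. */
    const char *src = text;
    char *dst = buf;
    while (*dst++ = *src++)
        ;
    printf("terse: src offset %td, dst offset %td\n",
           src - text, dst - buf);   /* prints 3 and 3 */

    /* Last version from the question: the loop stops at the
       terminator, which is then written separately. */
    src = text;
    dst = buf;
    while (*src != '\0')
    {
        *dst = *src;
        ++src;
        ++dst;
    }
    *dst = '\0';
    printf("alt:   src offset %td, dst offset %td\n",
           src - text, dst - buf);   /* prints 2 and 2 */

    return 0;
}

The copied contents of buf are identical in both cases; only the final pointer values differ.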

It seems the purpose of postfix increment is to make code as terse as possible. I simply fail to see how this is something we should strive for. If this was originally about performance, is it still relevant today?

Best Answer

While it did once have some performance implications, I think the real reason it exists is to let you express your intent cleanly. The real question is whether something like while (*d++ = *s++); expresses intent clearly or not. IMO, it does, and I find the alternatives you offer less clear -- but that may (easily) be a result of having spent decades becoming accustomed to how things are done. Having learned C from K&R (because there were almost no other books on C at the time) probably helps too.

To an extent, it's true that terseness was valued to a much greater degree in older code. Personally, I think this was largely a good thing -- understanding a few lines of code is usually fairly trivial; what's difficult is understanding large chunks of code. Tests and studies have shown repeatedly that fitting all the code on screen at once is a major factor in understanding it. As screens expand, this seems to remain true, so keeping code (reasonably) terse remains valuable.

Of course it's possible to go overboard, but I don't think this is. I think it's going overboard when understanding a single line of code becomes extremely difficult or time consuming -- specifically, when understanding fewer lines of code takes more effort than understanding more lines would. That's frequent in Lisp and APL, but doesn't seem (at least to me) to be the case here.

I'm less concerned about compiler warnings -- it's my experience that many compilers emit utterly ridiculous warnings on a fairly regular basis. While I certainly think people should understand their code (and any warnings it might produce), decent code that happens to trigger a warning in some compiler is not necessarily wrong. Admittedly, beginners don't always know what they can safely ignore, but we don't stay beginners forever, and we don't need to code as if we were, either.
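For what it's worth, if you want to keep the terse form while quieting the usual diagnostics, the common idioms are an extra pair of parentheses around the assignment or an explicit comparison against '\0'. A sketch, using the dst/src names from the question (exact warnings vary by compiler; -Wparentheses is the GCC/Clang name for the assignment-as-condition complaint):

/* Extra parentheses tell the compiler the assignment is intentional. */
while ((*dst++ = *src++))
    ;

/* Or spell out the comparison, which no longer looks like a typo for ==. */
while ((*dst++ = *src++) != '\0')
    ;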
