While it did once have some performance implications, I think the real reason is expressing your intent cleanly. The real question is whether something like while (*d++ = *s++);
expresses intent clearly or not. IMO, it does, and I find the alternatives you offer less clear -- but that may (easily) be a result of having spent decades becoming accustomed to how things are done. Having learned C from K&R (because there were almost no other books on C at the time) probably helps too.
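For concreteness, here is that idiom in context: a minimal string-copy sketch (the function name copy and the demo in main are mine, added purely for illustration). The extra parentheses around the assignment are there to quiet the warning some compilers give for = inside a condition.

```c
#include <stdio.h>

/* Minimal sketch: copy the string at s into the buffer at d, including
   the terminating '\0'. The destination is assumed to be large enough. */
void copy(char *d, const char *s)
{
    while ((*d++ = *s++))   /* the assignment's value is the copied char; loop stops at '\0' */
        ;
}

int main(void)
{
    char src[] = "hello";
    char dst[sizeof src];
    copy(dst, src);
    printf("%s\n", dst);    /* prints "hello" */
    return 0;
}
```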
To an extent, it's true that terseness was valued to a much greater degree in older code. Personally, I think this was largely a good thing -- understanding a few lines of code is usually fairly trivial; what's difficult is understanding large chunks of code. Tests and studies have shown repeatedly that fitting all the code on the screen at once is a major factor in understanding it. As screens expand, this seems to remain true, so keeping code (reasonably) terse remains valuable.
Of course it's possible to go overboard, but I don't think this is. Specifically, I think it's going overboard when understanding a single line of code becomes extremely difficult or time-consuming -- that is, when understanding fewer lines of code takes more effort than understanding more lines would. That's frequent in Lisp and APL, but doesn't seem (at least to me) to be the case here.
I'm less concerned about compiler warnings -- it's my experience that many compilers emit utterly ridiculous warnings on a fairly regular basis. While I certainly think people should understand their code (and any warnings it might produce), decent code that happens to trigger a warning in some compiler is not necessarily wrong. Admittedly, beginners don't always know what they can safely ignore, but we don't stay beginners forever, and don't need to code like we are either.
You are right that the compiler as such is gone when your program actually runs. And if it runs on a different machine, the compiler isn't even available anymore.
I guess this is to make a clear distinction between memory explicitly allocated by your own code (with new, malloc, or similar) and memory the compiler takes care of: the compiler inserts code into your program that performs that allocation for you, even though nothing in your source asks for it.
So books often use "the compiler does this or that" to mean that the compiler added some code that is not explicitly written in your source files. True enough, that isn't exactly what's going on at runtime, but from this point of view a lot of things said in tutorials would be "wrong" unless they came with rather elaborate explanations.
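A small sketch of that distinction (the variable names are just illustrative): the array below gets its storage from code the compiler generates for you, while the malloc'd block exists only because your own code requests it.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Automatic storage: the compiler emits the code that reserves
       (and later releases) this memory; nothing in the source says
       "allocate it", yet it happens. */
    int automatic[10];

    /* Explicit allocation: this memory exists only because our own
       code asks for it at runtime. */
    int *dynamic = malloc(10 * sizeof *dynamic);
    if (dynamic == NULL)
        return 1;

    automatic[0] = 1;
    dynamic[0] = 2;
    printf("%d %d\n", automatic[0], dynamic[0]);

    free(dynamic);  /* we allocated it, so we release it */
    return 0;
}
```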
Best Answer
sizeof() gives you the size of the data type, not the size of a particular instance of that type in memory. For example, if you had a string data object that allocated a variable-size character array at runtime, sizeof() could not be used to determine the size of that character array; it would only give you the size of the pointer. The size of a data type is always known at compile time.
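A minimal sketch of that point, using a made-up my_string type as the "string data object": sizeof reports the compile-time size of the struct (roughly a size_t plus a pointer), no matter how large the runtime-allocated array is.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical "string data object": a fixed-size header that points to
   a character array whose length is only decided at runtime. */
struct my_string {
    size_t len;
    char  *data;
};

int main(void)
{
    struct my_string s;
    s.len  = 1000;              /* chosen at runtime */
    s.data = malloc(s.len);
    if (s.data == NULL)
        return 1;
    memset(s.data, 'x', s.len);

    /* sizeof reports the compile-time size of the type (a size_t plus a
       pointer, plus any padding), not the 1000 bytes owned at runtime. */
    printf("sizeof(struct my_string) = %zu\n", sizeof(struct my_string));
    printf("sizeof s.data            = %zu\n", sizeof s.data);

    free(s.data);
    return 0;
}
```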