When to Target Very Old C Standards for New Projects


Occasionally I see major, relatively new open source C projects targeting very old C standards, typically C89. An example is systemd. These projects have intelligent people at the helm, so they presumably have a good rationale for this decision that I don't know about. That benefit of the doubt aside, it almost seems like the rationale is "older and standardized is always more portable and better," which is ridiculous: the logical conclusion would be that FORTRAN is better than C, and COBOL is even better than FORTRAN.

When and why is it justified for new C projects to target very old C standards?

I can't imagine a scenario where a user's system absolutely must not update its C compiler but is otherwise free to install new software. The LTS version of Debian, for example, ships a GCC 4.6 package, which supports C99 and some of C11. I suppose that strange scenario must exist, though, and programs like systemd are targeting those users.

The most reasonable use case I can imagine is one where users are expected to be on exotic architectures for which only a C89 compiler is available, yet are fully willing to install new software. Given the decline in the diversity of instruction set architectures, that seems like an excessively hypothetical scenario, but I'm not sure.

Best Answer

... "older and standardized is always more portable and better" which is ridiculous ...

That statement reached "ridiculous" when it got to "better," which is completely subjective. You don't select a language and standard for a project because half the people at the last meetup you went to were using it; you pick it because you've studied and understood the problem you're solving and determined that it's the right tool for the job.

For standards in general, there's a case to be made on some projects for portability, and that's where selecting an older one has some benefit. This is especially true when you're developing libraries as products, which are a means to someone else's end. The last thing you want to do is write something you can't sell because it requires a compiler that customers you haven't met yet may not have available. Philip Kendall's comment about the embedded world is spot on: there's a lot of that going around, either because people still have to write new code for old, stable platforms, or because their platforms don't benefit from the extra features and never get an up-to-date compiler. When you're in complete control of every aspect of your project, there's no reason not to use the latest standard the environment can support.
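
To make that concrete, here's a minimal sketch of the usual trick for library headers: key everything off __STDC_VERSION__, which C99 and later compilers define but C89 compilers do not, so a single header serves both old and new toolchains. The MYLIB_* macro names are hypothetical:

    /* Use the newest keywords available, but fall back cleanly
       when the customer only has a C89 compiler. */
    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    #define MYLIB_INLINE   inline     /* C99 added inline...    */
    #define MYLIB_RESTRICT restrict   /* ...and restrict        */
    #else
    #define MYLIB_INLINE              /* C89: compile them away */
    #define MYLIB_RESTRICT
    #endif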

For C specifically, there's the question of what you get in exchange for adherence to the latest standard. The K&R-to-C89 transition was a big change that required a lot of effort to clean up old code, but it ultimately did a lot of good. The changes in C99 and C11 are minor in comparison, and most of the recently-developed C I encounter would still compile as C89 because it doesn't use the new features. It's hard to argue that mandating C99 over C89 is the right thing to do just because it supports one-line comments, has a native Boolean type, and can do variable-length arrays (VLAs). The comments and Booleans have non-ugly workarounds, and VLAs can be handled in other, slightly less efficient ways. C11 demoted VLAs to optional, and that might be justification for choosing the older C99 if they figure prominently in your implementation.
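
For illustration, a minimal sketch of those workarounds in strict C89: block comments only, a typedef standing in for C99's <stdbool.h>, and a heap allocation in place of a VLA. The names bool_t and process are hypothetical:

    #include <stdlib.h>

    typedef int bool_t;   /* C89 stand-in for C99's bool */
    #define TRUE  1
    #define FALSE 0

    static bool_t process(size_t n)
    {
        /* C89 rejects "double buf[n];", so allocate on the heap
           instead; same effect, at the cost of a malloc/free pair
           and a failure case to check. */
        double *buf = malloc(n * sizeof *buf);
        if (buf == NULL)
            return FALSE;
        /* work with buf[0] through buf[n - 1] here */
        free(buf);
        return TRUE;
    }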