Yes, Virginia, there is a Santa Claus.
The notion of using programs to modify programs has been around a long time. The original idea came from John von Neumann in the form of stored-program computers. But machine code modifying machine code in arbitrary ways is pretty inconvenient.
People generally want to modify source code. This is mostly realized in the form of program transformation systems (PTS).
PTS generally offer, for at least one programming language, the ability to parse source text to ASTs, manipulate the AST, and regenerate valid source text. If you dig around, you'll find that for most mainstream languages somebody has built such a tool with a procedural API that is actually pretty useful (Clang is an example for C++, the Java compiler offers this capability as an API, Microsoft offers Roslyn, Eclipse has its JDT, ...). More broadly, almost every language-specific community can point to something like this, implemented with varying levels of maturity (usually modest; many are "just parsers producing ASTs"). Happy metaprogramming.
[There's a reflection-oriented community that tries to do metaprogramming from inside the programming language itself, but it only achieves "runtime" behavior modification, and only to the extent that the language's compilers made information available via reflection. With the exception of LISP, there are always details about the program that are not available by reflection ("Luke, you need the source"), and those gaps always limit what reflection can do.]
The more interesting PTS do this for arbitrary languages: you give the tool a language description as a configuration parameter, including at a minimum the BNF. Such PTS also let you do "source to source" transformation, i.e., specify patterns directly in the surface syntax of the targeted language; with such patterns you can match code fragments of interest, and/or find and replace them. This is far more convenient than a procedural API, because you don't have to know every microscopic detail of the ASTs to do most of your work. Think of this as meta-metaprogramming :-}
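For flavor, a surface-syntax rewrite rule looks roughly like this (paraphrased from published DMS examples; the exact syntax details may differ). The metavariables `\v` and `\e` match arbitrary fragments of the named syntactic categories:

```
rule simplify_conditional_assignment(v: left_hand_side, e: expression)
  : statement -> statement
  = " if (\e) \v = true; else \v = false; "
 -> " \v = \e; ";
```

The patterns on both sides are written in the target language's own syntax, so the rule author never has to name AST node types at all.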
A downside: unless the PTS offers various kinds of useful static analyses (symbol tables, control and data flow analyses), it is hard to write really interesting transformations this way, because you need to check types and verify information flows for most practical tasks. Unfortunately, this capability is in fact rare among general PTS. (It is always unavailable with the ever-proposed "If I just had a parser..." See my bio for a longer discussion of "Life After Parsing".)
There's a theorem that says if you can do string rewriting [and thus tree rewriting] you can do arbitrary transformation, and a number of PTS lean on this to claim you can metaprogram anything with just the tree rewrites they offer. While the theorem is satisfying in the sense that you are now sure you can do anything, it is unsatisfying in the same way that a Turing Machine's ability to do anything doesn't make programming a Turing Machine the method of choice. (The same holds for systems with only procedural APIs, provided they let you make arbitrary changes to the AST [and in fact I think this is not true of Clang].)
What you want is the best of both worlds: a system that offers you the generality of the language-parameterized type of PTS (even handling multiple languages), with additional static analyses and the ability to mix source-to-source transformations with procedural APIs. I only know of two that do this:
- Rascal (MPL) MetaProgramming Language
- our DMS Software Reengineering Toolkit
Unless you want to write the language descriptions and static analyzers yourself (for C++ this is a tremendous amount of work, which is why Clang was constructed both as a compiler and as a general procedural metaprogramming foundation), you will want a PTS with mature language descriptions already available. Otherwise you will spend all your time configuring the PTS and none doing the work you actually wanted to do. [If you pick a random, non-mainstream language, this step is very hard to avoid.]
Rascal tries to do this by co-opting "OPP" (Other People's Parsers), but that doesn't help with the static-analysis part. I think they have Java pretty well in hand, but I'm quite sure they don't do C or C++. Still, it's an academic research tool; hard to blame them.
I emphasize that our [commercial] DMS tool does have full Java, C, and C++ front ends available. For C++, it covers almost everything in C++14, including the GCC and even Microsoft variations (and we are polishing it now), with macro expansion and conditional management, and method-level control and data flow analysis. And yes, you can specify grammar changes in a practical way; we built a custom VectorC++ system for a client that radically extended C++ with what amount to F90/APL data-parallel array operations. DMS has been used to carry out other massive metaprogramming tasks on large C++ systems (e.g., application architecture reshaping). (I am the architect behind DMS.)
Happy meta-metaprogramming.
Best Answer
Sure, if you're OK with using macros in the first place, then defining a parameterized one rather than repeating the same conditional code is certainly preferable by any measure of good coding.
Should you use macros at all? In my view, yes: they are accepted practice in C, and any macro-less solution would require at least something being executed even outside of debug mode. The typical C programmer will pick a slightly ugly macro over unnecessary run-time cost every time.