Yes, Virginia, there is a Santa Claus.
The notion of using programs to modify programs has been around a long time. The original idea came from John von Neumann in the form of stored-program computers. But machine code modifying machine code in arbitrary ways is pretty inconvenient.
People generally want to modify source code. This is mostly realized in the form of program transformation systems (PTS).
PTS generally offer, for at least one programming language, the ability to parse source text to ASTs, manipulate those ASTs, and regenerate valid source text. If you dig around, you will find that for most mainstream languages somebody has built such a tool (Clang is an example for C++, the Java compiler offers this capability as an API, Microsoft offers Roslyn, Eclipse has JDT, ...) with a procedural API that is actually pretty useful. Beyond the mainstream, almost every language-specific community can point to something like this, implemented with varying levels of maturity (usually modest; many are "just parsers producing ASTs"). Happy metaprogramming.
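As a small illustration of the procedural-API style (using Python's standard-library `ast` module, which is one such per-language facility; the variable names are just for the example): parse, walk and mutate the tree, then regenerate source.

```python
import ast

# Procedural metaprogramming: parse to an AST, manipulate it, regenerate source.
source = "total = price + tax"

class RenameVar(ast.NodeTransformer):
    """Rename every occurrence of one identifier in the tree."""
    def visit_Name(self, node):
        if node.id == "tax":
            node.id = "vat"
        return node

tree = ast.parse(source)
tree = RenameVar().visit(tree)
print(ast.unparse(tree))  # total = price + vat
```

Note how even this trivial change requires knowing the node types (`Name`, its `id` field) of the particular AST design; that observation is what motivates the pattern-based systems discussed below.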
[There's a reflection-oriented community that tries to do metaprogramming from inside the programming language, but this only achieves "runtime" behaviour modification, and only to the extent that the language compiler makes some information available by reflection. With the exception of LISP, there are always details about the program that are not available by reflection ("Luke, you need the source"), and these limit what reflection can do.]
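Python shows both halves of that aside in a few lines: runtime reflection can rebind behaviour, but source-level detail is only available where the interpreter kept it, and not at all for C-implemented builtins. (A sketch; the `Greeter` class is made up for the example.)

```python
import inspect

# Runtime reflection can inspect and even modify behaviour...
class Greeter:
    def greet(self):
        return "hello"

g = Greeter()
Greeter.greet = lambda self: "hi"   # runtime behaviour modification
print(g.greet())                    # hi

# ...and can recover source for pure-Python functions:
print(inspect.getsource(inspect.getmodulename).splitlines()[0])

# ...but a C-implemented builtin has no source to reflect over:
try:
    inspect.getsource(len)
except TypeError:
    print("no source available for builtins")
```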
The more interesting PTS do this for arbitrary languages (you give the tool a language description as a configuration parameter, including at a minimum the BNF). Such PTS also allow you to do "source to source" transformation, e.g., specify patterns directly using the surface syntax of the targeted language; using such patterns, you can match code fragments of interest, and/or find and replace code fragments. This is far more convenient than the programming API, because you don't have to know every microscopic detail of the ASTs to do most of your work. Think of this as meta-metaprogramming :-}
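A toy version of the idea (real PTS pattern engines are vastly richer): a rewrite rule written in the target language's own surface syntax, with metavariables, compiled to an AST template and matched structurally. This sketch uses Python as both the subject and implementation language, with names beginning with `_` acting as metavariables; that convention is invented for the example.

```python
import ast

def match(pattern, node, bindings):
    """Structurally match an AST pattern against a node.
    Names starting with '_' in the pattern are metavariables."""
    if isinstance(pattern, ast.Name) and pattern.id.startswith("_"):
        bindings[pattern.id] = node
        return True
    if type(pattern) is not type(node):
        return False
    for field, pat_val in ast.iter_fields(pattern):
        node_val = getattr(node, field, None)
        if isinstance(pat_val, ast.AST):
            if not match(pat_val, node_val, bindings):
                return False
        elif isinstance(pat_val, list):
            if len(pat_val) != len(node_val):
                return False
            for p, n in zip(pat_val, node_val):
                if isinstance(p, ast.AST):
                    if not match(p, n, bindings):
                        return False
                elif p != n:
                    return False
        elif pat_val != node_val:
            return False
    return True

# The rewrite rule, stated in surface syntax: "_x + 0"  =>  "_x"
pattern = ast.parse("_x + 0", mode="eval").body

class ApplyRule(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)          # rewrite bottom-up
        bindings = {}
        if match(pattern, node, bindings):
            return bindings["_x"]
        return node

tree = ast.parse("y = (a * b) + 0")
tree = ast.fix_missing_locations(ApplyRule().visit(tree))
print(ast.unparse(tree))  # y = a * b
```

The point of the pattern notation is visible even at this scale: the rule author wrote `_x + 0`, not `BinOp(left=..., op=Add(), right=Constant(0))`.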
A downside: unless the PTS offers various kinds of useful static analyses (symbol tables, control and data flow analyses), it is hard to write really interesting transformations this way, because you need to check types and verify information flows for most practical tasks. Unfortunately, this capability is in fact rare among general PTS. (It is always unavailable with the ever-proposed "If I just had a parser..."; see my bio for a longer discussion of "Life After Parsing".)
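To make the point concrete, here is a sketch (in Python again, purely for illustration; the point is language-neutral) of why even a near-trivial transformation needs analysis first: propagating a constant into its uses is only safe if a symbol-table-like pass proves the name is assigned exactly once.

```python
import ast
from collections import Counter

def assignment_counts(tree):
    """Crude symbol-table pass: count assignments per name."""
    counts = Counter()
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    counts[target.id] += 1
    return counts

def propagate_constant(source, name):
    """Replace reads of `name` with its constant value, but only
    after the analysis proves the transformation is safe."""
    tree = ast.parse(source)
    if assignment_counts(tree)[name] != 1:
        return source  # unsafe: name is reassigned, skip the rewrite
    value = None
    for node in ast.walk(tree):
        if (isinstance(node, ast.Assign)
                and isinstance(node.targets[0], ast.Name)
                and node.targets[0].id == name
                and isinstance(node.value, ast.Constant)):
            value = node.value
    if value is None:
        return source  # not bound to a constant literal

    class Substitute(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id == name and isinstance(node.ctx, ast.Load):
                return ast.copy_location(ast.Constant(value.value), node)
            return node

    return ast.unparse(Substitute().visit(tree))

print(propagate_constant("k = 3\nprint(k + 1)", "k"))           # rewritten
print(propagate_constant("k = 3\nk = 4\nprint(k + 1)", "k"))    # left alone
```

Without the counting pass, the second program would be silently miscompiled; scale that up to types, aliases, and control flow, and you see why "just a parser" is not enough.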
There's a theorem that says if you can do string rewriting [thus tree rewriting] you can do arbitrary transformation; and thus a number of PTS lean on this to claim you can metaprogram anything with just the tree rewrites they offer. While the theorem is satisfying in the sense you are now sure you can do anything, it is unsatisfying in the same way that a Turing Machine's ability to do anything doesn't make programming a Turing Machine the method of choice. (The same holds true for systems with just procedural APIs, if they will let you make arbitrary changes to the AST [and in fact I think this is not true of Clang]).
What you want is the best of both worlds: a system that offers the generality of the language-parameterized kind of PTS (even handling multiple languages), along with the additional static analyses and the ability to mix source-to-source transformations with procedural APIs. I only know of two that do this:
- Rascal (MPL) MetaProgramming Language
- our DMS Software Reengineering Toolkit
Unless you want to write the language descriptions and static analyzers yourself (for C++ this is a tremendous amount of work, which is why Clang was constructed both as a compiler and as a general procedural metaprogramming foundation), you will want a PTS with mature language descriptions already available. Otherwise you will spend all your time configuring the PTS, and none doing the work you actually wanted to do. [If you pick a random, non-mainstream language, this step is very hard to avoid.]
Rascal tries to do this by co-opting "OPP" (Other People's Parsers), but that doesn't help with the static analysis part. I think they have Java pretty well in hand, but I'm very sure they don't do C or C++. But it's an academic research tool; hard to blame them.
I emphasize, our [commercial] DMS tool does have full Java, C, and C++ front ends available. For C++, it covers almost everything in C++14 for GCC and even Microsoft's variations (which we are polishing now), including macro expansion and conditional management, and method-level control and data flow analysis. And yes, you can specify grammar changes in a practical way; we built a custom VectorC++ system for a client that radically extended C++ with what amounts to F90/APL-style data-parallel array operations. DMS has been used to carry out other massive metaprogramming tasks on large C++ systems (e.g., application architectural reshaping). (I am the architect behind DMS.)
Happy meta-metaprogramming.
CI-driven development is fine! This is a lot better than not running tests and including broken code! However, there are a couple of things to make this easier on everyone involved:
Set expectations: Have contribution documentation that explains that CI often finds additional issues, and that these will have to be fixed before a merge. Perhaps explain that smallish, local changes are more likely to work well – so splitting a large change into multiple PRs can be sensible.
Encourage local testing: Make it easy to set up a test environment for your system. A script that verifies that all dependencies have been installed? A Docker container that's ready to go? A virtual machine image? Does your test runner have mechanisms that allow more important tests to be prioritized?
Explain how to use CI for themselves: Part of the frustration is that this feedback only comes after submitting a PR. If the contributors set up CI for their own repositories, they'll get earlier feedback – and produce less CI notifications for other people.
Resolve all PRs, either way: If something cannot be merged because it is broken, and if there's no progress towards getting the problems fixed, just close it. These abandoned open PRs just clutter up everything, and any feedback is better than just ignoring the issue. It is possible to phrase this very nicely, and make it clear that of course you'd be happy to merge when the problems are fixed. (see also: The Art of Closing by Jessie Frazelle, Best Practices for Maintainers: Learning to say no)
Also consider making these abandoned PRs discoverable so that someone else can pick them up. This may even be a good task for new contributors, if the remaining issues are more mechanical and don't need deep familiarity with the system.
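On the "encourage local testing" point, even a tiny preflight script lowers the barrier for contributors. A minimal sketch (the entries in `REQUIRED` are hypothetical placeholders; substitute your project's real dependencies):

```python
import importlib.util
import sys

# Hypothetical dependency list for the example; replace with real ones.
REQUIRED = ["unittest", "json", "sqlite3"]

def preflight(modules):
    """Report which required modules are importable, and return the
    missing ones so humans and scripts get the same signal."""
    missing = [m for m in modules if importlib.util.find_spec(m) is None]
    for m in modules:
        status = "MISSING" if m in missing else "ok"
        print(f"{m:12} {status}")
    return missing

if __name__ == "__main__":
    if preflight(REQUIRED):
        sys.exit("test environment incomplete: install the missing modules")
```

Contributors run this before the test suite; CI can run it too, so a broken environment is distinguishable from a broken change.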
From a long-term perspective, the fact that changes so often seem to break unrelated functionality could mean that your current design is a bit problematic. For example, do the plugin interfaces properly encapsulate the internals of your core? C++ makes it easy to accidentally leak implementation details, but also makes it possible to create strong abstractions that are very difficult to misuse. You can't change this overnight, but you can shepherd the long-term evolution of the software towards a less fragile architecture.