Writing a Compiler Compiler – Insight on Use and Features

Tags: code-generation, compiler, feature-requests, language-features, parsing

This is part of a series of questions focused on a sister project to the Abstraction Project, a framework that aims to abstract the concepts used in language design. The sister project, OILexer, aims to construct a parser from grammar files without the use of code injection on matches.

Other pages associated with these questions cover structural typing (viewable here) and ease of use (found here). The meta-topic about the framework and the proper place to post can be found here.

I'm getting to the point where I'm about to start extracting the parse tree out of a given grammar, followed by a recursive-descent parser that uses DFAs to discern forward paths (similar to ANTLR 4's LL(*)), so I figured I'd open it up to get insight.
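
To make that concrete, here is a rough, hypothetical sketch of how a DFA could drive prediction in a recursive-descent parser; the names (TokenKind, DfaState, PredictionDfa) are illustrative and not OILexer's actual API.

```csharp
// Rough sketch of DFA-driven prediction for a recursive-descent parser.
// All names here (TokenKind, DfaState, PredictionDfa) are illustrative,
// not OILexer's actual API.
using System.Collections.Generic;

enum TokenKind { Identifier, Number, LeftParen, RightParen, EndOfFile }

sealed class DfaState
{
    // -1 means "no decision yet"; otherwise the index of the predicted alternative.
    public int PredictedAlternative = -1;
    public Dictionary<TokenKind, DfaState> Transitions = new();
}

sealed class PredictionDfa
{
    public DfaState Start = new();

    // Walk the DFA over the lookahead tokens until a state decides an alternative.
    public int Predict(IReadOnlyList<TokenKind> lookahead)
    {
        DfaState state = Start;
        foreach (var token in lookahead)
        {
            if (state.PredictedAlternative >= 0)
                return state.PredictedAlternative;
            if (!state.Transitions.TryGetValue(token, out var next))
                return -1; // no viable alternative on this lookahead
            state = next;
        }
        return state.PredictedAlternative;
    }
}
```

A generated rule method would call Predict on the upcoming token buffer and switch on the result, rather than trying each alternative and backtracking.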

In a parser compiler, what kinds of features are ideal?

So far here is a brief overview of what's implemented:

  1. Templates
  2. Look-ahead prediction: knowing what's valid at a given point.
  3. Rule 'deliteralization': taking the literals within rules and resolving which token they come from.
  4. Nondeterministic Automata
  5. Deterministic Automata
  6. Simple lexical state machine for token recognition
  7. Token automaton methods:
    • Scan – useful for comments: Comment := "/*" Scan("*/");
    • Subtract – useful for identifiers: Identifier := Subtract(IdentifierBody, Keywords);
      • Ensures the identifier doesn't accept keywords. (A generic sketch of one way such subtraction can be implemented appears after this list.)
    • Encode – encodes an automaton as a series of X base-N transitions.
      • UnicodeEscape := "\\u" BaseEncode(IdentifierCharNoEscape, 16, 4);
        • Makes a Unicode escape in hexadecimal, using four hex-digit transitions. The difference between this and [0-9A-Fa-f]{4} is that the automaton produced by Encode limits the allowed set of hexadecimal values to the scope of IdentifierCharNoEscape. So if you give it \u005c, the Encode version will not accept the value. Things like this come with a serious caveat: use sparingly, because the resulting automaton can be quite complex.
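
For illustration of the Subtract point above, here is a generic sketch of one conventional way such an operation can be realized, as a product construction over two DFAs in which a paired state accepts only when the first machine accepts and the second does not. This is not OILexer's implementation, and the Dfa type shown is hypothetical.

```csharp
// Generic sketch of automaton subtraction (the language of `left` minus the
// language of `right`) via the product construction: a paired state accepts
// iff the left DFA accepts and the right DFA does not. The Dfa type here is
// illustrative and assumes both machines are total over `alphabet`.
using System.Collections.Generic;

sealed class Dfa
{
    public int StartState;
    public HashSet<int> AcceptStates = new();
    public Dictionary<int, Dictionary<char, int>> Transitions = new();
}

static class DfaOps
{
    public static Dfa Subtract(Dfa left, Dfa right, IEnumerable<char> alphabet)
    {
        var result = new Dfa();
        var ids = new Dictionary<(int Left, int Right), int>();
        var work = new Queue<(int Left, int Right)>();

        int IdOf((int Left, int Right) pair)
        {
            if (!ids.TryGetValue(pair, out int id))
            {
                id = ids.Count;
                ids[pair] = id;
                work.Enqueue(pair);
                // Accept only where the left machine accepts and the right does not.
                if (left.AcceptStates.Contains(pair.Left) &&
                    !right.AcceptStates.Contains(pair.Right))
                    result.AcceptStates.Add(id);
            }
            return id;
        }

        result.StartState = IdOf((left.StartState, right.StartState));
        while (work.Count > 0)
        {
            var pair = work.Dequeue();
            int from = ids[pair];
            var row = result.Transitions[from] = new Dictionary<char, int>();
            foreach (char c in alphabet)
                row[c] = IdOf((left.Transitions[pair.Left][c],
                               right.Transitions[pair.Right][c]));
        }
        return result;
    }
}
```

Fed the identifier-body automaton and the keywords automaton, this yields a machine that matches identifiers while rejecting anything the keyword machine would accept.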

What isn't implemented is CST generation; I need to adjust the deterministic automata to carry over the proper context to get this working.
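
To give a sense of what that carried-over context would feed into, a concrete-syntax-tree node might look roughly like the following; the shape is hypothetical, not the framework's actual types.

```csharp
// Hypothetical sketch of a concrete-syntax-tree node: each node records the
// rule or token that produced it plus its source span, so later phases can
// report errors against the original text.
using System.Collections.Generic;

readonly struct SourceSpan
{
    public readonly int Line;
    public readonly int Column;
    public readonly int Length;

    public SourceSpan(int line, int column, int length)
    {
        Line = line;
        Column = column;
        Length = length;
    }
}

sealed class CstNode
{
    public string RuleOrTokenName { get; }
    public SourceSpan Span { get; }
    public IReadOnlyList<CstNode> Children { get; }
    public string Text { get; }   // token (leaf) text; empty for rule nodes

    public CstNode(string ruleOrTokenName, SourceSpan span,
                   IReadOnlyList<CstNode> children, string text = "")
    {
        RuleOrTokenName = ruleOrTokenName;
        Span = span;
        Children = children;
        Text = text;
    }
}
```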

For anyone interested, I've uploaded a pretty-printed version of the original form of the T*y♯ project. Each file should link to every other file. I started to link in the individual rules so you could follow them, but it would've taken far too long (it would've been simpler to automate!).

If more context is needed, please post accordingly.

Edit 5-14-2013:
I've written code to create GraphViz graphs for the state machines within a given language. Here is a GraphViz digraph of the AssemblyPart. The members linked in the language description should have a rulename.txt in their relative folder with the digraph for that rule. Some of the language description has changed since I posted the example, due to simplifications to the grammar. Here's an interesting GraphViz image.
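
Generating such graphs is mostly a matter of walking an automaton's transitions and printing DOT text; a minimal, generic sketch (not the code used for the graphs above) might look like this:

```csharp
// Minimal, generic sketch of dumping a state machine to GraphViz DOT text.
// The automaton shape taken here is illustrative, not OILexer's actual model.
using System.Collections.Generic;
using System.Text;

static class DotWriter
{
    public static string ToDot(
        string name,
        int startState,
        IReadOnlyCollection<int> acceptStates,
        IEnumerable<(int From, string Label, int To)> edges)
    {
        var sb = new StringBuilder();
        sb.AppendLine($"digraph {name} {{");
        sb.AppendLine("  rankdir=LR;");
        foreach (int accept in acceptStates)
            sb.AppendLine($"  q{accept} [shape=doublecircle];");
        sb.AppendLine($"  start [shape=point]; start -> q{startState};");
        foreach (var (from, label, to) in edges)
            sb.AppendLine($"  q{from} -> q{to} [label=\"{label}\"];");
        sb.AppendLine("}");
        return sb.ToString();
    }
}
```

Feeding the output to GraphViz (for example, dot -Tsvg) produces images like the ones linked above.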

Best Answer

I've been working on a lot of parsing recently, and some of the key features are:

  • A programmatic API -- so it can be used from within a programming language, ideally by simply importing a library. It can have a GUI or BNF-like interface as well, but the programmatic one is the key, because you can re-use your tooling, IDE, static analysis, testing, language abstraction facilities, programmer familiarity, documentation generator, build process, etc. Plus, you can interactively play with tiny parsers, which dramatically reduces the learning curve. These reasons place it at the top of my "important features" list.

  • Error reporting, as @guysherman mentioned. When an error is found, I want to know where the error was and what was going on when it happened. Unfortunately, I haven't been able to find good resources explaining how to generate decent errors when backtracking comes into play. (Although note @Sk-logic's comment below.) A rough sketch of the kind of error/result object I have in mind appears after this list.

  • Partial results. When parsing fails, I want to be able to see what was successfully parsed from the input before the location of the error.

  • Abstraction. You can never build in enough functions, and the user will always need more, so trying to figure out all the possible functions up-front is doomed to failure. Is this what you mean by templates?

  • I agree with your #2 (look-ahead prediction). I think it helps to generate good error reports. Is it useful for anything else?

  • Support for building a parse tree as parsing occurs, perhaps:

    • a concrete syntax tree, for which the structure of the tree corresponds directly to the grammar and includes layout information for error reporting of later phases. In this case, the client shouldn't have to do anything to get the right tree structure -- it should depend directly on the grammar.
    • an abstract syntax tree. In this case, the user is able to fiddle with any and all parse trees.
  • Some kind of logging. I'm not sure about this one; maybe to show a trace of the rules the parser has tried, or to keep track of junk tokens such as whitespace or comments, in case (for instance) you want to use the tokens to generate HTML documentation.

  • Ability to deal with context sensitive languages. Not sure how important this one is, either -- in practice, it seems cleaner to parse a superset of a language with a context-free grammar, then to check context-sensitive constraints in additional later passes.

  • Custom error messages, so that I can tune the error reports in specific situations and perhaps more quickly understand and fix problems.
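
To make the error-reporting and partial-results points concrete, the kind of result object I have in mind might look roughly like this; the shape and names are hypothetical, not any particular tool's API.

```csharp
// Hypothetical sketch of what a parse result could expose: a precise error
// location, what was expected, the rule stack at the failure point, and
// whatever was parsed successfully before the error. Not any tool's real API.
using System.Collections.Generic;

sealed class ParseError
{
    public int Line { get; init; }
    public int Column { get; init; }
    public string UnexpectedToken { get; init; } = "";
    public IReadOnlyList<string> ExpectedTokens { get; init; } = new List<string>();
    public IReadOnlyList<string> RuleStack { get; init; } = new List<string>();

    public override string ToString() =>
        $"({Line},{Column}): unexpected '{UnexpectedToken}', expected " +
        $"{string.Join(" or ", ExpectedTokens)} while in {string.Join(" > ", RuleStack)}";
}

sealed class ParseResult<TTree> where TTree : class
{
    // Whatever was successfully built before the first error; null if nothing was.
    public TTree PartialTree { get; init; }
    public IReadOnlyList<ParseError> Errors { get; init; } = new List<ParseError>();
    public bool Succeeded => Errors.Count == 0;
}
```

With something like this, a caller can print precise diagnostics and still walk whatever tree was built up to the failure point.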

On the other hand, I don't find error-correction particularly important -- although I'm not up to date on current progress. The problems I've noticed are that the potential corrections the tools supply 1) are too numerous, and 2) don't correspond to the actual mistakes made, so they aren't all that helpful. Hopefully this situation will improve (or perhaps has already done so).
