[antlr-interest] Re: Separating Grammar and Actions..

mzukowski at yci.com
Mon Jan 13 11:49:06 PST 2003


Getting into the deeper issue of AST design, I highly recommend studying the
CIL project: http://manju.cs.berkeley.edu/cil/.  It parses C (including the
GCC and MSVC dialects) into a small, clean subset of C.  All loops are
reduced to one form, and so on, which makes for a very clean way to do a
wide variety of transformations in as simple a way as possible.  Although it
is not written in ANTLR, the ideas behind it are very strong, and I
recommend studying that approach when building a transformation framework.
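
To make the normalization idea concrete in ANTLR terms: the parser can build
one canonical loop subtree no matter which surface form was written.  The
sketch below is only a rough illustration in ANTLR 2 notation, not anything
taken from CIL; the LOOP and NOP node types and all rule and token names are
invented for this example.

    class NormalizingParser extends Parser;
    options { buildAST = true; k = 2; }

    // Imaginary node types: LOOP is the single canonical loop form,
    // NOP fills an empty init or step slot.
    tokens { LOOP; NOP; }

    // Every loop comes out as #(LOOP init cond step body).  A while loop
    // is just a for loop with empty init and step parts, so later passes
    // only ever have to handle one loop shape.
    statement!
        :   "while" LPAREN c:expr RPAREN s:statement
            { #statement = #( #[LOOP,"loop"], #[NOP,"nop"], #c,
                              #[NOP,"nop"], #s ); }
        |   "for" LPAREN i:expr SEMI c2:expr SEMI u:expr RPAREN s2:statement
            { #statement = #( #[LOOP,"loop"], #i, #c2, #u, #s2 ); }
        ;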

Monty

-----Original Message-----
From: John D. Mitchell [mailto:johnm-antlr at non.net]
Sent: Monday, January 13, 2003 11:36 AM
To: antlr-interest at yahoogroups.com
Subject: [antlr-interest] Re: Separating Grammar and Actions..


>>>>> "cintyram" == cintyram <cintyram at yahoo com> <cintyram at yahoo.com>
writes:
[...]

> Actually, that [a macro preprocessor] is what I had in mind for an initial
> implementation, because to generate target code in different languages we
> should be able to specify the target language independently of the
> grammar.  With that decoupling in place, we can use ANTLR to generate
> target code for a given grammar in the same language as the action code
> is written in, and a grammar need not be tied to one language.  So we
> can write a macro for the "language =" option and replace it before
> running antlr.Tool [also, since ANTLR does not have any options specific
> to a given language, this should be possible].
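
To make that substitution idea concrete: the grammar file can carry a
placeholder where the language option goes, and any preprocessor can rewrite
it before antlr.Tool ever sees the file.  This is only a rough sketch; the
file name, the placeholder name, and the rule are invented for illustration.

    // calc.g.in -- template copy of the grammar; a preprocessor (sed, m4,
    // or similar) substitutes __TARGET_LANG__ and writes calc.g before
    // "java antlr.Tool calc.g" is run.
    options {
        language = "__TARGET_LANG__";   // e.g. "Java" or "Cpp"
    }

    class CalcParser extends Parser;
    options { buildAST = true; }

    expr
        :   INT (PLUS^ INT)*   // tree building only, no target-language actions
        ;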

I've remarked on this before but not in a year or two so... :-)

I'm very much of the opinion that, for any non-trivial translator, the only
thing the parser should do is build the AST and any auxiliary data
structures that only it can build, and nothing else.  I.e., no
syntax-directed translation in the parser for anything that's non-trivial.
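
In ANTLR 2 terms, that usually means a parser grammar whose only "actions"
are the tree-shaping operators.  A minimal sketch, with invented token and
rule names:

    class ExprParser extends Parser;
    options { buildAST = true; }

    // The parser only shapes the tree: ^ makes a token the subtree root,
    // ! drops punctuation.  No translation logic lives here.
    assignment
        :   ID ASSIGN^ expression SEMI!
        ;

    expression
        :   term (PLUS^ term)*
        ;

    term
        :   ID
        |   INT
        |   LPAREN! expression RPAREN!
        ;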

My arguments for this are analogous to why we (in this forum, anyway) use
lexers and grammar-based parsers instead of simplistic (though often
disgustingly complex :-) regular expressions.  The key is that just as the
lexer takes a simplistic stream and chunks it into lexemes, the parser is
the bridge between the lexemes and a rich, abstract IR.  Similarly we can
look at the initial AST as the starting point to get us to the IR that we
want.  For many translators, that's just making passes over the AST and
doing annotation, pruning, rearrangement, and some insertions.  For others,
that's a basis from which to build additional auxiliary structures
(e.g., def-use and use-def chains) or more specialized IRs (e.g.,
machine-specific tuple formats).
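
In ANTLR 2, those later passes map naturally onto tree grammars that walk
the AST the parser built.  The sketch below only hints at a def/use pass
over trees shaped like the parser sketch above; real chains would need
scopes and control flow, and the node names are assumptions for
illustration.

    class DefUsePass extends TreeParser;

    // One pass over the AST: note definitions and uses as we walk.
    statement
        :   #(ASSIGN lhs:ID expression)
            { System.out.println("def of " + lhs.getText()); }
        |   expression
        ;

    expression
        :   #(PLUS expression expression)
        |   id:ID
            { System.out.println("use of " + id.getText()); }
        |   INT
        ;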

FWIW,
	John


 
