[antlr-interest] Re: Translators Should Use Tree Grammars

atripp54321 atripp at comcast.net
Mon Nov 22 20:02:13 PST 2004



[SNIP]

> No. I said your application only requires a "typical" symbol table.
> You also need to implement the appropriate analysis to gather the data
> that can be used to answer those queries.

My feeling is that this analysis code dwarfs the symbol-table
code in my app. In other words, the symbol table provides a set of
methods which is just a small subset of a larger library.

> 
> > > You are performing static
> > > code analysis and there are established techniques and literature on
> > > the subject...
> > 
> > Yes, and I'm saying that I've tried the established techniques
> > and found them severely lacking.
> 
> You may be right but, perhaps you haven't tried the right ones?

Yea, maybe. 

> 
> > > > You're probably thinking that these would not go into the symbol
> > > > table, I would just have to write AST-searching code for that.
> > > > My point is that by the time I've got this huge library of
> > > > AST-searching code to do these things, the symbol table is
> > > > superfluous.
> 
> <SNIP>
> 
> > I don't think you're addressing what I said: I'm saying that
> > if I make my symbol table so complicated that it contains the
> > information that I need (e.g. answers those questions above
> > plus dozens of others), then the symbol table has so
> > much functionality that it's basically a huge library
> > with most of the app's functionality.
> 
> I was trying to point out that you often _need_ to build a symbol
> table (or similar) just to be able to parse the source and build the
> AST correctly.

Sure.

> 
> The AST searching can be done by specifying patterns to match in your
> tree-grammar. AST transformations can be similarly accomplished.

Right. The whole big question here is whether the ANTLR
(or any other) AST matching-and-replacement syntax can do
all sorts of complicated matching and transformations that
happen in real-world translation projects, AND whether it's
easier to specify in ANTLR than in Java (or C++, or whatever).

One example comes to mind: I replace printf() with System.out.println()
if the last two characters of the first argument are "\n".
In Java, that's just "if (arg1AST.getText().endsWith("\n")) {"
Now I'm sure there's no "ANTLR-syntax" for that sort of thing,
and the solution is "just embed Java in that case". But my problem
is that I think that almost *all* the cases are like that, so I
have to embed *lots* of Java, and I'd have to end up doing what
it appears was done with that ASPA tool...separate out all that
Java from the grammar to keep my sanity. And that's kinda ugly
I think.
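
For concreteness, here's roughly what the matching side of that printf
rule looks like as plain Java. This is just a sketch: FUNCTION_CALL, the
child layout, and the assumption that the literal's text is stored
verbatim minus the quotes (so a trailing "\n" is the two characters '\'
and 'n') all depend on the grammar.

import antlr.collections.AST;

class PrintfRule {
    // placeholder; the real value comes from the generated token types
    static final int FUNCTION_CALL = 0;

    static boolean matches(AST call) {
        if (call.getType() != FUNCTION_CALL) return false;
        AST name = call.getFirstChild();       // assumed: first child is the callee name
        if (name == null || !"printf".equals(name.getText())) return false;
        AST arg1 = name.getNextSibling();      // assumed: next sibling is the first argument
        return arg1 != null && arg1.getText().endsWith("\\n");
    }
}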

> 
> > > > > 2a. Keeping the TreeStack and SymbolTable in sync with the AST
> > > > > ==============================================================
> > > 
> > > > No, we're not just talking about renaming local variables,
> > > > we're talking about hundreds of different transformations,
> > > > of which renaming local variables is just one example.
> > > 
> > > Fine. Provide "hooks" into your processing flow and plug in your
> > > "transformation objects". The rename point above might be one place
> > to
> > > provide a hook for rules that deal with local variables.
> > 
> > OK, and my app would best be served by having each node's
> > action be the same: check to see if this node in the AST
> > matches any of my "transformation rules", and do the
> > corresponding transformation. And of course, at that point,
> > I'm not really using the power of the treewalker approach.
> 
> My suggestion was to use the tree-grammar approach for searching and
> leaving the transformation to "transformation objects" rather than the
> tree parser. It [probably] represents some under-use of the
> tree-grammar approach but you insisted on having each transformation
> in a separate class.

But don't you want the searching and the transformation to be
"tightly coupled"? For example, I have hundreds of straightforward
rules that can just be specified with simple text like this:

int main(int v1, char *v2[]) --> public static void main(String[] args)

So the "matching pattern" and the "replacement pattern" are only
separated by "->". I'd hate to have rules where the "matching"
is specified in one place and the "replacement"
is in another. Even worse, the "matching" and "replacement" would
be in different languages. Not good.
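
(Keeping the two halves together is trivial when a rule is just a pair of
strings; something like this sketch, where the class name and the parsing
of the "-->" form are made up:)

// A rule is nothing more than its matching text and its replacement text,
// kept side by side. Turning the patterns into actual AST matchers is the
// hard part and isn't shown here.
class TextRule {
    final String match;     // e.g. "int main(int v1, char *v2[])"
    final String replace;   // e.g. "public static void main(String[] args)"

    TextRule(String line) {
        int sep = line.indexOf("-->");
        this.match = line.substring(0, sep).trim();
        this.replace = line.substring(sep + 3).trim();
    }
}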

> 
> > > > Well, there may be dozens of "rules" or "transformations" that
> > > > apply to global variables. I don't want one section of
> > > > code where global variable declarations are handled, with
> > > > bits and pieces of each of these "rules" embedded there.
> > > 
> > > This sounds like a "how do I architect a system that executes a set
> > > of rules against different parts of an AST?" question to me rather
> > > than a tree-library vs tree-grammar approach question.
> > 
> > Yes, that's the question exactly. And the answer is either
> > "use a tree-library" or "use a tree-grammar".
> 
> Not really. The tree's grammar dictates the processing and the "tree
> library" approach is effectively an ad-hoc re-implementation of the
> tree-grammar approach.

It's a re-implementation of the tree-matching-and-replacement
part. I wouldn't do top-down tree-walking at all.

"re-implementation" sounds negative, but the two approaches
are so different that it's not like it would be duplicate work.
It's like saying that C is a "re-implementation" of assembly,
as it can be used for the same set of problems. The real question
is "which one is the higher-level?"

> 
> Suppose you wanted to support user-defined transformation rules? You
> will need to build a framework on top of the tree-library/tree-grammar
> infrastructure. Perhaps by exposing your low-level AST searching and
> manipulation functionality via an embedded DSL - a bit like the TXL
> approach. That's neither a tree-library nor tree-grammar question.
> 
> > > > > 2b. Do you want methods in AST or TreeStack?
> > > > > ============================================
> 
> > > > Objection, your honor! The defense is mixing ANTLR grammar
> > > > with Java code again! Sorry :) Yo just cringe cada vez
> > > > veo languages mezclado together. It's very hard to read
> > > > if you don't know both languages.
> > > 
> > > Do you similarly cringe when you use ANTLR to build lexers and
> > > parsers?. Is there some reason why you feel tree-parsers are any
> > > different?
> > 
> > No, because there's no mixing of code there, it's all ANTLR.
> > And besides, I can (pretty much) use ANTLR without understanding
> > the grammar syntax - just use the .g files that came with it.
> 
> Most production-level ANTLR grammars have a mix of ANTLR directives
> and action code. Incidentally, I don't recommend using a tool without
> understanding it.

You don't recommend using a library, for example, where you
understand the thing that it returns, but not how it works?
For example, you wouldn't use a JPEG encoder library that
produces a .jpg image without understanding JPEG or the
algorithm it uses?

I say "new JButton("hello");" and use the JButton class without
any idea how it does its drawing. I really think it's ok
to use ANTLR to parse an C source into an AST without understanding
cgram.g and the ANTLR syntax that it follows. 
(That's not to say I don't).

> 
> > > > > 2c. Sometimes you need multiple passes
> > > > > ======================================
> 
> > > So you need an updatable symbol table? Build one. Provide delete()
> > > and perhaps rename() methods where they're needed and call them as
> > > appropriate during your transformations. Don't see a problem here,
> > > sorry.
> > 
> > By the time the symbol table contains all the functionality
> > I need, it's no longer a "symbol table" (at least not like
> > one I've ever heard of), it's a "tree library".
> 
> The attributes in the symbol table are accessible from the nodes they
> "belong to". Beyond that, there usually isn't much "tree handling or
> awareness" in symbol tables.
> 
> Symbol tables are usually built for fast querying of node attributes
> only and are often not updatable (there is no need since source files
> don't change during compilation). 

AHA! Yes! And there you have it! The ASTs DO change during
translation, they change dramatically and often...that's
what translation is.

> You can make them updatable if your
> applications need it.

And wouldn't every translator have
a constantly changing symbol table?
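
Just to pin down what I mean by "constantly changing": something like
this made-up interface, which is not anything ANTLR hands you (the method
names are hypothetical).

// Hypothetical shape of an updatable symbol table for a translator. The
// attribute record is left as Object; the point is only that entries get
// renamed and removed as the rules rewrite the AST.
interface UpdatableSymbolTable {
    Object lookup(String name, String scope);
    void define(String name, String scope, Object attributes);
    void rename(String oldName, String newName, String scope);  // after a rename rule fires
    void remove(String name, String scope);                      // after a declaration is deleted
}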

> 
> > > > > 4. Comparing approaches by analyzing ease of change
> > > > > ===================================================
> 
> > Right, but the whole point of the tree grammar is to minimize
> > the amount of code that you have to write. What's the point
> > of embedding 30,000 lines of code inside a 350 line grammar,
> > if you could have just written 30,020 lines that do the same
> > thing?
> 
> A little exaggerated, no doubt.

How do you mean? I do have about 30,000 lines of translation
code, and the "bare, do-nothing" C and Java treewalker grammars
are several hundred lines. And it wouldn't take more than
20 lines to do an in-order tree traversal.
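
For reference, here's about all the traversal code I'm talking about: a
bare depth-first walk over ANTLR 2's AST interface (visit() is just a
placeholder for whatever per-node action you want).

import antlr.collections.AST;

// A do-nothing tree walk: visit every node depth-first using only
// getFirstChild()/getNextSibling(). This is the part a generated
// treewalker would otherwise provide.
public class BareWalk {
    public static void walk(AST node) {
        for (AST n = node; n != null; n = n.getNextSibling()) {
            visit(n);
            walk(n.getFirstChild());
        }
    }

    static void visit(AST node) {
        System.out.println(node.getType() + ": " + node.getText());
    }
}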

> I should point out that tree parsers
> don't just walk the AST. They can pattern match within the AST and
> apply *transformations* too. The action code would typically be for
> semantic checks to determine appropriate transformations but, they can
> be [almost] anything you wish.

Yea, earlier I said something like "a treewalker just
walks the tree", and that's not right.

> 
> > > > So in that case, the treewalker approach isn't buying you
> > anything.
> > > 
> > > Time. You don't have to write the tree parsers by hand.
> > 
> > Only if a top-down walk of the tree is the basis of the
> > code that you need. If your code is something else (like
> > applying a series of pattern-matching rules), then
> > you have to write all the code even when using a tree grammar.
> 
> No. Tree parsers can pattern match using AST subtree patterns
> specified in the grammar. They can also perform AST transformations.
> But you'd need to learn ANTLR's tree parser syntax for that.

Yea, ok.

> 
> > > > And isn't that quite a bit more involved than:
> > > >     print(getChild(ast, PACKAGE_DEF));
> > > >     printChildren(ast, "\n",  IMPORT);
> > > >     out.println();
> > > >     out.println();
> > > >     print(getChild(ast, CLASS_DEF));    // only one of these...
> > > >     print(getChild(ast, INTERFACE_DEF));  // will print anything
> > > >     out.println();
> > > 
> > > These seven lines of code don't matter much. It's the other 1000s of
> > > tree-walking lines that the tree-library approach forces you to write
> > > that matter. In the tree-grammar approach, ANTLR generates all that.
> > 
> > No, it doesn't generate any of that.
> > The only thing the tree grammar approach is giving you is that
> > it's walking the tree for you. You still have to write all the
> > code that does everything else.
> 
> I've already pointed out that tree parsers do more than just walk the
> tree. They also search, replace, create and delete subtrees too. They
> can transform ASTs.

Oh, that's where I said that :(
Yea, I'm just not so sure the ANTLR tree-matching grammar
is cleaner than vanilla library-based code.

> 
> > > Incidentally, the tree-grammar snippet is lucid and equally concise
> > > (4 additional lines compared to the as-is print of the AST):
> > > 
> > > typeDefinition [PrintWriter output]
> > > { 
> > >   StringWriter classW  = new StringWriter();      //LINE 1
> > >   StringWriter ifaceW  = new StringWriter();      //LINE 2
> > > }
> > >   :  ( classDeclaration[classW]
> > >      | interfaceDeclaration[ifaceW]
> > >      )*
> > >      {
> > >        // swap to your heart's content
> > >        output.write(classW.toString());          //LINE 3
> > >        output.write(ifaceW.toString());          //LINE 4
> > >      }
> > >   ;
> > 
> > I'm sorry, I just don't think that's as clear.
> > Most of that's because I know Java much more than I know
> > ANTLR. But most people know Java (or C, C++, or C#) much
> > more than ANTLR.
> 
> Most people are not using ANTLR to build their systems. If they were,
> they'd [better] know ANTLR.

So most people won't want to use ANTLR to do AST matching
and transformations, even if they do use it (casually)
for lexing and parsing.

By analogy, Swing is inherently easier than SWT because it's
"just Java"; the developer doesn't have to learn a whole
new framework.

> 
> > > > > 5. Limitations of TreeWalkers
> > > > > =============================
> > > > > 
> > > > > isn't the real problem that "a[3]" is an array access while
> > "int[]
> > > > x"
> > > > > is an array declaration?.
> > > > 
> > > > No, the problem is that a C array declaration can take either
> > form:
> > > > int a[3];
> > > > int[3] a;
> > > 
> > > And they mean the same thing right?. Isn't the point of an AST to
> > > remove concrete syntax irregularities like this?. Can't see why you
> > > need to remember which of the variants the source had originally
> > > since Java code generation isn't affected by it.
> > 
> > That's the point of ASTs in theory, but not in practice.
> > This cgram grammar does generate different ASTs
> > for these two inputs, for example.
> 
> The cgram grammar was intended for source-to-source translation of C
> programs (if memory serves). In that case it's easy to understand why
> Monty chose to generate a pseudo-AST - the abstract syntax of C with
> some concrete syntax left in for faithful-ish reproduction of the
> original source.
> 
> For a C-to-Java translator, you could [should?] probably have removed
> this particular concrete-ness from the pseudo-AST. 

Yea, maybe...that's a different topic.

> 
> > > > The child of an ARRAY_DECLARATION is always an EXPR.
> > > 
> > > But "int[] x" has no expression....
> > 
> > Yes it does, it's just the empty expression - an EXPR with
> > no children.
> 
> The empty expression?. Well, writing a rule to differentiate this case
> from the other seems easy enough - check for the EXPR's children.

Yes, it's easy enough in Java, just say:

AST expr = ast.getFirstChild();
if (expr == null) {
    // no child expression: this is the "int[] x" case
}

...but how do you do that using the treewalker grammar?
Maybe you can, but it's one more thing that someone's
got to learn that they wouldn't have to learn if they didn't
use a treewalker.

> 
> > > My generated tree-grammar does the same ultimately. I just don't
> > have
> > > to write all the code manually.
> > 
> > Which code is it that you don't have to write because of the
> > tree grammar? Isn't it only some simple code that walks the
> > tree? I mean, isn't it little more than this:
> > void walk(AST ast) {
> >    doSomeAction(ast);
> >    Iterator i = ast.getChildren().iterator();
> >    while (i.hasNext()) {
> >      AST child = (AST) i.next();
> >      walk(child);
> >    }
> > }
> 
> It can be a lot more than an AST traversal. It depends on what I put
> into my grammar. See my earlier comments.

Can be, but is it typically a lot more for translators that
convert one high-level language to another?

> 
> > > > > 6. Contrasting the approaches
> > > > > =============================
> > > > > 
> > > > > 1. Code generation isn't magic. We all use it quite happily for
> > > > lexers
> > > > > and parsers for instance. The same benefits exist for tree
> > parsers.
> > > > 
> > > > We use ANTLR-like tools for lexers and parsers because the
> > > > code they generate is straightforward and generic. Given a
> > grammar,
> > > > you know exactly what the lexer and parser code should look like.
> > > 
> > > So you are unfamiliar with tree parsers. Sounds like you could
> > benefit
> > > from learning more about them.
> > 
> > I think I do understand tree parsers. I understand what code
> > they will generate. But the code they generate is not
> > the best basis to build a translation app on top of.
> > 
> > A lexer is just a lexer: the ANTLR-generated lexer does its
> > job, and your app can just deal with its output (a token stream).
> > A parser does its job so that you can deal with its output
> > (an AST). A treewalker is not providing a clear "output"
> > like a lexer or parser, it's just providing a framework to
> > automatically walk a tree, for you to embed your actions within.
> > My main point is that any sufficiently complex
> > translator will not have a top-down tree walk as its
> > underlying framework.
> 
> Both the input and the output of a tree parser (tree-parser !=
> tree-walker) is an AST. It is an AST-to-AST transformer. In the
> simplest case (the tree walker) it simply visits each node of the AST
> in depth-first order. The input and output ASTs are structurally the
> same.
> 
> In more complex cases, the AST would be transformed and the input and
> output ASTs would be structurally different.

Yes I know. I'm just not getting my point across :(

How about this...let's say you're picking apples from a real
tree. You put good apples into the basket and bad ones in
the garbage can. One way to pick the apples is with a tree
traversal...climb the trunk, go out on the first branch,
look at the first apple, etc.

But is that the most natural way to pick the apples?
Maybe not. You could just shake the tree and then sort
out the apples after they've fallen. If your algorithm
is the same at each apple (basically pattern-match: is it
a good apple or not?), then it really doesn't matter what order
you visit the apples in.

So which is a clearer description of how you pick apples?
This:
   trunk: branch*;
   branch: smallbranch*;
   smallbranch: tinybranch*;
   tinybranch: #(a:APPLE) { if (a.isGood()) pick(); else toss(); };

Or this:
// ("throw" is a reserved word in Java, so the reject action is toss() here)
void pickApples(AST ast) {
  if (ast.getType() == APPLE) {
    if (ast.isGood()) pick();
    else toss();
  } else {
    Iterator i = ast.getChildren().iterator();
    while (i.hasNext()) {
      pickApples((AST) i.next());
    }
  }
}


> 
> > > > However, with AST-to-AST transformation, it's not at all clear
> > > > what the functionality needs to be. For example, given a "C source
> > > > AST" to "Java source AST", we would all come up with different
> > > > algorithms to do that transformation.
> > > 
> > > As we could disagree about the structure of the AST we build in the
> > > parser. Or whether to build an AST at all. Or the names we give to
> > > our tokens in the lexer and how many there are. Or whether to use
> > > two lexers and a parser or just one of each etc....
> > 
> > Not really. Given the problem of lexing a single C source
> > file, we'd all choose the same solution: one lexer (generated from
> > some nice list of tokens). Given the problem of parsing a
> > single token-stream into an AST, we'd all choose the same solution:
> > a parser (generated from some BNF-like input grammar).
> 
> Not at all. I might be able to implement my system with the string
> handling shenanigans of sed/awk/perl etc. The problem isn't ever
> "lexing" or "parsing". It's building a source-to-source translator or,
> computing source code metrics etc.

Look, I said that lexers and parsers are candidates for
BNF-like specification because it's clear what the code is that
they should produce (it could be cryptic like lex/yacc or readable
like ANTLR, but the functionality of the generated code for
a lexer or parser is clear to all).

Translators do not fit that BNF-as-good-input mold because
we don't all agree on what the translator-producing code
should be doing. Should it be doing treewalking? Or should
it be doing rule-based-pattern matching? Should it be using
some natural language processing library? We know it should
be producing an AST for output, but what that AST should look
like is a huge open question. For a lexer, not only do we
know that the output is a token stream, we all know EXACTLY
what that token stream should look like for a given input.
Same is true (though not completely) for a parser.
Show me a C token stream and I'll show you the AST that it
should produce. But show me a C AST and try to figure out
what the "equivalent" Java AST should be. Ask 100 people
and you'll get 100 different outputs.

Yes, you don't need to use lexers and parsers, but that's
not the point. The point is that there's an inherent
difference between {lexers, parsers, compilers} and
{translators}.

> 
> > But given a single AST that represents a C program, some
> > would choose a treewalker to change it to a "Java AST", and
> > others would not.
> 
> It's rather more that some would use a tool to generate "tree walking,
> searching & transforming code" to do the translation. Others would
> choose to write the "tree walking, searching & transforming code" by
> hand as "tree library" code.

Right. And the code would be structured quite differently
as a library vs. as a tree grammar.

> 
> > > > We would all end up with
> > > > a set of "rules" like "Find all FUNCTION_DEF nodes with an IDENT
> > > > child with text 'main' and a PARAMS_DEF child that has two
> > children,
> > > > the first of which has a TYPE node with type 'int' ...
> > > > Does a symbol table help us with finding such a node?
> > > 
> > > Nodes are part of the AST. Symbol table stores node *attributes*.
> > 
> > So then your answer is "no, a symbol table does not contain
> > that information". OK. And what if most of my app deals with
> > that type of information that's not available in a symbol
> > table?
> 
> Your example rule is searching for "nodes" (actually AST subtree
> patterns). You would need to search the AST and not the symbol table.
> The node's "type" attribute might be stored in the symbol table.
> 
> 
> > > > I think the library approach is easier, especially if we can use
> > > > a preexisting nice, standard tree-search-library out there.
> > > 
> > > As Loring pointed out, trees for different apps are likely to be
> > very
> > > different indeed. The ANTLR or TreeDL approach of code generation is
> > > likely - proven, actually - to be very much more successful (and
> > > easier to use/reuse) than your generic library approach.
> > 
> > Can you give me some examples of uses of ANTLR treewalkers 
> > to do complex translations? Someone else mentioned ASPA,
> > which I'm investigating now.
> 
> ASPA is a good example. 

I just replied to that post.

> 
> This [student] project description might help too:
>   http://www.cc.gatech.edu/classes/AY2001/cs4240_fall/prj2/prj2.html
> 
> > OK, well I'm saying that a typical "translation rule" is going
> > to contain multiple phases. And it's going to be very hard
> > to keep the code for each rule separate from the others when
> > you have multiple phases.
> 
> You might want to consider separating analysis phases (that gather the
> info you need to make your synthesis decisions) from synthesis phases
> (that perform the actual transformations).

As I've said, too late for that, I'm done already :)
The problem is that when you've got hundreds of translation
rules, there's no point in doing an "analysis phase" ahead of time.
The analysis is (potentially) wrong after each rule firing.

Say we have a variable of type 'long'. I have lots of rules that
might change that type to an 'int', rename it, move the declaration,
delete it altogether, etc.
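
In other words, the analysis has to be redone as the rules fire. Roughly
like this sketch, where Rule, Analysis and analyze() are all hypothetical,
just enough to show the loop:

import antlr.collections.AST;
import java.util.Iterator;
import java.util.List;

// Why a single up-front analysis pass doesn't survive rule firing: once a
// rule changes the AST, anything derived from it (types, symbol info) is
// stale and has to be rebuilt before the next rule can be trusted.
class RuleDriver {
    interface Analysis { }                       // whatever the analysis pass produces
    interface Rule {
        boolean apply(AST root, Analysis info);  // true if it changed the AST
    }

    Analysis analyze(AST root) { return null; }  // stand-in for the real analysis pass

    void applyRules(AST root, List rules) {
        boolean changed = true;
        while (changed) {
            changed = false;
            Analysis info = analyze(root);       // recompute after every change
            for (Iterator i = rules.iterator(); i.hasNext(); ) {
                Rule r = (Rule) i.next();
                if (r.apply(root, info)) {
                    changed = true;
                    break;                       // analysis is stale now; start over
                }
            }
        }
    }
}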

> 
> > > > But I'm able to use the ANTLR lexer and parser without any real
> > > > training. I should be able to *use* ANTLR without really knowing
> > > > much about ANTLR grammars.
> > > 
> > > Did you require the same of Java?. To be able to use it without
> > > knowing much about the language or its libraries.
> > 
> > Yes! I use many programs that are written in Java, C, and many
> > other languages without ever seeing their source code.
> > I use the Java libraries without understanding their internals.
> 
> Cute but, can you use Java without understanding the Java language and
> its semantics, or knowing anything about its libraries? Not the
> implementation of Java compilers, VMs and the source of the libraries
> (although they might help for very advanced systems).

No, I can't use Java without understanding Java syntax :)
But I can use ANTLR to lex and parse
without understanding the grammar syntax.
And I would hope to be able to have someone else
work on AST-to-AST translation who has never even heard of
ANTLR.

> 
> > Are you saying it's not reasonable for me to want to be
> > able to deal with ASTs without having to use ANTLR syntax?
> > Shouldn't I be able to just swap out ANTLR and plug in
> > lex/yacc or some other AST-generating tool (not that I
> > would, of course :)?
> 
> No. We are comparing writing your own AST processing code manually to
> having a tool generate it for you based on the AST's grammar.
> 
> > > > I just want ANTLR to lex and parse
> > > > C source and pass me the AST, and I'll take it from there.
> > > 
> > > Why use an AST at all?. Or indeed a generated lexer or parser?.
> > With a
> > > file and string processing library, I can do all the stuff that the
> > > lexer/parser/AST enables without ever seeing a tree node. It would be
> > > messy but it would be all Java and I probably won't even have to
> > learn
> > > Java. ;-)
> > 
> > Heh, I know you're joking, but that's almost exactly what I
> > actually spent the last two years doing!
> 
> ;-)
> 
> > I built the lexer with ANTLR, and I use an ANTLR-generated parser
> > for expression processing, but the rest is pure Java.
> > The approach does have drawbacks, but I'm convinced that it
> > was the right decision.
> 
> But was it the "best" decision from an engineering pov?. 
Yes.

> Could it have been done quicker?. 
No.

> Can the system scale to handle ever more complex analysis?. 
Yes. I don't do full control-flow analysis now, but I believe
it would be easy to add (I can always start using full ASTs
if I need to).

> Can the system translate millions of lines of C in a short time using
> reasonable machine resources? (compared to competitors)

No, it's quite slow, but there are no competitors, as it
does so much more than they do (I estimate 100 times more
translation rule functionality).

If you wanted millions of lines of C code translated to Java,
how long would you be willing to wait? I don't think
that it matters whether it takes a second or a day.

> How easily could the system be modified to do say Ada-to-Java for
> instance?. How long would that change take?

Tough question. But I believe no longer than it would if I
used a treewalker.

> 
> > I've got more source-to-source translation functionality
> > than I've seen in any other tool, by an order of magnitude.
> > (Feel free to show me one that has more, of course :)
> 
> The C-to-Java translation tools market is foreign to me ;-)

See http://jazillian.com/competition.html
Basically, there is no market, as it really can't be done :)

> 
> > > > I'm not proposing "hand-written tree walkers" so much as
> > > > a "tree searching and manipulation library". My whole
> > > > point is that AST-to-AST translation is better done as a
> > > > rule-based pattern-matching scheme than a top-down AST-walking
> > > > scheme. Take a look at:
> > > > http://jazillian.com/how.html
> > > > And think about how you'd do all those things with a treewalker.
> > > > I'm certain it would be horrendous.
> > > 
> > > It's a lot of work but the code structure would be simple and easy
> > to
> > > maintain.
> > 
> > Again, any examples of ANTLR treewalkers that 
> > have that much functionality?
> 
> I don't know of any open source example. Do Ter's multi-pass
> translators and Monty's AREV Basic work count?

Maybe...anyone got a link for those?


> 
> > > > > 7. Summary
> > > > > ==========
> 
> > > I feel I should repeat that you probably would benefit from reviewing
> > > the literature on static code analysis techniques and implementations.
> > > That's what you are doing in an ad-hoc fashion. It will hurt
> > > eventually as you attempt more complex analysis.
> > 
> > Sorry, I'm already "done" :)
> > I did read a lot about static code analysis techniques, and
> > most of it never seemed to get beyond the basics of building
> > symbol tables and trivial transformations like changing
> > variable names or simple refactoring.
> 
> Even lint/splint etc? Compaq's ESC? You *are* performing static
> semantic analysis and using the result to inform decisions about
> which AST transformations to perform.

I hadn't looked at lint, assuming that it's not doing anything
radically different from javac (which I did look at).
I hadn't looked at splint or ESC. I'll take a look at them.
Keep in mind that these are just checking and not
translating.

> 
> > > IOW, some of the
> > > issues you raise are really about tree-library vs tree-grammar IMHO.
> > 
> > That's what I'm trying for...that's what the thread's about.
> 
> I meant they **aren't** really about tree-library vs tree-grammar. I
> posted a correction.

Yea, ok.

Andy
> 
> Cheers,
>  
> Micheal
> ANTLR/C#





 