ANTLR Python API Documentation

Version 3.3

Note:
Please be warned that the line numbers in the API documentation do not match the real locations in the source code of the package. This is an unintended artifact of doxygen, which I could only convince to use the correct module names by concatenating all files from the package into a single module file...
Here is a brief overview of the most commonly used classes provided by this runtime:

Recognizers

These recognizers are the base classes for the code generated by ANTLR3.

Streams

Each recognizer pulls its input from one of the stream classes below. Streams take care of buffering, look-ahead, and seeking.

A character stream is usually the first element in the pipeline of a typical ANTLR3 application. It is used as the input for a Lexer.

A Parser needs a TokenStream as input, which in turn is usually fed by a Lexer.
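
A minimal sketch of that pipeline, assuming a hypothetical grammar Expr.g from which ANTLR3 has generated ExprLexer and ExprParser with a start rule named prog (all three names are made up for this example):

    import antlr3
    from ExprLexer import ExprLexer      # generated by ANTLR3 from Expr.g (hypothetical)
    from ExprParser import ExprParser    # generated by ANTLR3 from Expr.g (hypothetical)

    # character stream -> Lexer -> TokenStream -> Parser
    char_stream = antlr3.ANTLRStringStream('1 + 2 * 3')
    lexer = ExprLexer(char_stream)
    tokens = antlr3.CommonTokenStream(lexer)
    parser = ExprParser(tokens)
    result = parser.prog()               # invoke the (hypothetical) start rule

ANTLRFileStream can be used instead of ANTLRStringStream to read the input from a file.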

A tree.TreeParser, finally, fetches its input from a tree.TreeNodeStream.
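
Continuing the sketch above, and assuming a tree grammar from which ANTLR3 has generated a tree parser class ExprWalker (hypothetical, like the other generated names), the AST returned by the parser rule is wrapped in a tree node stream and handed to the walker:

    from antlr3.tree import CommonTreeNodeStream
    from ExprWalker import ExprWalker           # tree parser generated by ANTLR3 (hypothetical)

    nodes = CommonTreeNodeStream(result.tree)   # result.tree is the AST (requires output=AST)
    nodes.setTokenStream(tokens)                # give the walker access to the original tokens
    walker = ExprWalker(nodes)
    walker.prog()                               # hypothetical start rule of the tree grammar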

Tokens and Trees

A Lexer emits Token objects, which are usually buffered by a TokenStream. A Parser can build a Tree if the output=AST option has been set in the grammar.

The runtime provides ready-to-use Token implementations, most notably CommonToken.
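
For instance, right after the CommonTokenStream from the sketch above has been created (and before the parser consumes it), the next token can be peeked at with LT(1); the attribute names used below (text, type, line, charPositionInLine) are those of CommonToken:

    lookahead = tokens.LT(1)             # peek at the next token without consuming it
    print("%r (type %d) at line %d:%d" % (
        lookahead.text, lookahead.type,
        lookahead.line, lookahead.charPositionInLine))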

Tree objects are wrappers for Token objects.

A tree.TreeAdaptor is used by the parser to create tree.Tree objects for the input Token objects.
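
For example, when output=AST is active, a parser rule's return value carries the tree built (by default) through a tree.CommonTreeAdaptor; a minimal sketch, continuing the hypothetical example above:

    tree = result.tree                   # a tree.CommonTree wrapping the matched tokens
    print(tree.toStringTree())           # LISP-style text form of the AST
    for i in range(tree.getChildCount()):
        child = tree.getChild(i)
        print("%s wraps token %r" % (child.getText(), child.getToken()))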

Exceptions

A RecognitionException is raised when a recognizer encounters incorrect or unexpected input.

A tree.RewriteCardinalityException is raised when the parser hits a cardinality mismatch during AST construction. Although this is essentially a bug in your grammar, it can only be detected at runtime.
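
The generated recognizers normally catch these exceptions themselves, report an error message and try to recover. When error handling is customized in the grammar so that the exceptions propagate, they can be caught around the rule invocation; a rough sketch, reusing the hypothetical parser from above (line and charPositionInLine are attributes of RecognitionException):

    from antlr3.exceptions import RecognitionException

    try:
        parser.prog()                    # hypothetical start rule, see above
    except RecognitionException as exc:
        print("unexpected input at line %s, column %s" % (
            exc.line, exc.charPositionInLine))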

