ANTLR Python API Documentation
- Please be warned that the line numbers in the API documentation do not match the real locations in the source code of the package. This is an unintended artifact of doxygen, which I could only convince to use the correct module names by concatenating all files from the package into a single module file...
Here is a brief overview of the most commonly used classes provided by this runtime:
These recognizers are the base classes for the code that ANTLR3 generates.
Each recognizer pulls its input from one of the stream classes below. Streams handle buffering, look-ahead, and seeking.
A character stream is usually the first element in the pipeline of a typical ANTLR3 application. It is used as the input for a Lexer.
- ANTLRStringStream: Reads from a string object. The input should be a unicode object, or ANTLR3 will have trouble decoding non-ASCII data.
- ANTLRFileStream: Opens a file and reads the contents, with optional character decoding.
- ANTLRInputStream: Reads the data from a file-like object, with optional character decoding.
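What all of these character streams provide to a Lexer is essentially look-ahead and consumption. The following stand-in class is a simplified sketch of that interface, not the real antlr3 implementation:

```python
# Simplified stand-in for a character stream such as ANTLRStringStream.
# This is NOT the real antlr3 class -- just an illustration of the
# look-ahead/consume interface that a Lexer relies on.
EOF = -1

class SimpleCharStream:
    def __init__(self, data):
        self.data = data
        self.index = 0

    def LA(self, k):
        """Look ahead k characters (1-based) without consuming them."""
        pos = self.index + k - 1
        if pos >= len(self.data):
            return EOF
        return self.data[pos]

    def consume(self):
        """Advance past the current character."""
        if self.index < len(self.data):
            self.index += 1

stream = SimpleCharStream(u"ab")
assert stream.LA(1) == u"a"   # current character
assert stream.LA(2) == u"b"   # one character of look-ahead
stream.consume()
assert stream.LA(1) == u"b"
```

Seeking and marking (for backtracking) sit on top of the same index-based model.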
A Parser needs a TokenStream as input (which in turn is usually fed by a Lexer):
Finally, a tree.TreeParser fetches its input from a tree.TreeNodeStream:
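The lexer-to-parser pipeline might be wired up as sketched below. All names here are illustrative stand-ins, not the real runtime API: in a real application, MyLexer and MyParser would be generated by ANTLR3 from a grammar, and TokenBuffer would be the runtime's token stream class.

```python
# Sketch of the lexer -> token stream -> parser pipeline using stand-in
# classes. In a real ANTLR3 application these would be generated/runtime
# classes; the names MyLexer, MyParser and TokenBuffer are hypothetical.

class Token:
    def __init__(self, type, text):
        self.type = type
        self.text = text

NUMBER, PLUS, EOF = 1, 2, -1

class MyLexer:                      # stand-in for a generated lexer
    def __init__(self, chars):
        self.chars = chars
    def tokens(self):
        for part in self.chars.split():
            if part == "+":
                yield Token(PLUS, part)
            else:
                yield Token(NUMBER, part)
        yield Token(EOF, "<EOF>")

class TokenBuffer:                  # stand-in for a buffering token stream
    def __init__(self, lexer):
        self.buf = list(lexer.tokens())

class MyParser:                     # stand-in for a generated parser
    def __init__(self, stream):
        self.stream = stream
    def expr(self):
        # trivially "parse" a sum: add up all NUMBER tokens
        return sum(int(t.text) for t in self.stream.buf if t.type == NUMBER)

parser = MyParser(TokenBuffer(MyLexer("1 + 2")))
assert parser.expr() == 3
```

The point is the shape of the pipeline, not the parsing logic: characters flow into the lexer, tokens are buffered by the stream, and the parser consumes them.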
A Lexer produces Token objects, which are usually buffered by a TokenStream. A Parser reads these tokens and can build a Tree, if the output=AST option has been set in the grammar.
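How a parser sees the buffered tokens can be sketched with a simplified token stream that offers LT(k) look-ahead. This is a stand-in for illustration, not the real buffering token stream class:

```python
# Simplified illustration of token buffering: LT(k) returns the k-th token
# of look-ahead, and consume() advances the stream. Not the real antlr3
# token stream -- just a sketch of the interface a parser uses.

class Token:
    def __init__(self, type, text):
        self.type = type
        self.text = text

EOF_TOKEN = Token(-1, "<EOF>")

class BufferedTokenStream:
    def __init__(self, tokens):
        self.tokens = list(tokens) + [EOF_TOKEN]
        self.p = 0                  # index of the current token

    def LT(self, k):
        """Return the k-th token of look-ahead (1-based), clamped at EOF."""
        pos = self.p + k - 1
        return self.tokens[min(pos, len(self.tokens) - 1)]

    def consume(self):
        """Advance past the current token, but never past EOF."""
        if self.p < len(self.tokens) - 1:
            self.p += 1

ts = BufferedTokenStream([Token(1, "a"), Token(2, "=")])
assert ts.LT(1).text == "a"
assert ts.LT(2).text == "="
ts.consume()
assert ts.LT(1).text == "="
assert ts.LT(2).text == "<EOF>"
```

Because the tokens are buffered, a parser can look arbitrarily far ahead and rewind, which is what makes backtracking possible.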
The runtime provides these Token implementations:
Tree objects are wrappers for Token objects.
A tree.TreeAdaptor is used by the parser to create tree.Tree objects for the input Token objects.
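The adaptor indirection means the parser never instantiates tree nodes directly; it asks the adaptor, so you can plug in your own node class. A minimal sketch of the idea, with hypothetical MyNode and MyAdaptor classes (the method names are modeled on the real adaptor interface, but this is not the real antlr3.tree code):

```python
# Sketch of the TreeAdaptor idea: the parser asks the adaptor to create
# nodes and to link them together, so the node class is pluggable.
# MyNode and MyAdaptor are simplified stand-ins, not real antlr3 classes.

class Token:
    def __init__(self, text):
        self.text = text

class MyNode:                       # custom tree node wrapping a Token
    def __init__(self, token):
        self.token = token
        self.children = []

class MyAdaptor:                    # stand-in for a tree.TreeAdaptor
    def createWithPayload(self, token):
        return MyNode(token)

    def addChild(self, tree, child):
        tree.children.append(child)

# Build the tree for "1 + 2" the way a parser would: via the adaptor.
adaptor = MyAdaptor()
root = adaptor.createWithPayload(Token("+"))
adaptor.addChild(root, adaptor.createWithPayload(Token("1")))
adaptor.addChild(root, adaptor.createWithPayload(Token("2")))
assert root.token.text == "+"
assert [c.token.text for c in root.children] == ["1", "2"]
```

Swapping in a different adaptor is how you make the parser build your own node type without touching the generated code.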
RecognitionException and its subclasses are raised when a recognizer encounters incorrect or unexpected input.
A RewriteCardinalityException is raised when the parser hits a cardinality mismatch during AST construction. Although this is basically a bug in your grammar, it can only be detected at runtime.