[antlr-interest] Re: Parsing a TokenStream in piecemeal fashion

cintyram <cintyram at yahoo.com>
Mon Feb 3 07:37:11 PST 2003


--- In antlr-interest at yahoogroups.com, "micheal_jor <open.zone at v...>"
<open.zone at v...> wrote:
> Hi,
> 
> Any ideas on how to create a Parser instance and use it to parse a
> few tokens at a time from a TokenStream repeatedly? Obviously there
> wouldn't be an EOF condition/character most of the time.
> 

Maybe we can set a flag at the end of the last rule to say that one
cycle has been completed; the first rule can then check this flag
before going ahead. A rough sketch follows.
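
A minimal sketch of that idea, as an ANTLR 2 grammar fragment with Java
actions. The rule names (record, field), the tokens (ID, ASSIGN, INT,
SEMI), and the flag name are all hypothetical, not from the original
question:

    class RecordParser extends Parser;
    options { buildAST = true; }
    {
        // Hypothetical flag: set once one record cycle has been parsed;
        // the driver can test it before kicking off the next cycle.
        public boolean cycleDone = false;
    }

    record
        : { cycleDone = false; }   // first rule clears the flag on entry
          (field)+ SEMI
          { cycleDone = true; }    // last action: mark the cycle complete
        ;

    field
        : ID ASSIGN INT
        ;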


> If it's relevant, each invocation of the Parser builds an AST 
> fragment that is fed to TreeParser(s).

Actually, with a little tweaking, we should be able to do this. Since I
have only begun using tree grammars I might be way off, but this sounds
like a possibility:
1. Right now we obtain the tree built by the parser by explicitly
   pulling it with the getAST() method.
2. If getAST() could return the tree constructed so far on every call,
   and we had a mechanism to erase the tree constructed so far and
   start afresh, or to get a pointer to just the subtree created by
   this call to the parser rule, we could hand each fragment to the
   tree parser as it is produced (see the driver sketch after plan B).
--- plan B ---

If we can call a parser rule other than the start rule, make it
construct the tree we need, and then get at the constructed tree, that
would do it. (ANTLR generates every parser rule as a public method of
the generated parser class, so any rule can be invoked directly.)

Associating the parser with one lexer once is enough; the lexer simply
keeps consuming the character stream as and when required. The parser
rule can then be called in a loop and the AST fetched and processed
after each call; every time we run the rule we get a brand-new tree :) .
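
A sketch of such a driver, assuming ANTLR 2's generated Java code; the
class names (RecordLexer, RecordParser, RecordWalker) and the per-record
rule record() are hypothetical. In ANTLR 2, getAST() returns only the
tree built by the most recent rule invocation, which is exactly why each
pass through the loop yields a fresh tree:

    import antlr.Token;
    import antlr.collections.AST;

    public class PiecemealDriver {
        public static void main(String[] args) throws Exception {
            // One lexer and one parser for the whole stream, created once.
            RecordLexer lexer = new RecordLexer(System.in);
            RecordParser parser = new RecordParser(lexer);
            RecordWalker walker = new RecordWalker();

            // Call the record-level rule repeatedly. For a stream that
            // never ends, replace the EOF test with whatever condition
            // applies (e.g. the cycleDone flag sketched earlier).
            while (parser.LA(1) != Token.EOF_TYPE) {
                parser.record();                // parse just one record
                AST fragment = parser.getAST(); // tree for this call only
                walker.record(fragment);        // feed the fragment to the tree parser
            }
        }
    }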

========= as an aside .. =========

Actually, I was thinking about what I called a "parallel parse", i.e.
having multiple parsers parse the input file simultaneously, but I felt
that someone must already have done it. Files with records in them
frequently have the same structure for each record but zillions of
records, so I thought someone must already have tried to speed up the
processing of such files by exploiting their suitability for parallel
processing. A rough sketch of the idea is below.
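
For what it's worth, a hedged sketch of that idea in Java (RecordLexer,
RecordParser and the record rule are the same hypothetical names as
above). It assumes record boundaries can be found cheaply without a full
parse, e.g. one record per line; each record then gets its own
lexer/parser pair on a worker thread, which sidesteps any question of
sharing parser state between threads:

    import java.io.StringReader;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import antlr.collections.AST;

    public class ParallelParse {
        // Parse one record with a private lexer/parser pair.
        static AST parseOne(String recordText) throws Exception {
            RecordLexer lexer = new RecordLexer(new StringReader(recordText));
            RecordParser parser = new RecordParser(lexer);
            parser.record();
            return parser.getAST();
        }

        public static void main(String[] args) throws Exception {
            // Toy input, pre-split on record boundaries.
            List<String> records = Arrays.asList("a = 1;", "b = 2;");

            ExecutorService pool = Executors.newFixedThreadPool(4);
            List<Future<AST>> trees = new ArrayList<Future<AST>>();
            for (final String r : records) {
                trees.add(pool.submit(new Callable<AST>() {
                    public AST call() throws Exception { return parseOne(r); }
                }));
            }
            for (Future<AST> t : trees) {
                System.out.println(t.get().toStringList()); // results in input order
            }
            pool.shutdown();
        }
    }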


cheers
ram




 
