[antlr-interest] Reuse Lexer/Parser Objects

Cory Isaacson cory.isaacson at compuflex.com
Thu Jan 10 20:47:30 PST 2008


I ended up reinitializing the same lexer, token stream, and parser objects for
each string. It seems very fast and memory-efficient this way, so unless anyone
has a better suggestion I'll go with this.
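For readers wondering what "reinitializing the same objects" looks like: in the ANTLR 3 runtime (current at the time of this thread), a generated lexer and parser can be pointed at new input via Lexer.setCharStream(...), CommonTokenStream.setTokenSource(...), and Parser.setTokenStream(...), each of which resets internal state. Since the actual grammar isn't shown in the thread, here is a minimal self-contained stand-in for the pattern — CsvIntParser and its reset() are hypothetical illustrations of reinitialize-and-reuse, not ANTLR's real API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a generated parser: one instance is
// reinitialized per input string via reset(), the way an ANTLR 3
// lexer/parser pair can be reused with setCharStream(...) and
// setTokenStream(...) instead of being allocated per string.
class CsvIntParser {
    private String input;
    private int pos;

    // Reinitialize this parser for a new input instead of allocating
    // a fresh object for every string.
    void reset(String input) {
        this.input = input;
        this.pos = 0;
    }

    // Parse a comma-separated list of integers from the current input.
    List<Integer> parse() {
        List<Integer> out = new ArrayList<>();
        while (pos < input.length()) {
            int start = pos;
            while (pos < input.length() && input.charAt(pos) != ',') {
                pos++;
            }
            out.add(Integer.parseInt(input.substring(start, pos).trim()));
            if (pos < input.length()) {
                pos++; // skip the comma
            }
        }
        return out;
    }
}
```

The iterator loop from the original question then reuses a single instance: call reset(s) and parse() for each string, with no per-string allocation.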

 

Cory

 

From: Cory Isaacson [mailto:cory.isaacson at compuflex.com] 
Sent: Thursday, January 10, 2008 7:02 PM
To: 'Cory Isaacson'; antlr-interest at antlr.org
Subject: RE: [antlr-interest] Reuse Lexer/Parser Objects

 

A follow-up question to this: the strings are all in files (numerous log
files), and I need to process each log record individually. I'm also concerned
that if I parse an entire log file in a single pass, processing one log record
at a time, the parser's memory usage would be unmanageable.
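One way to keep memory bounded, sketched here without the actual grammar: read the log one record (line) at a time and hand each line to the reinitialized parser, so a file-sized token stream is never built. LogRecordCounter and its record counting are illustrative stand-ins for the real per-record parse; the ANTLR 3 calls in the comments are what each iteration would do with generated classes:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

class LogRecordCounter {
    // Process a log one record (line) at a time. Only the current line
    // is materialized, so memory stays flat regardless of file size.
    // In the real program the BufferedReader would wrap a FileReader,
    // and each line would be fed to the reused lexer/parser, e.g.:
    //   lexer.setCharStream(new ANTLRStringStream(line));
    //   tokens.setTokenSource(lexer);
    //   parser.setTokenStream(tokens);
    //   parser.logRecord();  // hypothetical start rule
    // Counting records here stands in for that per-record processing.
    static int processRecords(String logContents) {
        int handled = 0;
        try (BufferedReader in =
                 new BufferedReader(new StringReader(logContents))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.isEmpty()) {
                    continue; // skip blank lines between records
                }
                handled++;
            }
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot occur for a StringReader
        }
        return handled;
    }
}
```

Because each record is parsed from its own small character stream, the parser's working set is proportional to one record, not to the whole file.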

 

Again, any comments will be very helpful.

 

Thanks,

 

Cory

 

From: antlr-interest-bounces at antlr.org
[mailto:antlr-interest-bounces at antlr.org] On Behalf Of Cory Isaacson
Sent: Thursday, January 10, 2008 5:47 PM
To: antlr-interest at antlr.org
Subject: [antlr-interest] Reuse Lexer/Parser Objects

 

I have a large number of Strings I want to parse, and since I need to do some
processing on each one, I elected to build the parser to process one string at
a time. However, I noticed that all of the examples create a new lexer and
parser each time. Is this going to be inefficient? I could redesign things to
process a large group of strings, but then I would need to trigger my
processing after each parse (I know that can be done, but it seemed simpler to
iterate through the strings, as I already have the iterator loop).

 

Thanks,

 

Cory

 

 
