[antlr-interest] String size limit and how to free memory real-time?

Rick Morgan r.morgan at verizonbusiness.com
Wed Nov 8 15:48:41 PST 2006


Hi,

2 things:

1) Is there a configuration variable that controls how large a token can be
when ANTLR is lexing/parsing?  I am getting the following error when my
program shuts down (it seems to run fine until the EOF token is matched):

Pid 4507 received a SIGSEGV for stack growth failure.
Possible causes: insufficient memory or swap space,
or stack size exceeded maxssiz. 
Segmentation fault

The parser itself seems to run fine; it's the return from my main program
that triggers the error.  None of this happened until I changed my grammar
to handle some rather large "tokens" (7,800 or more characters).  The file
I'm parsing is only 86 KB, so I'm surprised I'm running into this.

Is there some limit I need to change somewhere?  
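(For what it's worth, my understanding is that the maxssiz in that message
is the OS's per-process stack size limit, not anything inside ANTLR.  Here
is a minimal diagnostic sketch, assuming a POSIX getrlimit is available, of
how I can check the limit the process actually sees:)

    // Minimal diagnostic sketch (POSIX assumed): print the process's
    // current stack size limits so they can be compared against what
    // the parser run seems to need.
    #include <sys/resource.h>
    #include <cstdio>

    int main()
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_STACK, &rl) != 0) {
            std::perror("getrlimit");
            return 1;
        }
        std::printf("stack limit: soft %ld bytes, hard %ld bytes\n",
                    (long)rl.rlim_cur, (long)rl.rlim_max);
        return 0;
    }

Raising the soft limit (ulimit -s, or setrlimit) would at least tell me
whether I'm simply running out of stack rather than hitting an ANTLR limit.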

2) I think I could reduce memory utilization if I could free the parts of
the AST that I no longer need once my action routines have fired.  Right now
memory shoots up to about 140 MB on the 86 KB file above before the program
exits, and that is mostly the parser and its AST, because I stubbed out all
the application code.  The ASTFactory.cpp code I'm using from the
distribution doesn't have any kind of prune operation that I can see.

Any suggestions on how to free up some memory in real time?  Do I need to
create my own AST factory for this purpose?  A rough sketch of the kind of
incremental pruning I have in mind is below.
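To make the question concrete, this is roughly what I mean by pruning.  The
sketch is generic C++ with a hypothetical Node type and helpers, not ANTLR's
actual classes; with the reference-counted RefAST nodes the delete would
presumably become just dropping the references, but the unlinking step is
the same idea:

    // Hypothetical illustration of the pruning idea (not ANTLR's AST
    // classes): once an action has consumed a child subtree, unlink it
    // from its parent and free it, so memory stays proportional to the
    // unprocessed remainder of the tree.
    struct Node {                       // stand-in for an AST node
        Node* firstChild;
        Node* nextSibling;
        Node() : firstChild(0), nextSibling(0) {}
    };

    void freeSubtree(Node* n)           // release a whole subtree
    {
        while (n) {
            Node* next = n->nextSibling;
            freeSubtree(n->firstChild);
            delete n;
            n = next;
        }
    }

    // Called from an action after the work for 'done' (a direct child
    // of 'parent') has been performed: detach it and release its memory.
    void pruneProcessedChild(Node* parent, Node* done)
    {
        Node* prev = 0;
        for (Node* cur = parent->firstChild; cur != 0;
             prev = cur, cur = cur->nextSibling) {
            if (cur == done) {
                if (prev)
                    prev->nextSibling = cur->nextSibling;
                else
                    parent->firstChild = cur->nextSibling;
                cur->nextSibling = 0;   // cut the link before freeing
                freeSubtree(cur);
                return;
            }
        }
    }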

thanks,
Rick




