[antlr-interest] java.lang.OutOfMemoryError: Java heap space

Jim Idle jimi at temporal-wave.com
Tue Jun 5 13:05:01 PDT 2007


I think that this may well be a coincidence. The previous poster has
written a parser entirely as a lexer, and that is why it is taking so
long to produce the output. As another poster said, if you turn one
or two of the lexer rules into fragment rules, then the lexer generation
can breathe and it doesn't take so long. It looks to me like that is the
issue with the other grammar.
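For instance, here is a sketch of the kind of refactoring I mean (toy
rule names, not taken from anyone's actual grammar): helper rules marked
as fragments never become tokens in their own right, so the generated
lexer has far fewer alternatives to analyze.

```antlr
// Fragments: reusable pieces, never emitted as tokens themselves.
fragment DIGIT  : '0'..'9' ;
fragment LETTER : 'a'..'z' | 'A'..'Z' ;

// Real token rules, built from the fragments above.
INT : DIGIT+ ;
ID  : LETTER ( LETTER | DIGIT )* ;
```

Without the fragment keyword, DIGIT and LETTER would be full token rules
competing with INT and ID, which is exactly the sort of thing that blows
up the analysis.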

While I cannot be certain, from what you said you were trying to do, I
think you were seeing a similar problem: trying to make the lexer
too complicated, having non-fragment lexer rules embedded in other
rules, and so on. Hence you do not see these issues until you try to
generate the lexer. If ANTLRWorks were to try to find out whether this
would happen, it would, guess what, have to pretty much generate the
lexer.

If you are trying to specify things that look suspiciously like syntax
in the lexer, then you are basically doing it in the wrong place. Just
list all the things that can be tokenized, then tell the parser what is
and is not a valid order.
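To illustrate that division of labour (again a made-up toy example):
the lexer only names the tokens, and the parser alone says which
sequences of them are legal.

```antlr
// Lexer: just list the things that can be tokenized.
IF     : 'if' ;
LPAREN : '(' ;
RPAREN : ')' ;
ID     : ('a'..'z')+ ;

// Parser: say what is and is not a valid order of those tokens.
condition : IF LPAREN ID RPAREN ;
```

Trying to recognize something like the whole "if (x)" shape inside a
single lexer rule is what drags syntax into the lexer and makes the
generation so expensive.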

Jim

> -----Original Message-----
> From: antlr-interest-bounces at antlr.org [mailto:antlr-interest-
> bounces at antlr.org] On Behalf Of Phil Oliver
> Sent: Tuesday, June 05, 2007 12:55 PM
> To: antlr-interest at antlr.org
> Subject: Re: [antlr-interest] java.lang.OutOfMemoryError: Java heap
> space
> 
> Ter wrote
> >That is really really strange. Are you sure that the
> >commandline  version is v3.0?
> 
> I would note that it appears to match my recent experiences, as I've
> written here within the past few days: grammars which have no
> reported errors in the latest version of ANTLRWorks, but which end up
> with out-of-memory problems. I already posted the code section that's
> responsible in my case, identified by running the code as an
> Eclipse project to assist in the debugging.
> 


