[antlr-interest] ANTLR running out of memory while parsing huge files

Andreas Meyer andreas.meyer at smartshift.de
Tue Apr 21 06:22:53 PDT 2009


I don't recall anything built-in for this, so I think the easiest solution 
would be to write an outer lexer that recognizes the HEADER/DECL/BODY/line/END 
tokens and, from within that lexer, instantiate and call a new lexer/parser 
for each body line. A sketch of the per-line idea follows below.
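For illustration, here is a minimal sketch of the per-line part using the 
ANTLR 3 C runtime. To keep it short it drives the splitting from a plain 
host loop rather than from lexer actions, which is the same idea in a 
simpler form; the grammar name "Line" and its start rule "line" are made 
up, so substitute your own. Each body line gets its own input stream, 
lexer, token stream, and parser, all freed immediately, so memory stays 
bounded by the longest line instead of by the whole file:

/* Sketch only: assumes a combined ANTLR 3 C-target grammar named "Line"
 * (generating LineLexer.h/LineParser.h) with a start rule "line".
 */
#include <stdio.h>
#include <string.h>
#include <antlr3.h>
#include "LineLexer.h"
#include "LineParser.h"

static void parseOneLine(char *text)
{
    /* Wrap the line in an in-place string stream (no copy is made).
     * The trailing '\n' from fgets is passed through; the grammar
     * should skip or match it. */
    pANTLR3_INPUT_STREAM input =
        antlr3NewAsciiStringInPlaceStream((pANTLR3_UINT8)text,
                                          (ANTLR3_UINT32)strlen(text),
                                          (pANTLR3_UINT8)"line");
    pLineLexer lexer = LineLexerNew(input);
    pANTLR3_COMMON_TOKEN_STREAM tokens =
        antlr3CommonTokenStreamSourceNew(ANTLR3_SIZE_HINT, TOKENSOURCE(lexer));
    pLineParser parser = LineParserNew(tokens);

    parser->line(parser);          /* hypothetical start rule */

    /* Free everything so each line costs O(line length), not O(file). */
    parser->free(parser);
    tokens->free(tokens);
    lexer->free(lexer);
    input->close(input);
}

int main(int argc, char *argv[])
{
    char buf[8192];
    int  inBody = 0;
    FILE *f;

    if (argc < 2 || (f = fopen(argv[1], "r")) == NULL)
        return 1;

    while (fgets(buf, sizeof buf, f) != NULL) {
        /* Crude framing on the HEADER/DECL/BODY/END markers;
         * a real version would tolerate leading whitespace etc. */
        if (strncmp(buf, "BODY", 4) == 0) { inBody = 1; continue; }
        if (strncmp(buf, "END", 3)  == 0) { inBody = 0; continue; }
        if (inBody)
            parseOneLine(buf);
        /* HEADER/DECL lines could be handled the same way. */
    }
    fclose(f);
    return 0;
}

If you prefer to keep everything in one grammar, the same New/free calls 
can instead go into a lexer action on the rule that matches a body line.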

Best,
Andreas Meyer

smartShift
smart e*lliance GmbH
Willy-Brandt Platz 6
68161 Mannheim
Germany

T +49 (621) 400 676-13
F +49 621 400 67606

Managing Director: Stefan Hetges
Amtsgericht Hamburg, HRB 83484
VAT ID No.: DE 813489791



Nick Vlassopoulos wrote:
> Hi,
>
> I am fairly new to ANTLR and I have come across a problem.
> I have written a simple grammar to parse huge data files (several
> gigabytes each), and ANTLR seems to crash by running out of memory
> (I am using "C" as the target language).
>
> The data files have the general format:
> HEADER
>  DECL
> BODY
>  <several million lines here>
> END
>
> The problem seems to be that ANTLR tries to parse the whole data
> file at once. Is there a way to "force" it to parse line by line,
> at least for the "BODY" part?
>
>
> Thank you very much in advance for your time!
>
> Nikos
>
>
>
>
> ------------------------------------------------------------------------
>
>
> List: http://www.antlr.org/mailman/listinfo/antlr-interest
> Unsubscribe: http://www.antlr.org/mailman/options/antlr-interest/your-email-address
>   



More information about the antlr-interest mailing list