[antlr-interest] Out of Memory

Mark Boylan boylan.mark at gmail.com
Tue Oct 6 06:54:12 PDT 2009


I ended up writing a Reader class that returns one game from the file
with each call to next(), which I can then pass to the parser.

In the end, I think this will work better for me. This way, I can show
progress. I can report errors in unreadable files by line number. I
can hold games with parsing or validation errors in a list and offer
the user the opportunity to make corrections -- and it fits in nicely
with the builder pattern.

The more I think about it, the more I think that preprocessing the
file was the best way to do it all along.
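For what it's worth, here is a minimal sketch of what such a Reader could
look like. It assumes the games in the file are separated by blank lines
(the real delimiter test depends on the file format), and the class and
method names are only illustrative:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class GameReader {
    private final BufferedReader in;
    private int lineNumber = 0;   // for reporting errors by line number

    public GameReader(String path) throws IOException {
        in = new BufferedReader(new FileReader(path));
    }

    /** Returns the text of the next game, or null at end of file. */
    public String next() throws IOException {
        StringBuilder game = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            lineNumber++;
            if (line.trim().isEmpty()) {
                if (game.length() > 0) {
                    return game.toString();   // blank line ends the game
                }
                continue;                     // skip leading blank lines
            }
            game.append(line).append('\n');
        }
        return game.length() > 0 ? game.toString() : null;
    }

    /** Line most recently read; handy when reporting errors. */
    public int currentLine() {
        return lineNumber;
    }

    public void close() throws IOException {
        in.close();
    }
}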



On Tue, Oct 6, 2009 at 4:48 AM, Gavin Lambert <antlr at mirality.co.nz> wrote:
> At 12:37 6/10/2009, Kirby Bohling wrote:
>>Couldn't you do that in the lexer/parser?  Just don't match EOF on
>>the start rule?  Then you can just have something like:
>>
>>parser.game_prefix();
>>while (game_or_end_return = parser.game_or_end()) {
>>// Process game here
>>// make sure you didn't hit the end case here.
>>}
>>
>>That might not stop the lexer from buffering everything, but I
>>thought it would keep the parser from holding everything in memory.
>
> Unfortunately not; the default behaviour of the lexer/token-stream is to
> translate the entire input into tokens before processing parser rules.  So
> you'd at least need to use a modified one that tokenises only as much as
> required each time; I think there's an example of this on the Wiki.
>
>
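Since the stock CommonTokenStream buffers every token up front, handing
each game's text to a fresh lexer/parser is one way around it -- roughly
like the sketch below, using the ANTLR 3 Java runtime, where GameLexer,
GameParser and the rule game() are placeholders for whatever the real
grammar generates:

import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.RecognitionException;

public class GameLoader {
    public static void main(String[] args) throws Exception {
        GameReader reader = new GameReader(args[0]);
        String gameText;
        while ((gameText = reader.next()) != null) {
            try {
                // Fresh streams per game, so only one game's worth of
                // tokens is ever buffered at a time.
                ANTLRStringStream input = new ANTLRStringStream(gameText);
                GameLexer lexer = new GameLexer(input);
                CommonTokenStream tokens = new CommonTokenStream(lexer);
                GameParser parser = new GameParser(tokens);
                parser.game();    // hypothetical top-level rule
            } catch (RecognitionException e) {
                // A bad game can be queued for the user to correct later,
                // with reader.currentLine() giving a rough location.
                System.err.println("Skipping game ending near line "
                        + reader.currentLine() + ": " + e);
            }
            // update progress display here
        }
        reader.close();
    }
}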

