[antlr-interest] Infinite lexer exception allocation loop in C target C parser, antlr3.0

Jim Idle jimi at temporal-wave.com
Mon Jul 2 19:25:06 PDT 2007


It sounds like an issue with the grammar that exposes an issue in the
runtime, though something tells me this may be related to an extremely
recent change that dealt with something similar.

Thanks for pointing this out - I will endeavor to fix this tomorrow, or
at least provide an answer/workaround (which might be to allow \x
sequences ;-).
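
To make the failure mode you describe below concrete, here is a minimal
stand-alone sketch of the shape of the bug - not the real generated
code or the ANTLR3 C runtime, just a toy matcher (all names here apart
from the ones you quoted are made up) that fails without consuming
input inside a for(;;) loop:

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the exception struct that CONSTRUCTEX() allocates. */
    typedef struct { const char *msg; } Exception;

    static const char *input = "\"\\x00\\x00\"";  /* the offending literal  */
    static int pos = 1;                           /* skip the opening quote */

    /* Stand-in for mEscapeSequence(): knows \n and \t but not \x, and -
     * crucially - consumes nothing when it fails to match. */
    static Exception *matchEscape(void)
    {
        if (input[pos] == '\\' &&
            (input[pos + 1] == 'n' || input[pos + 1] == 't'))
        {
            pos += 2;                         /* matched: input consumed */
            return NULL;
        }
        Exception *ex = malloc(sizeof *ex);   /* failed: allocate an exception */
        ex->msg = "unrecognised escape";      /* ...but pos is left untouched  */
        return ex;
    }

    int main(void)
    {
        long allocs = 0;

        /* Stand-in for the for(;;) loop inside mSTRING_LITERAL(). */
        for (;;)
        {
            Exception *ex = matchEscape();
            if (ex == NULL)
                continue;                     /* matched, keep scanning */

            /* The bug: pos is not advanced and the loop is not
             * abandoned, so the next pass fails at the same '\'.
             * Any fix must consume a character or bail out here. */
            free(ex);
            if (++allocs == 5)                /* demo cap; the real loop has none */
            {
                printf("stuck at pos %d after %ld failed matches - "
                       "the real lexer would allocate forever\n",
                       pos, allocs);
                break;
            }
        }
        return 0;
    }

Whatever the final fix looks like, the generated loop has to either
consume a character or give up once the sub-rule has failed, so that
the same position is never retried indefinitely.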

Jim

> -----Original Message-----
> From: antlr-interest-bounces at antlr.org [mailto:antlr-interest-
> bounces at antlr.org] On Behalf Of Hardy, Stephen
> Sent: Monday, July 02, 2007 1:31 PM
> To: antlr-interest at antlr.org
> Subject: [antlr-interest] Infinite lexer exception allocation loop in
> C target C parser, antlr3.0
> 
> Hi all,
> I'm tasked with some C-to-C translation, and have been using the ANSI C
> grammar with the C target as a starting point, with antlr3.0.
> 
> The grammar inadvertently omits the possibility of using \x (hex)
> escapes in a literal string, and this causes an infinite memory
> allocation loop when the lexer is run against tokens such as
> "\x00\x00".  The offending sequence of code is in mSTRING_LITERAL(),
> which calls mEscapeSequence(), which in turn allocates an exception
> struct (CONSTRUCTEX()) when it fails to understand the \x.
> Unfortunately, the calling code is a for(;;) loop which does not
> advance the input, hence the lexer will allocate forever.
> 
> Sorry, I'm really new to this, so it may be my fault, but it looks
> like it may be a C target problem.  I didn't see any similar problem
> mentioned in the most recent archives.
> 
> Regards,
> SJH

