[antlr-interest] ANTLR performance

Jim Idle jimi at temporal-wave.com
Thu Jan 15 10:03:01 PST 2009


Jan Obdržálek wrote:
> Hello,
>
> for our student research project we implemented a C grammar in ANTLR
> 3.1. (The grammar fully implements ANSI C99, and mostly implements the
> GNU C extensions - the notable exception being the attributes, which
> are a real pain.) However we found the performance of the parser to be
> a bit lacking. We therefore ran some tests. In addition to the default
> Java target, we modified the grammar for the C target. The results
> were then compared to the results obtained by using the GNU C
> compiler. This is a fair comparison, since gcc also uses a top-down
> recursive parser (although hand-coded). Actually the test is tougher
> on gcc, since gcc does full compilation, compared to just building the
> AST in the case of ANTLR.
>
> The results can be summarized as follows (more details about the test
> at the end of this post):
>
>  - ANTLR/Java is obviously the slowest one [and there is a serious
> start-up/close-down overhead]
>  - ANTLR/C is faster, but still miles behind the gcc
>  - ANTLR/C uses the most system time (analysis using strace points to
> many more memory-related calls to kernel)
>  - ANTLR/C allocates the most memory. Several times the memory used by gcc
>
> The comparison isn't very rigorous/scientific (by any means), but the
> times are representative, being roughly the average time over many
> different runs. I was quite surprised by how much slower ANTLR is
> compared to GCC. I also tried to profile the C target; the
> results from gprof are attached.
>
> Any comments?
>   
A few things come to mind regarding the C target:

1) Did you tune the token and tree node allocation pools to reflect 
the number of tokens you expect? The default pool size is usually good 
for most things, but your stats show the grammar creating 909856 tree 
nodes. That does not seem quite right, but if it is accurate, you need 
to make the allocation pools much bigger, in which case the algorithm 
will make far fewer allocations. Have you by chance turned on 
output=AST without shaping the tree in any way, resulting in a flat 
tree that must be rewritten all the time? The other possibility is the 
tree-building mechanism: the way the code is currently generated is 
not optimal, even though the runtime mechanisms are. We do a lot of 
stream rewriting that isn't strictly necessary, and the gcc compiler 
won't be doing anything like that.
2) You don't say whether you are doing anything extra in the grammar, 
such as accessing token text and so on. You have to take out any 
action code to see what the actual baseline for the parser is. 
Accessing token text, for instance, is a convenience function; you 
would not use it in a compiler that requires speed.
3) As per the examples, when you know that the input stream is 8-bit 
or 16-bit, you can set a macro in your grammar so that the lexer 
traverses the input stream directly using pointers rather than through 
function calls. Did you turn this on?
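From memory (check the macro name against antlr3defs.h in your runtime 
version), the 8-bit case is enabled with something like this in the 
grammar; there is a corresponding variant for 16-bit/UTF-16 input:

```
@lexer::header {
#define ANTLR3_INLINE_INPUT_ASCII   /* input is 8 bit: walk it with pointers */
}
```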
4) I would need to see your grammar rather than a gprof profile (by 
the way, use kcachegrind for better views of this). Realistically, the 
gcc parser has been hand-crafted and worked on for years; if your 
grammar has predicates or other inefficient constructs, it isn't much 
of a comparison. It may not be so much that ANTLR generates slower 
code as that the grammar could be better, but there is no way to know 
without the grammar to look at (which I would like to see before 
looking further).
5) Are you taking out the time spent reading the file? Are you reading 
the file into memory the same way that gcc does it?
6) I presume that this is the latest 3.1.1 C runtime?

At the end of the day, though, it is difficult to compete against 
hand-crafted C code that has had a lot of effort put into optimizing 
it :-). Publish your grammars so I can take a look at this.


Jim

