[antlr-interest] Coding rule checking for Ada 95

Jim Idle jimi at temporal-wave.com
Fri Apr 16 11:18:52 PDT 2010


You will not get a scalable, performant solution by using Python and writing things to disk. More memory and fewer memory accesses is the general principle for performance, but when you are as far from the metal as Python, you should be writing something that does not have any performance constraints!

Jim (IMNSHO ;-)



> -----Original Message-----
> From: antlr-interest-bounces at antlr.org [mailto:antlr-interest-
> bounces at antlr.org] On Behalf Of Le Hyaric Bruno
> Sent: Friday, April 16, 2010 4:50 AM
> To: antlr-interest at antlr.org
> Subject: [antlr-interest] Coding rule checking for Ada 95
> 
> Hi,
> 
> My current activity is to setup a coding rule engine for Ada95.
> 
> The main requirements are:
> 1 - the underlying parser should accept the whole Ada grammar
> 2 - coding rules must be easy to implement
> 3 - the engine must be very scalable (in order to process millions of
> LoC).
> 
> Now, I'll give more details and highlight some questions:
> 
> 1 - For the moment I have an existing solution based on ANTLR v3 and an
> Ada95 grammar for the Python target (this grammar is based on the
> Hibachi Ada95 grammar for the Java target, which is in turn based on the
> Ada95 grammar for the Cpp target from O.Kellog).
> 
> 2 - In the existing solution we have a TreeWalker to walk the AST, but
> the AST structure is really different from the source code.
> 
>     * ??? I'm wondering if we can have a hybrid TreeWalker, which
>       lets us walk the tree to match special tree patterns and then
>       use the lexer to get some lines of code around the matching point
> ???
>     * ??? Another approach: walking a tree needs a lot of recursive
>       algorithms, which are typically easy to write in functional
>       languages like Haskell, Caml, XSL... Did anyone try to build a
>       hybrid engine (procedural/functional) for code analysis ???
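
The hybrid-walker idea above can be sketched in a few lines: match a pattern while walking the tree, then use the token line numbers to pull the surrounding source text. This is a minimal, hypothetical illustration; the `Node`, `walk`, and `context` names are invented for the sketch and are not the ANTLR v3 API, which records token positions in its own tree nodes.

```python
# Hypothetical sketch of a "hybrid TreeWalker": walk the tree to match a
# pattern, then use recorded line numbers to fetch nearby source lines.
# Node/field names are invented, not the ANTLR v3 runtime API.

class Node:
    def __init__(self, kind, line, children=None):
        self.kind = kind          # node type, e.g. "GOTO_STMT"
        self.line = line          # 1-based source line of the first token
        self.children = children or []

def walk(node, pattern):
    """Yield every node whose kind matches `pattern` (depth-first)."""
    if node.kind == pattern:
        yield node
    for child in node.children:
        yield from walk(child, pattern)

def context(source_lines, line, radius=1):
    """Return `radius` lines of source around a 1-based line number."""
    lo = max(0, line - 1 - radius)
    hi = min(len(source_lines), line - 1 + radius + 1)
    return source_lines[lo:hi]

if __name__ == "__main__":
    src = ["procedure P is", "begin", "   goto L;", "end P;"]
    ast = Node("PROC", 1, [Node("GOTO_STMT", 3)])
    for hit in walk(ast, "GOTO_STMT"):
        print(context(src, hit.line))   # source lines around the match
```

The point is that the coding-rule check itself only needs the tree pattern; the raw source lines come "for free" from the token positions the parser already has, so no second lexing pass is required.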
> 
> 
> 3 - The existing solution isn't scalable at all; it's not ANTLR's
> fault, but the way it's encapsulated. The solution parses all files and
> keeps the ASTs in memory, then builds a Model structure (hierarchy of
> packages, classes, operations...). In our case, we need a more scalable
> approach like:
> 
>     * I start by parsing all files one by one and keep each AST on disk
>       in a suitable form (??? any ideas on how to store an AST
>       efficiently on disk ???)
>     * Then I reload the AST of one starting file and recursively reload
>       the AST for each dependency (load on demand, in fact)
>     * Then we build the model and run some analyses
> 
>        ??? so, did anyone have to store/reload an AST from disk, and how
> ???
> 
> 
> Best regards,
> 
> Bruno.
> 
> List: http://www.antlr.org/mailman/listinfo/antlr-interest
> Unsubscribe: http://www.antlr.org/mailman/options/antlr-interest/your-
> email-address





More information about the antlr-interest mailing list