[antlr-interest] Interest in CUDA Target?

Greg Diamos gdiamos at nvidia.com
Wed Apr 18 17:00:37 PDT 2012

I'm curious whether people who know parsing and lexing better than I do think there would be interest in a CUDA backend for ANTLR.  I believe I have developed an algorithm that allows fine-grained data-parallel parsing of arbitrary LL(*) grammars, and that a CUDA implementation would likely be significantly faster than existing implementations (it could probably also be adapted to other multi-core processors).
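To give a flavor of why lexing and parsing can be made data-parallel at all (this is a generic, well-known idea, not necessarily the algorithm described above): a DFA's per-character transition function is a map from state to state, and composing such maps is associative, so the per-character maps can be combined with a parallel prefix scan instead of a strictly sequential loop.  A minimal sketch, with a toy three-state lexer whose rules are entirely made up for illustration (the scan here is simulated sequentially; on a GPU each combine step would run across threads):

```python
NUM_STATES = 3  # toy DFA: 0 = start, 1 = in identifier, 2 = in number

def step_table(ch):
    """Transition map for one character, as a tuple indexed by
    current state (toy rules, for illustration only)."""
    if ch.isalpha():
        return (1, 1, 1)   # letters start/extend an identifier
    if ch.isdigit():
        return (2, 1, 2)   # digits extend identifiers, else a number
    return (0, 0, 0)       # anything else resets to the start state

def compose(f, g):
    """(f then g). Composition is associative, which is exactly
    what allows these maps to be combined by a parallel scan."""
    return tuple(g[s] for s in f)

def lex_states(text):
    """DFA state reached after each prefix of `text`, computed as a
    running reduction over per-character transition maps."""
    maps = [step_table(c) for c in text]
    states, acc = [], tuple(range(NUM_STATES))  # identity map
    for m in maps:
        acc = compose(acc, m)
        states.append(acc[0])  # state reached from the start state
    return states
```

On a GPU, the `compose` reduction would be replaced by a parallel prefix scan over the transition maps, turning an inherently sequential-looking pass into log-depth parallel work.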

However, I don't really have a sense of how performance-bottlenecked parsing-heavy applications actually are.  My interest was more of a thought experiment than something driven by the needs of a particular application, so I'd welcome any feedback that members of this list could provide.

Does anyone have an application that would benefit from a reduction in parsing time of an order of magnitude or more, or are the existing backends fast enough?


