[antlr-interest] Re: Nondeterminism problem
Terence Parr
parrt at cs.usfca.edu
Fri Dec 19 12:20:21 PST 2003
On Friday, December 19, 2003, at 08:39 AM, Oliver Zeigermann wrote:
>> I think you'll find that LALL(k) done by PCCTS is a proper *superset*.
>> I don't understand "only one that does practical SLL(k) near
>> computation". Can you rephrase?
>
> I browsed around the SLK site a bit and guess SLK *nearly* is able to
> solve the SLL(k) computation in polynomial time instead of exponential.
Howdy :)
This is not possible in theory for k>1 because each lookahead set is
O(|T|^k) in the worst case, where |T| is the vocabulary size. In
practice it's not as bad, but it can quickly explode for k>2. The
algorithm is irrelevant...this is the minimum requirement just to
store a *single* lookahead set.
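To make the blow-up concrete, here is a back-of-the-envelope sketch (my numbers, not from the post; the vocabulary size of 100 tokens is an assumption for illustration) of how many k-sequences a single lookahead set may have to distinguish:

```python
# A single lookahead set over a vocabulary T must distinguish up to
# |T|^k distinct k-token sequences, so storage alone is exponential in k.
T = 100  # assumed vocabulary size; real grammars vary widely

sizes = {k: T ** k for k in range(1, 5)}
for k, n in sizes.items():
    print(f"k={k}: up to {n:,} k-sequences in one lookahead set")
```

Even at k=3 that is a million sequences per decision point, which is why k>2 explodes so quickly in practice.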
> As far as I understand it uses the fact SLL(k) computation for any
> *fixed* k is polynomial only.
Actually, it can never be polynomial in theory; the space of token
sequences is combinatorial. Sometimes in practice you can push it to
k=7, as I've done with ANTLR on small grammars.
> So SLK begins to create a table assuming
> the grammar is SLL(1). If it detects an ambiguity it augments it with
> SLL(2), etc. How does ANTLR do it?
ANTLR does the same thing; that technique (modulating the lookahead
depth) has been known for at least 30 years.
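The "start at k=1, deepen only on conflict" idea can be sketched like this (a hypothetical toy, not ANTLR's or SLK's actual implementation; alternatives are modeled as sets of token sequences, which is an assumption for illustration):

```python
# Toy sketch of incremental lookahead-depth selection: try k=1 first,
# and only increase k when two alternatives still share a k-prefix.

def smallest_k(alternatives, max_k=4):
    """Return the smallest k whose k-token prefixes distinguish all
    alternatives, or None if max_k tokens are not enough."""
    for k in range(1, max_k + 1):
        prefix_sets = [{seq[:k] for seq in alt} for alt in alternatives]
        # A conflict exists iff any two alternatives share a k-prefix.
        conflict_free = all(
            prefix_sets[i].isdisjoint(prefix_sets[j])
            for i in range(len(prefix_sets))
            for j in range(i + 1, len(prefix_sets)))
        if conflict_free:
            return k
    return None

# Two alternatives that need 2 tokens of lookahead to tell apart:
alt1 = {("A", "B")}
alt2 = {("A", "C")}
print(smallest_k([alt1, alt2]))  # 2
```

The payoff is that most decisions in a real grammar resolve at k=1, so the expensive deep-lookahead computation only runs where it is actually needed.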
>> Again, PCCTS does full LALL(k); my
>> dissertation was how approximate lookahead can be used to attenuate
>> the
>> complexity of computation and storage of lookahead. This includes
>> weaker parser as well as helping to build full LALL(k) parsers.
>
> Yes, but isn't the creation time for such a parser exponential in
> respect to k?
Yep. Any parser that does SLL, LALL, or full LL will be exponential for
k>1. Full LL is nonlinear (I can't remember the exact order at the
moment) even for k=1, due to rule replication.
Approximate parsers are never exponential, but they are still
nonlinear; they are essentially O(|G| x k), or perhaps a little higher,
where |G| is the size of the grammar.
>> s : Z (a | ) B ;
>>
>> a : A;
>>
>> t : X a A ;
>
> Ah, this is interesting. *Obviously this shows LALL' grammars are not a
> subset of SLL grammars*!
Correct. SLL has less precise lookahead computation.
However, I just realized I made a mistake in that grammar while trying
to simplify it. Here is an LALL(1) grammar that should not be SLL(k):
s : (a|C) B ;
a : A|;
t : a C ;
The problem is that, when computing the lookahead for the first
alternative of the subrule, SLL proceeds like this (a is nullable, so
FOLLOW is included):
LOOK(a) = FIRST(a) U FOLLOW(a) = {A} U {B,C} = {A,B,C} // SLL
whereas an LALL computation knows the context in which 'a' is invoked
and uses only the follow from rule s, FOLLOW == {B}. So LALL(1)
computes {A,B} for the lookahead of the first alt of the subrule,
while SLL(1) computes {A,B,C}, which collides with the C of the second
alt. Sorry for the incorrect grammar. BTW, LALL'(1) == LALL(1) in both
grammars and languages; they are functionally identical and hence
trivially equivalent in power.
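The whole collision can be reproduced in a few lines (a toy illustration with hardcoded FIRST/FOLLOW sets for the tiny grammar above, not ANTLR or PCCTS internals):

```python
# Why SLL(1) conflicts on  s : (a|C) B ;   a : A | ;   t : a C ;
# but LALL(1) does not: SLL merges FOLLOW(a) over *all* call sites,
# while LALL keeps the follow set of the specific invocation context.

FIRST_a = {"A"}           # a can start with A
a_is_nullable = True      # a has an empty alternative

FOLLOW_a_global = {"B", "C"}  # SLL: B from rule s, C from rule t
FOLLOW_a_from_s = {"B"}       # LALL: only rule s's context matters here

def look(first, nullable, follow):
    """LOOK = FIRST, plus FOLLOW when the rule can derive epsilon."""
    return first | follow if nullable else set(first)

sll_alt1 = look(FIRST_a, a_is_nullable, FOLLOW_a_global)   # {'A','B','C'}
lall_alt1 = look(FIRST_a, a_is_nullable, FOLLOW_a_from_s)  # {'A','B'}
alt2 = {"C"}  # second alternative of the subrule (a|C)

print("SLL(1): ", sorted(sll_alt1), "conflict:", bool(sll_alt1 & alt2))
print("LALL(1):", sorted(lall_alt1), "conflict:", bool(lall_alt1 & alt2))
```

The C that leaks into the SLL set comes entirely from rule t's call site, a context that can never actually be live inside rule s.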
> What about languages then?
Ah. That's different.
> As far as I understand from the above there
> is always a way to rephrase your LL grammar to SLL to parse any LL
> language, is this correct?
For k=1, SLL(k)==LL(k). But this is not true for LL(k) languages with
k>1.
> Is it always possible to make a LL grammar
> LALL', parsing the same language?
I'm not certain, but I'd say no: there must be LL(k) languages that are
not LALL'(k).
> If not the LALL' languages indeed seem
> to be a subset of the SLL languages, right?
No, because LL(k) is strictly stronger than SLL(k) for both languages
and grammars. LALL' is weaker than LALL for grammars, but I don't know
about languages; I never established that result, sorry. Therefore, I
don't think the relationship is so obvious. The only thing that is
clear is that SLK's claim to be the strongest non-LL(k) parser is
wrong, given PCCTS's LALL(k) parsers.
Ter
--
Professor Comp. Sci., University of San Francisco
Creator, ANTLR Parser Generator, http://www.antlr.org
Co-founder, http://www.jguru.com
Co-founder, http://www.knowspam.net enjoy email again!
Co-founder, http://www.peerscope.com pure link sharing