Well, I can immediately think of two reasons:

Backtracking parsers lead people into creating bad grammars. In principle people are perfectly capable of creating simple context-free grammars and writing whatever parser they want to read them. But in practice your tools guide your decisions to a huge extent, and the less experience people have, the more true that becomes; so it's a really dangerous tool, particularly for students.

Also, fully backtracking parsers have the most unpredictable and hardest-to-fix error conditions of all the options. There are middle grounds where parser execution stays predictable while you still get most of the benefit of backtracking, but it takes a lot of complex engineering decisions to reach that optimum and to keep your project close to it.
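
As an illustration of one such middle ground (my own sketch, not anything from the comment above; all names are made up), here is a hand-written recursive-descent parser in Python where backtracking is confined to explicit choice points, so control flow and failure behaviour stay easy to predict:

  class ParseError(Exception):
      pass

  class Parser:
      def __init__(self, tokens):
          self.tokens = tokens
          self.pos = 0

      def peek(self):
          return self.tokens[self.pos] if self.pos < len(self.tokens) else None

      def eat(self, kind):
          if self.peek() == kind:
              self.pos += 1
              return kind
          raise ParseError(f"expected {kind!r} at position {self.pos}, got {self.peek()!r}")

      def choice(self, *alternatives):
          # Local backtracking only: every alternative restarts from the same
          # position, and the parser commits to the first one that succeeds.
          start = self.pos
          errors = []
          for alt in alternatives:
              try:
                  return alt()
              except ParseError as e:
                  errors.append(str(e))
                  self.pos = start  # rewind, but only to this choice point
          raise ParseError(" / ".join(errors))

      # toy grammar:  expr -> NUM '+' NUM | NUM
      def expr(self):
          return self.choice(self.addition, lambda: self.eat("NUM"))

      def addition(self):
          self.eat("NUM"); self.eat("+"); self.eat("NUM")
          return "addition"

  print(Parser(["NUM", "+", "NUM"]).expr())  # -> addition
  print(Parser(["NUM"]).expr())              # -> NUM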

Immediate edit: In a CS context there is one reason that is probably more important than any other. People use parsers as an application of automata and regular-language theory, and those two concepts are far more important than the practical implications of parsing.



What do you mean by bad grammars? Do you mean grammars that are hard to parse (require a lot of backtracking), or that they lead people to create bad languages?

My experience is that if a backtracking parser lists all the terminals it was expecting at the first location it fails to get past (with some additional information about the rules they occur in), that usually gives enough information to understand what is wrong with the input or the grammar.
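
A minimal sketch of that error-reporting scheme (my own illustration, assuming a hand-written Python parser; all names are hypothetical): while alternatives are tried, record the farthest input position any of them reached, together with the set of terminals, tagged with the rules that wanted them, that were expected at that position:

  class Parser:
      def __init__(self, tokens):
          self.tokens = tokens
          self.pos = 0
          self.fail_pos = -1     # farthest position any alternative failed at
          self.expected = set()  # (terminal, rule) pairs expected there

      def peek(self):
          return self.tokens[self.pos] if self.pos < len(self.tokens) else None

      def eat(self, kind, rule):
          if self.peek() == kind:
              self.pos += 1
              return True
          # only keep expectations recorded at the farthest failure point
          if self.pos > self.fail_pos:
              self.fail_pos, self.expected = self.pos, set()
          if self.pos == self.fail_pos:
              self.expected.add((kind, rule))
          return False

      def error(self):
          wanted = ", ".join(f"{t!r} (in {r})" for t, r in sorted(self.expected))
          return f"at token {self.fail_pos}: expected one of {wanted}"

      # toy grammar:  stmt -> 'print' NUM | NUM '=' NUM
      def stmt(self):
          start = self.pos
          if self.eat("print", "print-stmt") and self.eat("NUM", "print-stmt"):
              return True
          self.pos = start
          if self.eat("NUM", "assignment") and self.eat("=", "assignment") and self.eat("NUM", "assignment"):
              return True
          return False

  p = Parser(["NUM", "+", "NUM"])
  if not p.stmt():
      print(p.error())  # -> at token 1: expected one of '=' (in assignment)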


I mean grammars that are hard for people to follow.



