As @Bergi noted, whitespace is significant during lexical analysis: it lets the scanner know where a particular token ends. For example, this is what allows distinguishing newObject (a single identifier) from new Object (a keyword followed by an identifier). It matters for the productions that may not contain whitespace. For example, since a space cannot be derived from IdentifierPart, it marks the end of an Identifier token. Whitespace is also defined as a separate production for all goal symbols, starting with the simplest one, InputElementDiv.
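A small JavaScript illustration of the point (the variable names are arbitrary):

```javascript
// "newObject" is lexed as one Identifier token, because no whitespace
// (or other non-IdentifierPart character) interrupts it:
const newObject = { kind: "single identifier" };

// "new Object" is lexed as the keyword `new` followed by the identifier
// `Object`, because the space ends the first token:
const instance = new Object();

console.log(newObject.kind);             // "single identifier"
console.log(instance instanceof Object); // true
```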
A rule engine has a database of facts and a set of rules that can inspect elements of the database and modify, insert, or delete facts. Usually the database consists of what amounts to a set of tagged structures (T V1 V2 ... Vn), each with different types of values V_i. A rule is often a pattern specifying that if some set of structure instances has properties [some condition over the values of those structures; it may be conjunctive or disjunctive], then one or more values of one of the matched structures gets changed, or a matched structure is deleted, or a new structure is inserted with some computed set of values. A really sophisticated rule engine treats rules as such structures, and thus can insert and delete rules too, but this is pretty unusual. The rule engine (efficiently, and this is the hard part) determines which set of rules could match at any instant, chooses one, and executes it, repeatedly.

The value of this idea is that one can have an arbitrary bucket of "facts" (each represented by a tagged structure) which are roughly independent, and a set of rules which are similarly independent, and pour them all together in a unified way. The hope is that it is easy to define structures representing aspects of the world, and easier to define rules to manipulate them. It is a way of coding lots of disparate knowledge, and that's why the "business" guys like them. (The idea comes from the AI world.)

Compiler parsers have two tasks tangled into one activity: 1) deciding if an input stream of text (broken into language tokens) is a legal instance of a specific programming language, and 2) if so, constructing compiler data structures (typically abstract syntax trees and symbol tables) that represent the program so the rest of the compiler can generate code.
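The match/choose/execute loop described above can be sketched in a few lines of JavaScript. The fact and rule shapes here (tag, condition, action) are illustrative assumptions, not any real engine's API, and the naive linear matching deliberately ignores the efficiency problem (real engines use algorithms like RETE):

```javascript
// Minimal forward-chaining rule engine sketch (hypothetical shapes, for
// illustration). Facts are tagged structures; rules have a condition over
// facts and an action that may modify the database.
function runEngine(facts, rules, maxCycles = 100) {
  for (let cycle = 0; cycle < maxCycles; cycle++) {
    let changed = false;
    for (const rule of rules) {
      // Iterate over a snapshot so actions may safely insert facts.
      for (const fact of [...facts]) {
        if (rule.condition(fact, facts)) {
          changed = rule.action(fact, facts) || changed;
        }
      }
    }
    if (!changed) return facts; // fixed point: no rule can fire any more
  }
  return facts;
}

// Example: derive an "adult" fact for every person aged 18 or over.
const facts = [
  { tag: "person", name: "Ada", age: 36 },
  { tag: "person", name: "Tim", age: 12 },
];
const rules = [
  {
    // Condition: a person 18+ with no matching "adult" fact yet.
    condition: (f, db) =>
      f.tag === "person" && f.age >= 18 &&
      !db.some((g) => g.tag === "adult" && g.name === f.name),
    // Action: insert a derived fact; return true to signal a change.
    action: (f, db) => {
      db.push({ tag: "adult", name: f.name });
      return true;
    },
  },
];

runEngine(facts, rules);
console.log(facts.filter((f) => f.tag === "adult").map((f) => f.name)); // ["Ada"]
```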
Compiler people have spent about 50 years figuring out how to make this fast, and use very specialized algorithms (such as LALR parser generators with custom-coded actions per grammar rule) to get the job done.
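For instance, a yacc/bison grammar rule with a custom-coded action that builds an AST node might look like the sketch below (mkNode and NODE_ADD are hypothetical names, standing in for whatever AST constructor the compiler provides):

```
expr
    : expr '+' term   { $$ = mkNode(NODE_ADD, $1, $3); }  /* build an AST node */
    | term            { $$ = $1; }                        /* pass the subtree up */
    ;
```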
Understanding ECMAScript implicit semicolons and whitespace parsing
Since tagindex isn't recursive and doesn't use any non-terminal rules, you can make it a terminal rule. Assuming it doesn't overlap with any other rules, that shouldn't lead to any problems.
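In ANTLR-style notation, the change might look like this. The rule body is a hypothetical placeholder, since the actual tagindex definition isn't shown in the question:

```
// As a parser (non-terminal) rule:
// tagindex : '[' INT ']' ;

// As a terminal (lexer) rule, which works because it is not recursive
// and references no non-terminals:
TAGINDEX : '[' [0-9]+ ']' ;
```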
Different methods of implementing a specific parsing rule for a compiler
EDIT: My earlier answer was incorrect (as pointed out in the comments), but I cannot remove an accepted answer, so I decided to edit it. You will need (at least) two rules for x -> [y[,y]*]. Here is another possibility:
x
    : list
    | /* eps */
    ;

list
    : y
    | list ',' y
    ;
How/where do I tell the compiler to use ECMAScript 6?