cpython/Parser/pgen/token.py
Pablo Galindo 71876fa438
Refactor Parser/pgen and add documentation and explanations (GH-15373)
* Refactor Parser/pgen and add documentation and explanations

To improve the readability and maintainability of the parser
generator, perform the following transformations:

    * Separate the metagrammar parser into its own class to simplify
      the parser generator logic.

    * Create separate classes for DFAs and NFAs and move methods that
      act exclusively on them from the parser generator to these
      classes.

    * Add docstrings and comments documenting the process of going
      from the grammar file to NFAs and then DFAs. Detail some of the
      algorithms and give background explanations of concepts that
      will help readers not familiar with the parser generation
      process (a simplified sketch of the NFA-to-DFA step follows
      below).

    * Select more descriptive names for some variables.

    * PEP8 formatting and quote-style homogenization.

The output of the parser generator remains the same (Include/graminit.h
and Python/graminit.c are unchanged after running the new parser generator).
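
To make the NFA-to-DFA step mentioned above concrete, here is a minimal
sketch of the classic subset construction. It is an editor's illustration
under simplified assumptions, not pgen's actual code: the NFA is modeled
as a plain dict mapping each state to a list of (label, target) arcs,
with None standing for an epsilon arc.

def epsilon_closure(nfa, states):
    # All NFA states reachable from `states` through epsilon (None) arcs.
    stack, closure = list(states), set(states)
    while stack:
        for label, target in nfa.get(stack.pop(), ()):
            if label is None and target not in closure:
                closure.add(target)
                stack.append(target)
    return frozenset(closure)

def nfa_to_dfa(nfa, start):
    # Each DFA state is the frozenset of NFA states active at the same
    # time; its arcs group the non-epsilon NFA arcs by label.
    dfa, pending = {}, [epsilon_closure(nfa, {start})]
    while pending:
        current = pending.pop()
        if current in dfa:
            continue
        moves = {}
        for state in current:
            for label, target in nfa.get(state, ()):
                if label is not None:
                    moves.setdefault(label, set()).add(target)
        dfa[current] = {label: epsilon_closure(nfa, targets)
                        for label, targets in moves.items()}
        pending.extend(dfa[current].values())
    return dfa

# Example: an NFA for "a b*" (state 0 --a--> 1, 1 --eps--> 2, 2 --b--> 2).
nfa = {0: [("a", 1)], 1: [(None, 2)], 2: [("b", 2)]}
dfa = nfa_to_dfa(nfa, 0)
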
2019-08-22 02:38:39 +01:00


import itertools


def generate_tokens(tokens):
    """Yield (token name, token number) pairs for each token definition."""
    numbers = itertools.count(0)
    for line in tokens:
        line = line.strip()

        if not line or line.startswith("#"):
            continue

        name = line.split()[0]
        yield (name, next(numbers))

    yield ("N_TOKENS", next(numbers))
    yield ("NT_OFFSET", 256)


def generate_opmap(tokens):
    """Yield (operator string, token name) pairs for two-field definitions."""
    for line in tokens:
        line = line.strip()

        if not line or line.startswith("#"):
            continue

        pieces = line.split()

        if len(pieces) != 2:
            continue

        name, op = pieces
        yield (op.strip("'"), name)

    # Yield '<>' independently. This is needed so that it does not collide
    # with the token generation in "generate_tokens": if this symbol were
    # included in Grammar/Tokens, it would collide with != as it has the
    # same name (NOTEQUAL).
    yield ("<>", "NOTEQUAL")
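
A quick usage sketch of the two generators above. The sample lines are
hypothetical, written in the format of CPython's Grammar/Tokens file,
which is the real input:

sample = [
    "ENDMARKER",
    "NAME",
    "NOTEQUAL '!='",
    "# comment lines and blank lines are skipped",
]
print(list(generate_tokens(sample)))
# [('ENDMARKER', 0), ('NAME', 1), ('NOTEQUAL', 2), ('N_TOKENS', 3), ('NT_OFFSET', 256)]
print(list(generate_opmap(sample)))
# [('!=', 'NOTEQUAL'), ('<>', 'NOTEQUAL')]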