
\section{\module{tokenize} ---
Tokenizer for Python source}
\declaremodule{standard}{tokenize}
\modulesynopsis{Lexical scanner for Python source code.}
\moduleauthor{Ka Ping Yee}{}
\sectionauthor{Fred L. Drake, Jr.}{fdrake@acm.org}
The \module{tokenize} module provides a lexical scanner for Python
source code, implemented in Python. The scanner in this module
returns comments as tokens as well, making it useful for implementing
``pretty-printers,'' including colorizers for on-screen displays.
The primary entry point is a generator:
\begin{funcdesc}{generate_tokens}{readline}
The \function{generate_tokens()} generator requires one argument,
\var{readline}, which must be a callable object that provides the same
interface as the \method{readline()} method of built-in file objects
(see section~\ref{bltin-file-objects}).  Each call to \var{readline}
should return one line of input as a string.
The generator produces 5-tuples with these members:
the token type;
the token string;
a 2-tuple \code{(\var{srow}, \var{scol})} of ints specifying the
row and column where the token begins in the source;
a 2-tuple \code{(\var{erow}, \var{ecol})} of ints specifying the
row and column where the token ends in the source;
and the line on which the token was found.  The line returned is the
\emph{logical} line; continuation lines are included.
\versionadded{2.2}
\end{funcdesc}
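
As a brief sketch of how the generator might be driven (this example
is illustrative, not part of the module's interface), the following
code tokenizes a string held in memory by borrowing the
\method{readline()} method of a \class{StringIO} object; any callable
with the same interface would do:

\begin{verbatim}
import tokenize
from StringIO import StringIO

source = "total = 3 + 4\n"
readline = StringIO(source).readline    # any readline-style callable

for tok_type, tok_string, start, end, line in \
        tokenize.generate_tokens(readline):
    # tok_name (re-exported from the token module) maps token type
    # numbers to readable names such as 'NAME' and 'NUMBER'.
    print tokenize.tok_name[tok_type], repr(tok_string), start, end
\end{verbatim}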
An older entry point is retained for backward compatibility:
\begin{funcdesc}{tokenize}{readline\optional{, tokeneater}}
The \function{tokenize()} function accepts two parameters: one
representing the input stream, and one providing an output mechanism
for \function{tokenize()}.
The first parameter, \var{readline}, must be a callable object that
provides the same interface as the \method{readline()} method of
built-in file objects (see section~\ref{bltin-file-objects}).  Each
call to \var{readline} should return one line of input as a string.
The second parameter, \var{tokeneater}, must also be a callable
object. It is called once for each token, with five arguments,
corresponding to the tuples generated by \function{generate_tokens()}.
\end{funcdesc}
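
By way of illustration, a \var{tokeneater} might look like the
following sketch, which simply reports each token it is handed (the
file name \file{example.py} is only a placeholder):

\begin{verbatim}
import tokenize

def report_token(tok_type, tok_string, start, end, line):
    # Receives the same five values that generate_tokens()
    # would deliver as a single 5-tuple.
    srow, scol = start
    print "%d,%d:" % (srow, scol), \
          tokenize.tok_name[tok_type], repr(tok_string)

f = open("example.py")    # placeholder file name
tokenize.tokenize(f.readline, report_token)
f.close()
\end{verbatim}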
All constants from the \refmodule{token} module are also exported from
\module{tokenize}, as are two additional token type values that may be
produced by \function{generate_tokens()} or passed to the
\var{tokeneater} function by \function{tokenize()}:
\begin{datadesc}{COMMENT}
Token value used to indicate a comment.
\end{datadesc}
\begin{datadesc}{NL}
Token value used to indicate a non-terminating newline. The NEWLINE
token indicates the end of a logical line of Python code; NL tokens
are generated when a logical line of code is continued over multiple
physical lines.
\end{datadesc}
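
The distinction can be seen by tokenizing a logical line that spans
two physical lines.  In the following rough sketch, the newline inside
the parentheses is reported as an NL token, while the newline that
ends the logical line is reported as a NEWLINE token:

\begin{verbatim}
import tokenize
from StringIO import StringIO

source = "x = (1 +\n     2)\n"
for tok_type, tok_string, start, end, line in \
        tokenize.generate_tokens(StringIO(source).readline):
    if tok_type in (tokenize.NL, tokenize.NEWLINE):
        print tokenize.tok_name[tok_type], start, end
\end{verbatim}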