Mirror of https://github.com/python/cpython, synced 2024-10-14 16:37:41 +00:00

gh-113664: Improve style of Big O notation (GH-113695)

Use italics to make it look like mathematical formulas.

parent e9d5b6ea2d
commit a8629816c6
@@ -451,7 +451,7 @@ on the key and a per-process seed; for example, ``'Python'`` could hash to
 to ``1142331976``. The hash code is then used to calculate a location in an
 internal array where the value will be stored. Assuming that you're storing
 keys that all have different hash values, this means that dictionaries take
-constant time -- O(1), in Big-O notation -- to retrieve a key.
+constant time -- *O*\ (1), in Big-O notation -- to retrieve a key.
 
 
 Why must dictionary keys be immutable?
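As a side note on the behavior this hunk documents, here is a minimal sketch (not part of the patch) showing that dict retrieval is driven by the key's hash: a hypothetical wrapper key counts `__hash__` calls, and a single lookup hashes the probe key exactly once regardless of how many entries the dict holds.

```python
# Illustrative only: CountingKey is a made-up helper, not a CPython API.
class CountingKey:
    def __init__(self, value):
        self.value = value
        self.hash_calls = 0

    def __hash__(self):
        # Count how often the dict asks for this key's hash.
        self.hash_calls += 1
        return hash(self.value)

    def __eq__(self, other):
        return self.value == getattr(other, "value", other)

# Build a dict with 1000 entries, then retrieve one value.
d = {CountingKey(i): i * 10 for i in range(1000)}
probe = CountingKey(742)
result = d[probe]  # one hash computation locates the bucket
print(result, probe.hash_calls)
```

The probe is hashed once to find its bucket; dict size never enters into it, which is the constant-time behavior the FAQ entry describes.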
@@ -742,7 +742,7 @@ Glossary
 list
    A built-in Python :term:`sequence`. Despite its name it is more akin
    to an array in other languages than to a linked list since access to
-   elements is O(1).
+   elements is *O*\ (1).
 
 list comprehension
    A compact way to process all or part of the elements in a sequence and
@@ -237,7 +237,7 @@ implementation used by the asyncio event loop:
 
 It works reliably even when the asyncio event loop is run in a non-main OS thread.
 
-There is no noticeable overhead when handling a big number of children (*O(1)* each
+There is no noticeable overhead when handling a big number of children (*O*\ (1) each
 time a child terminates), but starting a thread per process requires extra memory.
 
 This watcher is used by default.
@@ -257,7 +257,7 @@ implementation used by the asyncio event loop:
 watcher is installed.
 
 The solution is safe but it has a significant overhead when
-handling a big number of processes (*O(n)* each time a
+handling a big number of processes (*O*\ (*n*) each time a
 :py:data:`SIGCHLD` is received).
 
 .. versionadded:: 3.8
@@ -273,7 +273,7 @@ implementation used by the asyncio event loop:
 The watcher avoids disrupting other code spawning processes
 by polling every process explicitly on a :py:data:`SIGCHLD` signal.
 
-This solution is as safe as :class:`MultiLoopChildWatcher` and has the same *O(N)*
+This solution is as safe as :class:`MultiLoopChildWatcher` and has the same *O*\ (*n*)
 complexity but requires a running event loop in the main thread to work.
 
 .. deprecated:: 3.12
@@ -285,7 +285,7 @@ implementation used by the asyncio event loop:
 processes and waiting for their termination.
 
 There is no noticeable overhead when handling a big number of
-children (*O(1)* each time a child terminates).
+children (*O*\ (1) each time a child terminates).
 
 This solution requires a running event loop in the main thread to work, as
 :class:`SafeChildWatcher`.
@@ -79,7 +79,7 @@ The following functions are provided:
 To support inserting records in a table, the *key* function (if any) is
 applied to *x* for the search step but not for the insertion step.
 
-Keep in mind that the ``O(log n)`` search is dominated by the slow O(n)
+Keep in mind that the *O*\ (log *n*) search is dominated by the slow *O*\ (*n*)
 insertion step.
 
 .. versionchanged:: 3.10
@@ -99,7 +99,7 @@ The following functions are provided:
 To support inserting records in a table, the *key* function (if any) is
 applied to *x* for the search step but not for the insertion step.
 
-Keep in mind that the ``O(log n)`` search is dominated by the slow O(n)
+Keep in mind that the *O*\ (log *n*) search is dominated by the slow *O*\ (*n*)
 insertion step.
 
 .. versionchanged:: 3.10
@@ -115,7 +115,7 @@ thoughts in mind:
 * Bisection is effective for searching ranges of values.
   For locating specific values, dictionaries are more performant.
 
-* The *insort()* functions are ``O(n)`` because the logarithmic search step
+* The *insort()* functions are *O*\ (*n*) because the logarithmic search step
   is dominated by the linear time insertion step.
 
 * The search functions are stateless and discard key function results after
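The performance note this hunk restyles can be made concrete with a short sketch (assumed, not from the patch): `bisect.insort` performs the logarithmic search for the insertion point, then pays the linear cost of shifting elements to insert.

```python
import bisect

# Keep a list sorted while inserting: the search is O(log n),
# but the underlying list insertion shifts elements, so the
# overall operation is O(n).
scores = [61, 70, 85, 94]
bisect.insort(scores, 77)
print(scores)  # [61, 70, 77, 85, 94]

# The search step alone is also exposed directly:
i = bisect.bisect_left(scores, 85)
print(i)  # 3
```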
@@ -458,10 +458,10 @@ or subtracting from an empty counter.
 Deques are a generalization of stacks and queues (the name is pronounced "deck"
 and is short for "double-ended queue"). Deques support thread-safe, memory
 efficient appends and pops from either side of the deque with approximately the
-same O(1) performance in either direction.
+same *O*\ (1) performance in either direction.
 
 Though :class:`list` objects support similar operations, they are optimized for
-fast fixed-length operations and incur O(n) memory movement costs for
+fast fixed-length operations and incur *O*\ (*n*) memory movement costs for
 ``pop(0)`` and ``insert(0, v)`` operations which change both the size and
 position of the underlying data representation.
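A small illustration (assumed, not part of the diff) of the deque operations discussed in this hunk: appends and pops at either end are constant time, while the list equivalents ``pop(0)`` and ``insert(0, v)`` shift every element.

```python
from collections import deque

d = deque([2, 3, 4])
d.appendleft(1)      # O(1); list.insert(0, 1) would be O(n)
d.append(5)          # O(1) at the right end as well
first = d.popleft()  # O(1); list.pop(0) would be O(n)
print(first, list(d))  # 1 [2, 3, 4, 5]
```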
@@ -585,7 +585,7 @@ or subtracting from an empty counter.
 In addition to the above, deques support iteration, pickling, ``len(d)``,
 ``reversed(d)``, ``copy.copy(d)``, ``copy.deepcopy(d)``, membership testing with
 the :keyword:`in` operator, and subscript references such as ``d[0]`` to access
-the first element. Indexed access is O(1) at both ends but slows to O(n) in
+the first element. Indexed access is *O*\ (1) at both ends but slows to *O*\ (*n*) in
 the middle. For fast random access, use lists instead.
 
 Starting in version 3.5, deques support ``__add__()``, ``__mul__()``,
@@ -131,7 +131,7 @@ Manual Context Management
     ctx: Context = copy_context()
     print(list(ctx.items()))
 
-The function has an O(1) complexity, i.e. works equally fast for
+The function has an *O*\ (1) complexity, i.e. works equally fast for
 contexts with a few context variables and for contexts that have
 a lot of them.
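A runnable sketch of the snippet this hunk touches: ``copy_context()`` snapshots the current context cheaply (contexts are immutable internally), and later ``set()`` calls do not affect the snapshot.

```python
from contextvars import ContextVar, copy_context

var = ContextVar("var", default="spam")
var.set("ham")

ctx = copy_context()  # O(1): a cheap snapshot, however many variables exist
var.set("eggs")       # does not affect the snapshot

print(ctx[var])   # "ham"  -- the value at snapshot time
print(var.get())  # "eggs" -- the current value
```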
@@ -270,7 +270,7 @@ winner. The simplest algorithmic way to remove it and find the "next" winner is
 to move some loser (let's say cell 30 in the diagram above) into the 0 position,
 and then percolate this new 0 down the tree, exchanging values, until the
 invariant is re-established. This is clearly logarithmic on the total number of
-items in the tree. By iterating over all items, you get an O(n log n) sort.
+items in the tree. By iterating over all items, you get an *O*\ (*n* log *n*) sort.
 
 A nice feature of this sort is that you can efficiently insert new items while
 the sort is going on, provided that the inserted items are not "better" than the
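The percolation argument in this hunk is exactly heapsort: *n* pops, each logarithmic. A brief sketch with the :mod:`heapq` functions the surrounding docs describe:

```python
import heapq

data = [19, 9, 4, 10, 11, 8, 2]
heap = data[:]
heapq.heapify(heap)  # bottom-up heap construction

# Popping the smallest item n times re-establishes the heap
# invariant after each pop (O(log n) each), giving an
# O(n log n) sort overall.
sorted_out = [heapq.heappop(heap) for _ in range(len(heap))]
print(sorted_out)  # [2, 4, 8, 9, 10, 11, 19]
```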
@@ -185,8 +185,8 @@ The module defines the following:
 -----------------------------
 
 Solaris and derivatives have ``/dev/poll``. While :c:func:`!select` is
-O(highest file descriptor) and :c:func:`!poll` is O(number of file
-descriptors), ``/dev/poll`` is O(active file descriptors).
+*O*\ (*highest file descriptor*) and :c:func:`!poll` is *O*\ (*number of file
+descriptors*), ``/dev/poll`` is *O*\ (*active file descriptors*).
 
 ``/dev/poll`` behaviour is very close to the standard :c:func:`!poll`
 object.
@@ -381,8 +381,8 @@ scalability for network servers that service many, many clients at the same
 time. :c:func:`!poll` scales better because the system call only requires listing
 the file descriptors of interest, while :c:func:`!select` builds a bitmap, turns
 on bits for the fds of interest, and then afterward the whole bitmap has to be
-linearly scanned again. :c:func:`!select` is O(highest file descriptor), while
-:c:func:`!poll` is O(number of file descriptors).
+linearly scanned again. :c:func:`!select` is *O*\ (*highest file descriptor*), while
+:c:func:`!poll` is *O*\ (*number of file descriptors*).
 
 
 .. method:: poll.register(fd[, eventmask])
@@ -1876,7 +1876,7 @@ Basic customization
 
 This is intended to provide protection against a denial-of-service caused
 by carefully chosen inputs that exploit the worst case performance of a
-dict insertion, O(n\ :sup:`2`) complexity. See
+dict insertion, *O*\ (*n*\ :sup:`2`) complexity. See
 http://ocert.org/advisories/ocert-2011-003.html for details.
 
 Changing hash values affects the iteration order of sets.
@@ -369,7 +369,7 @@ Miscellaneous options
 
 Hash randomization is intended to provide protection against a
 denial-of-service caused by carefully chosen inputs that exploit the worst
-case performance of a dict construction, O(n\ :sup:`2`) complexity. See
+case performance of a dict construction, *O*\ (*n*\ :sup:`2`) complexity. See
 http://ocert.org/advisories/ocert-2011-003.html for details.
 
 :envvar:`PYTHONHASHSEED` allows you to set a fixed value for the hash
@@ -1196,7 +1196,7 @@ Optimizations
 
 * Multiplication of large long integers is now much faster thanks to an
   implementation of Karatsuba multiplication, an algorithm that scales better than
-  the O(n\*n) required for the grade-school multiplication algorithm. (Original
+  the *O*\ (*n*\ :sup:`2`) required for the grade-school multiplication algorithm. (Original
   patch by Christopher A. Craig, and significantly reworked by Tim Peters.)
 
 * The ``SET_LINENO`` opcode is now gone. This may provide a small speed
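The recurrence behind the speedup this hunk mentions can be sketched in a few lines. This is a toy Karatsuba in pure Python (CPython's real implementation is in C inside the int type), shown only to make the "three half-size multiplies instead of four" idea concrete:

```python
def karatsuba(x: int, y: int) -> int:
    """Toy Karatsuba multiply for non-negative ints (illustrative only)."""
    if x < 10 or y < 10:
        return x * y
    # Split both numbers around n bits: x = xh*2**n + xl, y = yh*2**n + yl.
    n = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << n) - 1
    xh, xl = x >> n, x & mask
    yh, yl = y >> n, y & mask
    a = karatsuba(xh, yh)
    c = karatsuba(xl, yl)
    # One extra multiply replaces the two cross products xh*yl and xl*yh.
    b = karatsuba(xh + xl, yh + yl) - a - c
    return (a << (2 * n)) + (b << n) + c

big1 = 12345678901234567890
big2 = 98765432109876543210
print(karatsuba(big1, big2) == big1 * big2)  # True
```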
@@ -1308,7 +1308,7 @@ complete list of changes, or look through the CVS logs for all the details.
 partially sorted order such that, for every index *k*, ``heap[k] <=
 heap[2*k+1]`` and ``heap[k] <= heap[2*k+2]``. This makes it quick to remove the
 smallest item, and inserting a new item while maintaining the heap property is
-O(lg n). (See https://xlinux.nist.gov/dads//HTML/priorityque.html for more
+*O*\ (log *n*). (See https://xlinux.nist.gov/dads//HTML/priorityque.html for more
 information about the priority queue data structure.)
 
 The :mod:`heapq` module provides :func:`~heapq.heappush` and :func:`~heapq.heappop` functions
@@ -282,7 +282,7 @@ How does the :class:`~collections.OrderedDict` work? It maintains a
 doubly linked list of keys, appending new keys to the list as they're inserted.
 A secondary dictionary maps keys to their corresponding list node, so
 deletion doesn't have to traverse the entire linked list and therefore
-remains O(1).
+remains *O*\ (1).
 
 The standard library now supports use of ordered dictionaries in several
 modules.
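A short sketch of the behavior described in this hunk: deleting from an :class:`~collections.OrderedDict` unlinks the key's node directly (found via the side dictionary), so no traversal is needed, and insertion order is otherwise preserved.

```python
from collections import OrderedDict

od = OrderedDict.fromkeys("abcde")
del od["c"]           # O(1): the linked-list node is found via the side dict
od.move_to_end("a")   # relinking a node is likewise constant time
print(list(od))       # ['b', 'd', 'e', 'a']
```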
@@ -174,7 +174,7 @@ Features
 b or c are now hashable. (Contributed by Antoine Pitrou in :issue:`13411`.)
 
 * Arbitrary slicing of any 1-D arrays type is supported. For example, it
-  is now possible to reverse a memoryview in O(1) by using a negative step.
+  is now possible to reverse a memoryview in *O*\ (1) by using a negative step.
 
 API changes
 -----------
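A short sketch of the feature noted in this hunk: slicing a :class:`memoryview` with a negative step reverses it in constant time, because no bytes are copied; the reversed view still aliases the original buffer.

```python
data = bytearray(b"abcde")
view = memoryview(data)

rev = view[::-1]       # O(1): no bytes are moved
print(bytes(rev))      # b'edcba'

data[0] = ord("z")     # the reversed view still aliases the buffer
print(bytes(rev))      # b'edcbz'
```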
@@ -1462,7 +1462,7 @@ expression, but there's no trailing brace. For example, f"{i=".
 .. nonce: Jf6gAj
 .. section: Core and Builtins
 
-Cache the result of :c:func:`PyCode_GetCode` function to restore the O(1)
+Cache the result of :c:func:`PyCode_GetCode` function to restore the *O*\ (1)
 lookup of the :attr:`~types.CodeType.co_code` attribute.
 
 ..
@@ -429,7 +429,7 @@ an awaitable object. Patch by Kumar Aditya.
 Speed up setting or deleting mutable attributes on non-dataclass subclasses
 of frozen dataclasses. Due to the implementation of ``__setattr__`` and
 ``__delattr__`` for frozen dataclasses, this previously had a time
-complexity of ``O(n)``. It now has a time complexity of ``O(1)``.
+complexity of *O*\ (*n*). It now has a time complexity of *O*\ (1).
 
 ..
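A hedged illustration of the situation this NEWS entry refers to: a frozen dataclass's generated ``__setattr__`` rejects assignments to dataclass fields, but a plain (non-dataclass) subclass can still set its own non-field attributes, and it is that field-membership check whose cost the entry discusses. The class names here are made up for the example.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: int
    y: int

class TaggedPoint(Point):  # non-dataclass subclass of a frozen dataclass
    pass

p = TaggedPoint(1, 2)
p.tag = "origin"           # allowed: 'tag' is not a dataclass field

blocked = False
try:
    p.x = 5                # field assignment still raises
except FrozenInstanceError:
    blocked = True
print(p.tag, blocked)
```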
@@ -2648,7 +2648,7 @@ module.
 .. nonce: THJSYB
 .. section: Library
 
-Changed FeedParser feed() to avoid O(N\ :sup:`2`) behavior when parsing long line.
+Changed FeedParser feed() to avoid *O*\ (*n*\ :sup:`2`) behavior when parsing long line.
 Original patch by Raymond Hettinger.
 
 ..