* test_timerfd_TFD_TIMER_ABSTIME() and
test_timerfd_ns_TFD_TIMER_ABSTIME() tolerate a difference of 50 us.
* test_timerfd_negative() checks if os.TFD_TIMER_CANCEL_ON_SET is
defined.
This commit removes an extra ':'. I believe the extra colon causes a
display error: above the expression
`round(math.pi, ndigits=2) == round(22 / 7, ndigits=2)`,
the rendered page literally displays `.. doctest::`.
After removing the extra colon, the page no longer displays `.. doctest::`.
Add wrappers for timerfd_create, timerfd_settime, and timerfd_gettime to the os module.
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Co-authored-by: Adam Turner <9087854+AA-Turner@users.noreply.github.com>
Co-authored-by: Erlend E. Aasland <erlend.aasland@protonmail.com>
Co-authored-by: Victor Stinner <vstinner@python.org>
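A minimal usage sketch of the new wrappers (Linux-only; this assumes the
keyword-only initial/interval parameters of os.timerfd_settime() and a
(next_expiration, interval) tuple returned by os.timerfd_gettime()):

    import os
    import struct
    import time

    # Create a timer file descriptor driven by the monotonic clock.
    fd = os.timerfd_create(time.CLOCK_MONOTONIC, flags=os.TFD_CLOEXEC)
    try:
        # Arm it: first expiration after 1 second, then every 0.5 seconds.
        os.timerfd_settime(fd, initial=1.0, interval=0.5)
        # Each read() returns an 8-byte count of expirations since the
        # previous read, blocking until at least one expiration occurred.
        count, = struct.unpack("Q", os.read(fd, 8))
        print("expirations:", count)
        # Time remaining until the next expiration, and the current interval.
        next_expiration, interval = os.timerfd_gettime(fd)
    finally:
        os.close(fd)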
Fix test_tools.test_freeze on FreeBSD: run "make distclean" instead
of "make clean" in the copied source directory to also remove the
"python" program.
Other test_freeze changes:
* Log the executed commands and directories, and the current directory.
* No longer use the make -C option to change the directory; instead, use
the subprocess cwd parameter (see the sketch after this list).
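A minimal sketch of the new invocation pattern (the `srcdir` path below is
a hypothetical copied source directory):

    import subprocess

    srcdir = "/tmp/copied-python-source"   # hypothetical copied source directory
    cmd = ["make", "distclean"]
    # Log the executed command and the directory it runs in.
    print(f"Run {cmd} in {srcdir}", flush=True)
    # The cwd parameter replaces "make -C <srcdir> distclean".
    subprocess.run(cmd, cwd=srcdir, check=True)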
sys.audit() now has assertions to check that the event argument is
not NULL and that the format argument does not use the "N" format.
Add tests on PySys_AuditTuple().
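For context, a hedged Python-level sketch of the audit API whose C entry
points gained these assertions (the event name is hypothetical):

    import sys

    def hook(event, args):
        # event is always a non-empty str; args is always a tuple.
        if event == "myapp.open":              # hypothetical event name
            print("audit:", event, args)

    sys.addaudithook(hook)
    # The arguments after the event name are packed into a tuple and passed
    # to every registered hook.
    sys.audit("myapp.open", "/tmp/example", "r")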
Replace os.kill() with proc.kill(), which catches PermissionError.
Rewrite _kill_with_event():
* Use the subprocess context manager ("with proc:").
* Use sleeping_retry() to wait until the child process is ready (see the
sketch after this list).
* Replace SIGINT with proc.kill() on error.
* Replace the hardcoded 10 seconds with SHORT_TIMEOUT to wait until the
process is ready.
* Replace the hardcoded 0.5 seconds with SHORT_TIMEOUT to wait for the
process exit.
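A hedged sketch of the pattern, assuming test.support.sleeping_retry() and
support.SHORT_TIMEOUT; the marker-file readiness check and the child code
are purely illustrative:

    import os
    import subprocess
    import sys
    import tempfile
    from test import support

    # Illustrative child process that creates a marker file once "ready".
    marker = os.path.join(tempfile.mkdtemp(), "ready")
    code = f"open({marker!r}, 'w').close(); import time; time.sleep(60)"

    with subprocess.Popen([sys.executable, "-c", code]) as proc:
        try:
            # Poll until the child is ready; sleeping_retry() sleeps between
            # attempts and gives up after SHORT_TIMEOUT instead of a
            # hardcoded 10 seconds.
            for _ in support.sleeping_retry(support.SHORT_TIMEOUT,
                                            "child never became ready"):
                if os.path.exists(marker):
                    break
        finally:
            # The real code calls proc.kill() on the error path; here it also
            # ends the long-sleeping illustrative child.
            proc.kill()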
"make regen-pegen" now creates a temporary file called "parser.c.new"
instead of "parser.new.c". Previously, if "make clinic" was run in
parallel with "make regen-all", clinic may try but fail to open
"parser.new.c" if the temporay file was removed in the meanwhile.
Increase support.LOOPBACK_TIMEOUT from 5 to 10 seconds. Also increase
the timeout depending on the --timeout option. For example, for a
test timeout of 40 minutes (ARM Raspbian 3.x), use a LOOPBACK_TIMEOUT
of 20 seconds instead of the previous 5 seconds.
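A hedged sketch of the scaling; the 1/120 factor is only inferred from the
example above (2400 s / 120 = 20 s) and is an assumption, not necessarily
the exact regrtest code:

    from test import support

    def scale_loopback_timeout(test_timeout):
        # test_timeout: value of the regrtest --timeout option, in seconds.
        if test_timeout is not None:
            support.LOOPBACK_TIMEOUT = max(support.LOOPBACK_TIMEOUT,
                                           test_timeout / 120)

    scale_loopback_timeout(40 * 60)   # 40 minutes -> LOOPBACK_TIMEOUT = 20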
Fix a deadlock in test_socket when the server fails with a timeout but
the client is still running in its thread. Don't hold a lock while
calling the cleanup functions in doCleanups(): one of the cleanup
functions waits until the client completes, whereas the client could
deadlock if it called addCleanup() in such a situation.
doCleanups() is called when the server has completed, but the client can
still be running in its thread, especially if the server failed with a
timeout. Not holding a lock in doCleanups() prevents a deadlock between
addCleanup() called in the client and doCleanups() waiting on
self.done.wait() of ThreadableTest._setUp().
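A hedged, generic sketch of the locking pattern behind the fix (the lock,
list, and function names are illustrative, not the actual test_socket
code): only the bookkeeping runs under the lock, and the cleanup callables
are called without holding it.

    import threading

    _lock = threading.Lock()   # illustrative lock shared with the client thread
    _cleanups = []

    def addCleanup(func):
        # Called from the client thread; only the list mutation is locked.
        with _lock:
            _cleanups.append(func)

    def doCleanups():
        # Snapshot under the lock, then run the cleanups *without* holding it,
        # so a cleanup that waits for the client cannot deadlock against
        # addCleanup() running in the client thread.
        with _lock:
            pending = list(_cleanups)
            _cleanups.clear()
        for func in reversed(pending):
            func()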
This adds a new field 'state' to PyThreadState that can take one of
three values: _Py_THREAD_ATTACHED, _Py_THREAD_DETACHED, or
_Py_THREAD_GC. The "attached" and "detached" states correspond closely
to acquiring and releasing the GIL. The "gc" state is currently unused,
but will be used to implement stop-the-world GC for --disable-gil
builds in the near future.
test_builtin and test_socketserver no longer use signal.alarm() to
implement a watchdog with a hardcoded timeout (2 and 60 seconds,
respectively). The Python test runner regrtest already has two
watchdogs: faulthandler and a timeout on running worker processes.
Tests using a short hardcoded timeout can fail on the slowest buildbots
just because the timeout is too short.
* Use the `FindFirstFile` Win32 API to fix a bug where `ntpath.realpath()`
stopped traversing a series of paths when a (handled)
`ERROR_ACCESS_DENIED` or `ERROR_SHARING_VIOLATION` occurred.
* Update the docs to reflect that `ntpath.realpath()` eliminates MS-DOS
style names (see the example after this list).
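An illustrative example of the documented behavior (Windows-only; the
short path is an assumption about a typical installation):

    import ntpath

    # realpath() resolves MS-DOS style (8.3) short names to their long forms.
    print(ntpath.realpath(r"C:\PROGRA~1"))   # typically "C:\Program Files"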
When using worker processes (-jN) with the --verbose3 option, regrtest
can now display the worker output even if a worker process crashes.
Previously, sys.stdout and sys.stderr were replaced, so the worker
output was lost on a crash.
We do the following:
* add a per-interpreter XID registry (PyInterpreterState.xidregistry)
* put heap types there (keep static types in _PyRuntimeState.xidregistry)
* clear the registries during interpreter/runtime finalization
* avoid duplicate entries in the registry (when _PyCrossInterpreterData_RegisterClass() is called more than once for a type)
* use Py_TYPE() instead of PyObject_Type() in _PyCrossInterpreterData_Lookup()
The per-interpreter registry helps preserve isolation between interpreters. This is important when heap types are registered, which is something we haven't done yet but will likely do soon.