* Move Block and BlockParser classes to a new libclinic.block_parser
module.
* Move Language and PythonLanguage classes to a new
libclinic.language module.
On Windows, time.monotonic() now uses the QueryPerformanceCounter()
clock, which has a resolution better than 1 us, instead of the
GetTickCount64() clock, which has a resolution of 15.6 ms.
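One way to check which clock backs time.monotonic() on a given platform
is the documented time.get_clock_info() API; a small sketch (the exact
implementation string reported on Windows is an assumption here):

```python
import time

info = time.get_clock_info("monotonic")
print(info.implementation)  # should report "QueryPerformanceCounter()" on Windows
print(info.resolution)      # now well under 1e-6 seconds on Windows
```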
The fildes converter of Argument Clinic now always calls
PyObject_AsFileDescriptor(), not only for the limited C API.
The _PyLong_FileDescriptor_Converter() converter remains as a fallback
for when PyObject_AsFileDescriptor() cannot be used.
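At the Python level, the practical effect is that a function whose
parameter uses the fildes converter accepts both plain integer
descriptors and objects with a fileno() method. os.fsync() is one such
function, used here purely as an illustration:

```python
import os

with open("example.txt", "w") as f:
    os.fsync(f.fileno())  # a plain integer file descriptor
    os.fsync(f)           # a file object; PyObject_AsFileDescriptor()
                          # calls its fileno() method under the hood
```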
There are now at least two bytecodes that may attempt to optimize:
JUMP_BACK and, more recently, COLD_EXIT.
Only JUMP_BACK was counting the attempt in the stats.
This moves that counter into uop_optimize itself, so it is incremented
no matter where the optimizer is called from.
The test case had a race condition: if `q.task_done()` was executed
after `shutdown(immediate=True)`, then it would raise an exception
because the immediate shutdown already emptied the queue. This happened
rarely with the GIL (due to the switching interval), but frequently in
the free-threaded build.
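A minimal single-threaded sketch of the failure mode (Python 3.13+,
which added Queue.shutdown(); in the test the two calls raced from
different threads):

```python
import queue

q = queue.Queue()
q.put("job")
# Immediate shutdown drains the queue and marks each drained item done,
# so the unfinished-task counter is already back to zero here.
q.shutdown(immediate=True)
q.task_done()  # raises ValueError: task_done() called too many times
```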
The free-threaded GC only does full collections, so it uses a threshold that
is the maximum of a fixed value (default 2000) and a value proportional to the
number of live objects. If there were many live objects after the previous
collection, the threshold may exceed 10,000, causing
`test_indirect_calls_with_gc_disabled` to fail.
This manually sets the threshold to `(1000, 0, 0)` for the test, as sketched
below. The `0` disables the proportional scaling.
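Roughly how the fix can look in a test (gc.set_threshold() and
gc.get_threshold() are the real API; the test body is a hypothetical
stand-in):

```python
import gc

old_threshold = gc.get_threshold()
# 1000 keeps the fixed trigger low; the 0 disables the free-threaded
# GC's proportional scaling against the live-object count.
gc.set_threshold(1000, 0, 0)
try:
    run_the_test()  # hypothetical stand-in for the actual test body
finally:
    gc.set_threshold(*old_threshold)
```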
test_match_tests now saves and restores patterns.
Add get_match_tests() function to libregrtest.filter.
Previously, running test_regrtest multiple times in a row only ran the
tests once: "./python -m test test_regrtest -R 3:3".
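With the new accessor, the save/restore can look roughly like this. The
exact signatures live in Lib/test/libregrtest/filter.py; the pattern
list shown is hypothetical, and this is a sketch rather than the
verbatim test code:

```python
from test.libregrtest.filter import get_match_tests, set_match_tests

saved = get_match_tests()  # save the current patterns
try:
    set_match_tests([("test_spam", True)])  # hypothetical pattern list
    ...
finally:
    set_match_tests(saved)  # restore, so repeated runs start clean
```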
* GH-116554: Relax list.sort()'s notion of "descending" run
Rewrote `count_run()` so that sub-runs of equal elements no longer end a descending run. Both ascending and descending runs can have arbitrarily many sub-runs of arbitrarily many equal elements now. This is tricky, because we only use ``<`` comparisons, so checking for equality doesn't come "for free". Surprisingly, it turned out there's a very cheap (one comparison) way to determine whether an ascending run consisted of all-equal elements. That sealed the deal.
In addition, after a descending run is reversed in-place, we now go on to see whether it can be extended by an ascending run that just happens to be adjacent. This succeeds in finding at least one additional element to append about half the time, and so appears to more than repay its cost (the savings come from getting to skip a binary search, when a short run is artificially forced to length MINRUN later, for each new element `count_run()` can add to the initial run).
While these ideas have been in the back of my mind for years, a question on StackOverflow pushed them into action:
https://stackoverflow.com/questions/78108792/
They were wondering why it took about 4x longer to sort a list like:
[999_999, 999_999, ..., 2, 2, 1, 1, 0, 0]
than "similar" lists. Of course that runs very much faster after this patch.
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Pieter Eendebak <pieter.eendebak@gmail.com>