blocksize was hardcoded to 8192, preventing efficient upload when using
a file-like body. Add a blocksize argument to __init__ so users can
configure the blocksize to fit their needs.
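For example, a caller could raise the blocksize like this (a minimal
sketch; the server address and file name are made up):

    import http.client

    # blocksize controls how many bytes send() reads from the
    # file-like body per iteration (the default remains 8192).
    conn = http.client.HTTPConnection("localhost", 8000,
                                      blocksize=512 * 1024)
    with open("large-file.bin", "rb") as body:
        conn.request("PUT", "/upload", body=body)
    print(conn.getresponse().status)
    conn.close()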
I tested this by uploading data from /dev/zero to a web server that
drops the received data, to measure the overhead of HTTPConnection.send()
with a file-like object.
Here is an example 10g upload with the default buffer size (8192):
$ time ~/src/cpython/release/python upload-httplib.py 10 https://localhost:8000/
Uploaded 10.00g in 17.53 seconds (584.00m/s)
real 0m17.574s
user 0m8.887s
sys 0m5.971s
Same with 512k blocksize:
$ time ~/src/cpython/release/python upload-httplib.py 10 https://localhost:8000/
Uploaded 10.00g in 6.60 seconds (1551.15m/s)
real 0m6.641s
user 0m3.426s
sys 0m2.162s
In real-world usage the difference will be smaller, depending on the
local and remote storage and the network.
See https://github.com/nirs/http-bench for more info.
While these changes are technically purely internal, bpo-31845 was a
fairly significant externally visible bug caused by them (environment
variable based configuration was being ignored due to a change in the
relative order of reading the environment and reading command line
settings, and the test suite was only testing the command line options).
Hence this note, to essentially say "If you see odd startup problems in
3.7 that you've never seen in previous releases, it's probably our
fault, so let us know and we'll fix it".
All Blake2 params have to be encoded in little-endian byte order. For
the two multi-byte integer params, leaf_length and node_offset, that
means that assigning a native-endian integer to them appears to work on
little-endian platforms, but gives the wrong result on big-endian. The
current libb2 API doesn't make that very clear, and @sneves is working
on new API functions in the GH issue linked below. In the meantime, we can work
around the problem by explicitly assigning little-endian values to the
parameter block.
See https://github.com/BLAKE2/libb2/issues/12.
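To illustrate the endianness issue with Python's struct module (the
value is made up; this just shows why explicit little-endian encoding
is needed):

    import struct

    leaf_length = 4096  # 0x00001000
    # "=" uses the host byte order, which matches the BLAKE2 spec only
    # on little-endian machines; "<" is correct on every platform.
    native = struct.pack("=I", leaf_length)
    little = struct.pack("<I", leaf_length)
    assert little == b"\x00\x10\x00\x00"
    # On a big-endian host, native would be b"\x00\x00\x10\x00".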
* bpo-31310: multiprocessing's semaphore tracker should be launched again if crashed
* Avoid mucking with process state in test.
Add a warning if the semaphore process died, as semaphores may then be leaked.
* Add NEWS entry
* bpo-31308: If multiprocessing's forkserver dies, launch it again when necessary (see the sketch after this list).
* Fix test on Windows
* Add NEWS entry
* Adopt a different approach: ignore SIGINT and SIGTERM, as in semaphore tracker.
* Fix comment
* Make sure the test doesn't muck with process state
* Also test previously-started processes
* Update 2017-08-30-17-59-36.bpo-31308.KbexyC.rst
* Avoid masking SIGTERM in forkserver. It's not necessary and causes a race condition in test_many_processes.
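For context, a minimal sketch of the forkserver start method whose
helper process these fixes relaunch on demand (POSIX-only):

    import multiprocessing as mp

    def work(q):
        q.put("hello")

    if __name__ == "__main__":
        # A dedicated forkserver process forks the workers; with these
        # fixes it is started again if it has died in the meantime.
        ctx = mp.get_context("forkserver")
        q = ctx.Queue()
        p = ctx.Process(target=work, args=(q,))
        p.start()
        print(q.get())
        p.join()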
When a single .c file contains several functions and/or methods with
the same name, a safety _METHODDEF #define statement is generated
only for one of them.
This fixes the bug by using the full name of the function, rather than
just the short name, to avoid duplicates.
* bpo-28643: Record profile-opt build progress with stamp files
The profile-opt makefile target is expensive to build. Since the
makefile does not contain complete dependency information for this
target, much extra work can get done if the build is interrupted and
re-started. Even running "make" a second time will result in a huge
amount of redundant work.
As a minimal fix (rather than removing recursive "make" and adding a
proper dependency graph), split the profile-opt target into parts:
- ensure tree is clean (profile-clean-stamp)
- build with profile generation enabled (profile-gen-stamp)
- run task to generate profile information (profile-run-stamp)
- build optimized Python using above information (profile-opt)
We use "stamp" files to record completion of the steps. Running
"make clean" will not remove the profile-run-stamp file.
Other minor changes:
- remove the "build_all_use_profile" target. I don't expect callers
of the makefile to use this target so that should be safe.
- remove execution of "profile-removal" at end of "profile-opt". I
don't see any reason not to keep the profile information, given
the cost of generating it. Removing the "profile-run-stamp" file
will force re-generation of it.
Add new time functions:
* time.clock_gettime_ns()
* time.clock_settime_ns()
* time.monotonic_ns()
* time.perf_counter_ns()
* time.process_time_ns()
* time.time_ns()
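For example (a quick sketch of the new Python-level functions):

    import time

    # The _ns variants return Python ints counting nanoseconds, avoiding
    # the float rounding that affects time.time() for large timestamps.
    t = time.time_ns()
    print(t, "ns =", t / 1e9, "s")

    start = time.perf_counter_ns()
    # ... code being timed ...
    elapsed_ns = time.perf_counter_ns() - start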
Add new _PyTime functions:
* _PyTime_FromTimespec()
* _PyTime_FromNanosecondsObject()
* _PyTime_FromTimeval()
Other changes:
* Also add os.times() tests to test_os.
* pytime_fromtimespec() and pytime_fromtimeval() now return
_PyTime_MAX or _PyTime_MIN on overflow, rather than undefined
behaviour
* _PyTime_FromNanoseconds() parameter type changes from long long to
_PyTime_t
Replace occurrences of nested comments in the blake2 reference
implementation with a preprocessor directive for disabling unused code.
`blake2s-load-xop.h` is conditionally pulled in only on chips with XOP
support, among others the AMD Bulldozer. The malformed comments in the
source file break the build of `hashlib`'s `_blake2` on GCC 6.3.0.
The official reference code on GitHub uses `#if`, so this change should
be uncontroversial.
Modify the code to use the ncurses is_pad() function instead of checking
the WINDOW _flags field. If the platform does not provide is_pad(), the
existing check of the field is used instead.
Note: This change does not drop support for platforms that have neither
the WINDOW _flags field nor is_pad().
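For context, pads are the window type this detection is about; a
minimal sketch of using one from the Python curses module (needs a
real terminal to run):

    import curses

    def main(stdscr):
        # A pad is a window that may be bigger than the screen; the C
        # module must recognize pads to use the 6-argument refresh().
        pad = curses.newpad(100, 100)
        pad.addstr(0, 0, "hello from a pad")
        pad.refresh(0, 0, 0, 0, 10, 40)
        stdscr.getch()

    curses.wrapper(main)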
Cleanup pymalloc:
* Rename _PyObject_Alloc() to pymalloc_alloc()
* Rename _PyObject_FreeImpl() to pymalloc_free()
* Rename _PyObject_Realloc() to pymalloc_realloc()
* pymalloc_alloc() and pymalloc_realloc() no longer fall back on the
raw allocator; the caller must now do it
* Add "success" and "failed" labels to pymalloc_alloc() and
pymalloc_free()
* pymalloc_alloc() and pymalloc_free() no longer update
num_allocated_blocks: that should now be done by the caller
* _PyObject_Calloc() is now responsible for filling the memory block
allocated by pymalloc with zeros
* Simplify pymalloc_alloc() prototype
* _PyObject_Realloc() now calls _PyObject_Malloc() rather than
calling pymalloc_alloc() directly
_PyMem_DebugRawAlloc() and _PyMem_DebugRawRealloc():
* document the layout of a memory block
* don't increase the serial number if the allocation failed
* check for integer overflow before computing the total size
* add a 'data' variable to make the code easier to follow
test_setallocators() of _testcapimodule.c now also tests the context.