No need to store them in `i32`: the JPEG specification guarantees that
they fit in 16 bits, even for extended JPEGs. So, in this patch, we
replace `i32` with `i16`, which almost halves memory usage :^)
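Schematically, the change is just the storage type (names here are
illustrative, not the actual decoder's):

    // Before: 4 bytes per coefficient.
    // Vector<i32> m_coefficients;
    // After: the spec says coefficients fit in 16 bits, so:
    Vector<i16> m_coefficients; // 2 bytes per coefficient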
When creating a copy of the font containing only the glyphs that are in
use, we previously looped over all possible code points instead of the
range of code points that are actually in use (and allocated) in the
font. This is a problem, since we index into the array of widths to find
out whether a given glyph is used. That array is only as long as the
number of glyphs the font was created with, causing an out-of-bounds
read whenever that number is less than our maximum.
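A minimal sketch of the fix (hypothetical names, not the actual LibGfx
code):

    // Only visit the glyphs the font was actually created with; the
    // widths array has exactly glyph_count entries, so indexing it with
    // every possible code point read out of bounds.
    for (size_t i = 0; i < glyph_count; ++i) {
        if (glyph_widths[i] == 0)
            continue; // glyph unused, skip it
        copy_glyph(i);
    }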
Loads of changes that are tightly connected... :/
* Change lambdas to static functions
* Add spec docs to those functions
* Keep the current scope around as a parameter
* Add wrapping classes for some Certificate members
* Parse EC and ECDSA data from certificates
The spec is at best misleading here: it suggests that max_symbol should
be set to "num_code_lengths" if it's not explicitly stored.
But that num_code_lengths doesn't refer to the num_code_lengths
mentioned a few lines further up in the spec; it means alphabet_size!
(I had to cheat and look at libwebp instead of the spec for this: see
vp8l_dec.c's ReadHuffmanCode(), which passes alphabet_size to
ReadHuffmanCodeLengths() as num_symbols, and ReadHuffmanCodeLengths()
then sets max_symbol to that.)
I haven't yet found a file that uses max_symbol, so this isn't actually
tested. But it's close to what's in libwebp, so maybe it works!
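Concretely, the logic there boils down to something like this
(transcribed from memory, so treat it as a sketch rather than the exact
code):

    int max_symbol;
    if (TRY(bit_stream.read_bit())) {
        // max_symbol is stored explicitly in the stream.
        int length_nbits = 2 + 2 * TRY(bit_stream.read_bits(3));
        max_symbol = 2 + TRY(bit_stream.read_bits(length_nbits));
    } else {
        // Not stored: default to the alphabet size, not to the number
        // of code lengths read a few lines earlier.
        max_symbol = alphabet_size;
    }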
This improves the decompression time of `clang-15.0.7.src.tar.xz` from
41 seconds down to about 5 seconds.
The reason for this very significant improvement is that LZMA, the
compression method underlying XZ, fills its range decoder one byte at a
time, causing a lot of overhead at the syscall boundary.
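In other words, reads like the following byte-at-a-time fill now get
served from a userspace buffer instead of costing a syscall each (a
sketch with stream names quoted from memory, not the actual patch):

    // Route the range decoder's one-byte reads through a buffered
    // stream so they hit memory instead of read(2) almost every time.
    auto buffered = TRY(Core::BufferedFile::create(move(file)));
    u8 byte = TRY(buffered->read_value<u8>());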
I was originally thinking in the wrong direction when adding this limit:
we can at most read from the buffer until we reach the current write
head. Since that write head is the reference point for the distance,
we need to limit ourselves to that instead of the seekback limit (which
is the maximum of how far back the distance can reach).
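As a sketch (hypothetical helper names; the real logic lives in the
circular buffer):

    // `distance` is measured back from the write head, so each read may
    // advance at most up to the write head, not up to the full seekback
    // limit; overlapping copies therefore proceed in batches.
    while (length > 0) {
        auto batch = min(length, distance);
        copy_from_seekback(distance, batch); // stops at the write head
        length -= batch;
    }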
Missing:
* Transform support (used by virtually all lossless webp files)
* Meta prefix / entropy image support
Working:
* Decoding of regular image streams
* Color cache
This happens to be enough to decode
Tests/LibGfx/test-inputs/extended-lossless.webp
The canonical prefix code is very similar to deflate's, enough so that
this can use Compress::CanonicalCode (and take advantage of all the
recent performance improvements there).
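Roughly like this (signatures quoted from memory, so they may differ
slightly):

    // Build the canonical code from the per-symbol code lengths read
    // off the WebP stream, then decode symbols from the bit stream.
    auto code = TRY(Compress::CanonicalCode::from_bytes(code_lengths));
    u32 symbol = TRY(code.read_symbol(bit_stream));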
The current way we handle sync commands is very ugly and depends on a
lot of preconditions. Now that we have an end_io handler for a request,
we can use WaitQueue to do sync commands more elegantly.
This does still depend on the block layer sending one request at a time,
but this change is a step forward towards better IO handling.
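A hedged sketch of the shape this takes (member and function names
assumed, not copied from the driver):

    WaitQueue m_sync_wait_queue;

    void submit_sync_sqe(NVMeSubmission& submission)
    {
        // The end_io handler runs in the completion-interrupt path and
        // wakes us up once the command has finished.
        submit_sqe(submission, [this]() {
            m_sync_wait_queue.wake_all();
        });
        m_sync_wait_queue.wait_forever("NVMe sync command"sv);
    }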
There was a private variable named m_current_request which was used to
track a single request at a time; that single-request guarantee came
from the block layer, which waits on each IO. This design will break
down in the driver once the block layer removes that constraint.
Redesign the IO handling in a completely asynchronous way by maintaining
requests up to the queue depth. An NVMeIO struct is introduced to track
a submitted IO, along with other information such as whether the IO is
still being processed and an endio callback which will be called at the
end of a request.
A private HashMap is added, keyed on the command id of a request, with
NVMeIO as the value. The endio handler comes in handy when we are doing
a sync request and want to wake up the wait queue at the end.
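Roughly (field names as described above, everything else assumed):

    struct NVMeIO {
        RefPtr<AsyncBlockDeviceRequest> request;
        bool used { false };             // still being processed?
        Function<void()> end_io_handler; // invoked when the IO completes
    };

    // Keyed on the command id (cid) of the submission queue entry.
    HashMap<u16, NVMeIO> m_requests;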
This change also simplifies the code by removing some special conditions
in the submit_sqe function, etc., that were marked as FIXME for a long
time.
Using sq_tail as the cid makes the inherent assumption that we send only
one IO at a time. Use an atomic variable instead for the command id of a
submission queue entry.
As sq_tail is no longer used as the cid, remove m_prev_sq_tail, which
used to hold the last used sq_tail value.
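Something like this (a sketch, assuming 16-bit cids):

    Atomic<u16> m_cid {};

    // Monotonically increasing and decoupled from sq_tail; wrapping is
    // fine as long as no two in-flight commands share a cid, which the
    // queue depth bounds.
    u16 get_request_cid() { return m_cid.fetch_add(1); }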
The SID was duplicated between the process credentials and the protected
data. And to make matters worse, the credentials SID was not updated in
sys$setsid.
This patch fixes both issues by removing the SID from protected data and
updating the credentials SID everywhere.
This closes two race windows:
- ProcessGroup removed itself from the "all process groups" list in its
destructor. It was possible to walk the list between the last unref()
and the destructor invocation, and grab a pointer to a ProcessGroup
that was about to get deleted.
- sys$setsid() could end up creating a process group that already
existed, as there was a race window between checking if the PGID
is used, and actually creating a ProcessGroup with that PGID.
Now that it's no longer using LockRefPtr, we can actually move it into
protected data. (LockRefPtr couldn't be stored there because protected
data is immutable at times, and LockRefPtr uses some of its own bits
for locking.)