Update the osxkeychain backend with features required by the recent
updates to the credential subsystem.
* ba/osxkeychain-updates:
osxkeychain: store new attributes
osxkeychain: erase matching passwords only
osxkeychain: erase all matching credentials
osxkeychain: replace deprecated SecKeychain API
The strategy for compacting multiple reftable tables after many
operations have accumulated many entries has been improved to avoid
leaving too many tables uncollected.
* jt/reftable-geometric-compaction:
reftable/stack: use geometric table compaction
reftable/stack: add env to disable autocompaction
reftable/stack: expose option to disable auto-compaction
Adjust to an upcoming change to GNU make that breaks our Makefiles.
* tb/make-indent-conditional-with-non-spaces:
Makefile(s): do not enforce "all indents must be done with tab"
Makefile(s): avoid recipe prefix in conditional statements
vreportf(), which is used by error() and friends, has been taught
to give the error message printf-format string when its vsnprintf()
call fails, instead of showing nothing useful to identify the
nature of the error.
* rs/usage-fallback-to-show-message-format:
usage: report vsnprintf(3) failure
The codepaths that reach date_mode_from_type() have been updated to
pass "struct date_mode" by value to make them thread safe.
* rs/date-mode-pass-by-value:
date: make DATE_MODE thread-safe
The userdiff patterns for C# have been updated.
Acked-by: Johannes Sixt <j6t@kdbg.org>
cf. <c2154457-3f2f-496e-9b8b-c8ea7257027b@kdbg.org>
* sj/userdiff-c-sharp:
userdiff: better method/property matching for C#
Document and apply workaround for a buggy version of dash that
mishandles "local var=val" construct.
* jc/local-extern-shell-rules:
t1016: local VAR="VAL" fix
t0610: local VAR="VAL" fix
t: teach lint that RHS of 'local VAR=VAL' needs to be quoted
t: local VAR="VAL" (quote ${magic-reference})
t: local VAR="VAL" (quote command substitution)
t: local VAR="VAL" (quote positional parameters)
CodingGuidelines: quote assigned value in 'local var=$val'
CodingGuidelines: describe "export VAR=VAL" rule
On some systems PATH_MAX is not a hard limit. Support longer paths by
building them on the heap instead of using static buffers.
Take care to work around (arguably buggy) implementations of free(3)
that change errno by calling it only after using the errno value.
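A minimal sketch of the pattern being described, not the actual patch; the
helper name and its caller are hypothetical:
```
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build the path on the heap instead of a static PATH_MAX-sized buffer. */
static int remove_in_dir(const char *dir, const char *name)
{
	size_t len = strlen(dir) + strlen(name) + 2;	/* '/' and NUL */
	char *path = malloc(len);
	int ret, saved_errno;

	if (!path)
		return -1;
	snprintf(path, len, "%s/%s", dir, name);

	ret = remove(path);
	saved_errno = errno;	/* read errno before free(3) may clobber it */
	free(path);
	errno = saved_errno;	/* hand the original errno back to the caller */
	return ret;
}
```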
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When rerere handles a conflict with an unmatched opening conflict marker
in a file with other conflicts, it will fail to create a preimage and
also fail to allocate the status member of struct rerere_dir. Currently the
status member is allocated after the error handling. This will lead to a
SEGFAULT when the status member is accessed during cleanup of the failed
parse.
Additionally, in subsequent executions of rerere, after removing the
MERGE_RR.lock manually, rerere crashes for a similar reason. MERGE_RR
points to a conflict id that has no preimage, therefore the status
member is not allocated and a SEGFAULT happens when trying to check if a
preimage exists.
Solve this by making sure the status field is allocated correctly and adding
tests to prevent the bug from reoccurring.
This does not fix the root cause, failing to parse stray conflict
markers, but I don't think we can do much better than recognizing it,
printing an error, and moving on gracefully.
Signed-off-by: Marcel Röthke <marcel@roethke.info>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The variable that holds the value read from the core.excludesFile
configuration variable used to leak, which has been corrected.
* jc/unleak-core-excludesfile:
config: do not leak excludes_file
A fix was added to work around a regression in libcURL 8.7.0 (which has
already been fixed in their tip of the tree).
* jk/libcurl-8.7-regression-workaround:
remote-curl: add Transfer-Encoding header only for older curl
INSTALL: bump libcurl version to 7.21.3
http: reset POSTFIELDSIZE when clearing curl handle
The "shared repository" test in the t0610 reftable test failed
under restrictive umask setting (e.g. 007), which has been
corrected.
* ps/t0610-umask-fix:
t0610: execute git-pack-refs(1) with specified umask
t0610: make `--shared=` tests reusable
"git add -u <pathspec>" and "git commit [-i] <pathspec>" did not
diagnose a pathspec element that did not match any files in certain
situations, unlike "git add <pathspec>" did.
* gt/add-u-commit-i-pathspec-check:
builtin/add: error out when passing untracked path with -u
builtin/commit: error out when passing untracked path with -i
revision: optionally record matches with pathspec elements
A config parser callback function fell through instead of returning
after recognising and processing a variable, wasting cycles, which
has been corrected.
* ds/fetch-config-parse-microfix:
fetch: return when parsing submodule.recurse
A file descriptor leak in an error codepath, used when "git apply
--reject" fails to create the *.rej file, has been corrected.
* rs/apply-reject-fd-leakfix:
apply: don't leak fd on fdopen() error
"git apply" has been updated to lift the hardcoded pathname length
limit, which in turn allowed a mksnpath() function that is no
longer used.
* rs/apply-lift-path-length-limit:
path: remove mksnpath()
apply: avoid fixed-size buffer in create_one_file()
The Windows binary used to decide whether to use Unix-domain sockets at
build time, but it has learned to make the decision at runtime instead.
* ma/win32-unix-domain-socket:
Win32: detect unix socket support at runtime
nfvasprintf() has an 8KB limit, but it's not relevant, as its result is
combined with other strings and added to a 1KB buffer by its caller.
That 1KB limit is not mentioned in RFC 9051, which specifies IMAP.
While 1KB is plenty for user names, passwords and mailbox names,
there's no point in limiting our commands like that. Call xstrvfmt()
instead of open-coding it and use strbuf to format the command to
send, as we need its length. Fail hard if it exceeds INT_MAX, because
socket_write() can't take more than that.
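A rough sketch of the approach, assuming Git's strbuf API (`STRBUF_INIT`,
`strbuf_vaddf()`, `strbuf_release()` and `die()` are real); the wrapper and
its write callback are hypothetical stand-ins for the imap-send internals:
```
#include "git-compat-util.h"
#include "strbuf.h"

static int send_command(int (*write_fn)(const char *buf, int len),
			const char *fmt, va_list ap)
{
	struct strbuf cmd = STRBUF_INIT;
	int ret;

	strbuf_vaddf(&cmd, fmt, ap);	/* no fixed 8KB or 1KB buffer involved */
	if (cmd.len > INT_MAX)		/* the write path takes an int length */
		die("IMAP command too long");
	ret = write_fn(cmd.buf, (int)cmd.len);
	strbuf_release(&cmd);
	return ret;
}
```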
Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When "git bisect" finds the first bad commit and shows it to the user,
it calls "git diff-tree" to do so, whose output is meant to be stable
and deliberately ignores end-user customizations.
As the output is supposed to be consumed by humans, replace this with
a call to "git show". This command honors configuration options (such
as "log.date" and "log.mailmap") and other UI improvements (renames
are detected).
Pass some hard-coded options to "git show" to make the output similar
to the one we are replacing, such as showing a patch summary only.
Reported-by: Michael Osipov <michael.osipov@innomotics.com>
Signed-off-by: Peter Krefting <peter@softwolves.pp.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In git-replay documentation, linkgit to git-rev-parse is missing the
man section, which breaks its rendering.
Add section number as done in other references to this command.
Signed-off-by: Yehezkel Bernat <YehezkelShB@gmail.com>
Acked-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
b195aa00c1 (git-compat-util: suppress unavoidable Apple-specific
deprecation warnings, 2014-12-16) started to define
__AVAILABILITY_MACROS_USES_AVAILABILITY in git-compat-util.h. On
current versions it is already defined (e.g. on macOS 14.4.1). Undefine
it before redefining it to avoid a compilation error.
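A schematic view of the fix; the conditional and value mirror the pattern in
git-compat-util.h, with unrelated lines omitted:
```
#ifdef __APPLE__
/*
 * Newer macOS SDKs (e.g. on macOS 14.4.1) may already define this macro,
 * so drop any existing definition before redefining it.
 */
#undef __AVAILABILITY_MACROS_USES_AVAILABILITY
#define __AVAILABILITY_MACROS_USES_AVAILABILITY 0
#include <AvailabilityMacros.h>
#endif
```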
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 0fea6b73f1 (Merge branch 'tb/multi-pack-verbatim-reuse', 2024-01-12)
we have introduced multi-pack verbatim reuse of objects. This series has
introduced a new BTMP chunk, which encodes information about bitmapped
objects in the multi-pack index. Starting with dab60934e3 (pack-bitmap:
pass `bitmapped_pack` struct to pack-reuse functions, 2023-12-14) we use
this information to figure out objects which we can reuse from each of
the packfiles.
One thing that we glossed over though is backwards compatibility with
repositories that do not yet have BTMP chunks in their multi-pack index.
In that case, `nth_bitmapped_pack()` would return an error, which causes
us to emit a warning followed by another error message. These warnings
are visible to users that fetch from a repository:
```
$ git fetch
...
remote: error: MIDX does not contain the BTMP chunk
remote: warning: unable to load pack: 'pack-f6bb7bd71d345ea9fe604b60cab9ba9ece54ffbe.idx', disabling pack-reuse
remote: Enumerating objects: 40, done.
remote: Counting objects: 100% (40/40), done.
remote: Compressing objects: 100% (39/39), done.
remote: Total 40 (delta 5), reused 0 (delta 0), pack-reused 0 (from 0)
...
```
While the fetch succeeds the user is left wondering what they did wrong.
Furthermore, as visible both from the warning and from the reuse stats,
pack-reuse is completely disabled in such repositories.
What is quite interesting is that this issue can even be triggered in
case `pack.allowPackReuse=single` is set, which is the default value.
One could have expected that in this case we fall back to the old logic,
which is to use the preferred packfile without consulting BTMP chunks at
all. But either we fail with the above error in case they are missing,
or we use the first pack in the multi-pack-index. The former case
disables pack-reuse altogether, whereas the latter case may result in
reusing objects from a suboptimal packfile.
Fix this issue by partially reverting the logic back to what we had
before this patch series landed. Namely, in the case where we have no
BTMP chunks or when `pack.allowPackReuse=single` is set, we use the
preferred pack instead of consulting the BTMP chunks.
Helped-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When seeking a reftable record in a block we need to position the
iterator _before_ the sought-after record so that the next call to
`block_iter_next()` would yield that record. To achieve this, the loop
that performs the linear seek has to restore the previous position once
it has found the record.
This is done by advancing two `block_iter`s: one to check whether the
next record is our sought-after record, and one that we update after
every iteration. This of course involves quite a lot of copying and also
leads to needless memory allocations.
Refactor the code to get rid of the `next` iterator and the copying this
involves. Instead, we can restore the previous offset such that the call
to `next` will return the correct record.
Besides being conceptually simpler, this also leads to a nice speedup.
The following benchmark prints 10k refs out of 100k existing refs via
`git-rev-list --no-walk`:
Benchmark 1: rev-list: print many refs (HEAD~)
  Time (mean ± σ):     170.2 ms ±   1.7 ms    [User: 86.1 ms, System: 83.6 ms]
  Range (min … max):   166.4 ms … 180.3 ms    500 runs
Benchmark 2: rev-list: print many refs (HEAD)
  Time (mean ± σ):     161.6 ms ±   1.6 ms    [User: 78.1 ms, System: 83.0 ms]
  Range (min … max):   158.4 ms … 172.3 ms    500 runs
Summary
  rev-list: print many refs (HEAD) ran
    1.05 ± 0.01 times faster than rev-list: print many refs (HEAD~)
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When calling `inflateInit()` and `inflate()`, the zlib library will
allocate several data structures for the underlying `zstream` to keep
track of various information. Thus, when inflating repeatedly, it is
possible to optimize memory allocation patterns by reusing the `zstream`
and then calling `inflateReset()` on it to prepare it for the next chunk
of data to inflate.
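A minimal sketch of the reuse pattern in plain zlib terms (not the reftable
code itself; names and buffer handling are simplified):
```
#include <zlib.h>

/* Inflate several independently compressed blocks with one z_stream. */
static int inflate_blocks(unsigned char **blocks, size_t *sizes, size_t n,
			  unsigned char *out, size_t outlen)
{
	z_stream strm = { 0 };	/* zalloc/zfree/opaque must start out NULL */
	size_t i;

	if (inflateInit(&strm) != Z_OK)
		return -1;

	for (i = 0; i < n; i++) {
		strm.next_in = blocks[i];
		strm.avail_in = (uInt)sizes[i];
		strm.next_out = out;
		strm.avail_out = (uInt)outlen;

		if (inflate(&strm, Z_FINISH) != Z_STREAM_END ||
		    inflateReset(&strm) != Z_OK) {	/* reuse instead of re-init */
			inflateEnd(&strm);
			return -1;
		}
	}
	return inflateEnd(&strm) == Z_OK ? 0 : -1;
}
```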
This is exactly what the reftable code is doing: when iterating through
reflogs we need to potentially inflate many log blocks, but we discard
the `zstream` every single time. Instead, as we reuse the `block_reader`
for each of the blocks anyway, we can initialize the `zstream` once and
then reuse it for subsequent inflations.
Refactor the code to do so, which leads to a significant reduction in
the number of allocations. The following measurements were done when
iterating through 1 million reflog entries. Before:
HEAP SUMMARY:
in use at exit: 13,473 bytes in 122 blocks
total heap usage: 23,028 allocs, 22,906 frees, 162,813,552 bytes allocated
After:
HEAP SUMMARY:
in use at exit: 13,473 bytes in 122 blocks
total heap usage: 302 allocs, 180 frees, 88,352 bytes allocated
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The reftable format stores log blocks in a compressed format. Thus,
whenever we want to read such a block we first need to decompress it.
This is done by calling the convenience function `uncompress2()` of the
zlib library, which is a simple wrapper that manages the lifecycle of
the `zstream` structure for us.
While nice for one-off inflation of data, when iterating through reflogs
we will likely end up inflating many such log blocks. This requires us
to reallocate the state of the `zstream` every single time, which adds
up over time. It would thus be great to reuse the `zstream` instead of
discarding it after every inflation.
Open-code the call to `uncompress2()` such that we can start reusing the
`zstream` in the subsequent commit. Note that our open-coded variant is
different from `uncompress2()` in two ways:
- We do not loop around `inflate()` until we have processed all input.
As our input is limited by the maximum block size, which is 16MB, we
should not hit limits of `inflate()`.
- We use `Z_FINISH` instead of `Z_NO_FLUSH`. Quoting the `inflate()`
documentation: "inflate() should normally be called until it returns
Z_STREAM_END or an error. However if all decompression is to be
performed in a single step (a single call of inflate), the parameter
flush should be set to Z_FINISH."
Furthermore, "Z_FINISH also informs inflate to not maintain a
sliding window if the stream completes, which reduces inflate's
memory footprint."
Other than that this commit is expected to be functionally equivalent
and does not yet reuse the `zstream`.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The reftable backend stores reflog entries in a compressed format and
thus needs to uncompress blocks before one can read records from it.
For each reflog block we thus have to allocate an array that we can
decompress the block contents into. This block is being discarded
whenever the table iterator moves to the next block. Consequently, we
reallocate a new array on every block, which is quite wasteful.
Refactor the code to reuse the uncompressed block data when moving the
block reader to a new block. This significantly reduces the number of
allocations when iterating through many compressed blocks. The following
measurements are done with `git reflog list` when listing 100k reflogs.
Before:
HEAP SUMMARY:
in use at exit: 13,473 bytes in 122 blocks
total heap usage: 45,755 allocs, 45,633 frees, 254,779,456 bytes allocated
After:
HEAP SUMMARY:
in use at exit: 13,473 bytes in 122 blocks
total heap usage: 23,028 allocs, 22,906 frees, 162,813,547 bytes allocated
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The table iterator has to iterate towards the next block once it has
yielded all records of the current block. This is done by creating a new
table iterator, initializing it to the next block, releasing the old
iterator and then copying over the data.
Refactor the code to instead advance the table iterator in place. This
is simpler and unlocks some optimizations in subsequent patches. Also,
it allows us to avoid some allocations.
The following measurements were taken when printing a single matching
ref out of 1 million refs. Before this change:
HEAP SUMMARY:
in use at exit: 13,603 bytes in 125 blocks
total heap usage: 7,235 allocs, 7,110 frees, 301,481 bytes allocated
After:
HEAP SUMMARY:
in use at exit: 13,603 bytes in 125 blocks
total heap usage: 315 allocs, 190 frees, 107,027 bytes allocated
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The table iterator allows the caller to iterate through all records in a
reftable table. To do so it iterates through all blocks of the desired
type one by one, where for each block it creates a new block iterator
and yields all its entries.
One of the things that is somewhat confusing in this context is who owns
the block reader that is being used to read the blocks and pass them to
the block iterator. Intuitively, as the table iterator is responsible
for iterating through the blocks, one would assume that this iterator is
also responsible for managing the lifecycle of the reader. And while it
somewhat is, the block reader is ultimately stored inside of the block
iterator.
Refactor the code such that the block reader is instead fully managed by
the table iterator. Instead of passing the reader to the block iterator,
we now only end up passing the block data to it. Besides clearing up the
lifecycle of the reader, this will also allow for better reuse of the
reader in subsequent patches.
The following benchmark prints a single matching ref out of 1 million
refs. Before:
HEAP SUMMARY:
in use at exit: 13,603 bytes in 125 blocks
total heap usage: 6,607 allocs, 6,482 frees, 509,635 bytes allocated
After:
HEAP SUMMARY:
in use at exit: 13,603 bytes in 125 blocks
total heap usage: 7,235 allocs, 7,110 frees, 301,481 bytes allocated
Note that while there are more allocation and free calls now, the
overall number of bytes allocated is significantly lower. The number of
allocations will be reduced significantly by the next patch though.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduce a new function `block_reader_release()` that releases
resources acquired by the block reader. This function will be extended
in a subsequent commit.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Function definitions and declarations of `struct block_reader` and
`struct block_iter` are somewhat mixed up, making it hard to see which
functions belong together. Rearrange them.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `block_iter_seek()` is merely a simple wrapper around
`block_reader_seek()`. Merge those two functions into a new function
`block_iter_seek_key()` that more clearly says what it is actually
doing.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `block_reader_start()` does not really apply to the block
reader, but to the block iterator. Its name is thus somewhat confusing.
Rename it to `block_iter_seek_start()` to clarify.
We will rename `block_reader_seek()` in similar spirit in the next
commit.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the advice.waitingForEditor configuration is not set to false, we
show a hint telling the user that we are waiting for their editor to
close the file when we launch an editor and wait for it to return
control back to us.
We give the message on an incomplete line, expecting that we can go back
to the beginning of the line and clear the message when the editor returns.
However, it is possible that the editor exits with an error status, in
which case we show an error message and then return to our caller. In
such a case, the error message is given where the terminal cursor
happens to be, which is most likely after the "we are waiting for your
editor" message on the same line.
Clear the line before showing the error.
While we're here, make the error message follow our CodingGuidelines.
Signed-off-by: Rubén Justo <rjusto@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The former is somewhat imprecise. The latter became out of sync with the
behavior in e814c39c2f (fast-import: refactor parsing of spaces,
2014-06-18).
Signed-off-by: Thalia Archibald <thalia@archibald.dev>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
NUL cannot appear in paths. Even disregarding filesystem path
limitations, the tree object format delimits with NUL, so such a path
cannot be encoded by Git.
When a quoted path is unquoted, it could possibly contain NUL from
"\000". Forbid it so it isn't truncated.
fast-import still has other issues with NUL, but those will be addressed
later.
Signed-off-by: Thalia Archibald <thalia@archibald.dev>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Simply saying “C-style” string quoting is imprecise, as only a subset of
C escapes are supported. Document the exact escapes.
Signed-off-by: Thalia Archibald <thalia@archibald.dev>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It describes what characters cannot be in an unquoted path, but not
their semantics. Reframe it as a definition of unquoted paths. From the
perspective of the parser, whether it starts with `"` is what defines
whether it will parse it as quoted or unquoted.
The restrictions on characters in unquoted paths (a leading `"`, LF,
and spaces) are explained in the paragraph about quoted paths. Move that
explanation to the paragraph about unquoted paths and reword it.
The restriction that the source paths of filecopy and filerename cannot
contain SP is only stated in their respective sections. Restate it in
the <path> section.
Signed-off-by: Thalia Archibald <thalia@archibald.dev>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The strbuf in `note_change_n` is to copy the remainder of `p` before
potentially invalidating it when reading the next line. However, `p` is
not used after that point. It has been unused since the function was
created in a8dd2e7d2b (fast-import: Add support for importing commit
notes, 2009-10-09) and looks to be a fossil from adapting
`file_change_m`. Remove it.
Signed-off-by: Thalia Archibald <thalia@archibald.dev>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Ever since filerename was added in f39a946a1f (Support wholesale
directory renames in fast-import, 2007-07-09) and filecopy in b6f3481bb4
(Teach fast-import to recursively copy files/directories, 2007-07-15),
both have produced an error when the destination path is empty. Later,
when support for targeting the root directory with an empty string was
added in 2794ad5244 (fast-import: Allow filemodify to set the root,
2010-10-10), this had the effect of allowing the quoted empty string
(`""`), but forbidding its unquoted variant (``). This seems to have
been intended as simple data validation for parsing two paths, rather
than a syntax restriction, because it was not extended to the other
operations.
All other occurrences of paths (in filemodify, filedelete, the source of
filecopy and filerename, and ls) allow both.
For most of this feature's lifetime, the documentation has not
prescribed the use of quoted empty strings. In e5959106d6
(Documentation/fast-import: put explanation of M 040000 <dataref> "" in
context, 2011-01-15), its documentation was changed from “`<path>` may
also be an empty string (`""`) to specify the root of the tree” to “The
root of the tree can be represented by an empty string as `<path>`”.
Thus, we should assume that some front-ends have depended on this
behavior.
Remove this restriction for the destination paths of filecopy and
filerename and change tests targeting the root to test `""` and ``.
Signed-off-by: Thalia Archibald <thalia@archibald.dev>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, one case would not write the path to the strbuf: when the
path is unquoted and at the end of the string. It was essentially
copy-on-write. However, with the logic simplification of the previous
commit, this case was eliminated and the strbuf is always populated.
Directly use the strbufs now instead of an alias.
Since this already changes all the lines that use the strbufs, rename
them from `uq` to be more descriptive. That they are unquoted is not
their most important property, so name them after what they carry.
Additionally, `file_change_m` no longer needs to copy the path before
reading inline data.
Signed-off-by: Thalia Archibald <thalia@archibald.dev>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Path parsing in fast-import is inconsistent and many unquoting errors
are suppressed or not checked.
<path> appears in the grammar in these places:
filemodify ::= 'M' SP <mode> (<dataref> | 'inline') SP <path> LF
filedelete ::= 'D' SP <path> LF
filecopy ::= 'C' SP <path> SP <path> LF
filerename ::= 'R' SP <path> SP <path> LF
ls ::= 'ls' SP <dataref> SP <path> LF
ls-commit ::= 'ls' SP <path> LF
and fast-import.c parses them in five different ways:
1. For filemodify and filedelete:
Try to unquote <path>. If it unquotes without errors, use the
unquoted version; otherwise, treat it as literal bytes to the end of
the line (including any number of SP).
2. For filecopy (source) and filerename (source):
Try to unquote <path>. If it unquotes without errors, use the
unquoted version; otherwise, treat it as literal bytes up to, but not
including, the next SP.
3. For filecopy (dest) and filerename (dest):
Like 1., but an unquoted empty string is forbidden.
4. For ls:
If <path> starts with `"`, unquote it and report parse errors;
otherwise, treat it as literal bytes to the end of the line
(including any number of SP).
5. For ls-commit:
Unquote <path> and report parse errors.
(It must start with `"` to disambiguate from ls.)
In the first three, any errors from trying to unquote a string are
suppressed, so a quoted string that contains invalid escapes would be
interpreted as literal bytes. For example, `"\xff"` would fail to
unquote (because hex escapes are not supported), and it would instead be
interpreted as the byte sequence '"', '\\', 'x', 'f', 'f', '"', which is
certainly not intended. Some front-ends erroneously use their language's
standard quoting routine instead of matching Git's, which could silently
introduce escapes that would be incorrectly parsed due to this and lead
to data corruption.
The documentation states “To use a source path that contains SP the path
must be quoted.”, so it is expected that some implementations depend on
spaces being allowed in paths in the final position. Thus we have two
documented ways to parse paths, so simplify the implementation to that.
Now we have:
1. `parse_path_eol` for filemodify, filedelete, filecopy (dest),
filerename (dest), ls, and ls-commit:
If <path> starts with `"`, unquote it and report parse errors;
otherwise, treat it as literal bytes to the end of the line
(including any number of SP).
2. `parse_path_space` for filecopy (source) and filerename (source):
If <path> starts with `"`, unquote it and report parse errors;
otherwise, treat it as literal bytes up to, but not including, the
next SP. It must be followed by SP.
There remain two special cases: The dest <path> in filecopy and rename
cannot be an unquoted empty string (this will be addressed subsequently)
and <path> in ls-commit must be quoted to disambiguate it from ls.
Signed-off-by: Thalia Archibald <thalia@archibald.dev>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Code clean-up by splitting code responsible for writing midx files
into its own file.
* tb/midx-write:
midx-write.c: use `--stdin-packs` when repacking
midx-write.c: check count of packs to repack after grouping
midx-write.c: factor out common want_included_pack() routine
midx-write: move writing-related functions from midx.c
t-prio-queue test has been cleaned up by using C99 compound
literals; this is meant to also serve as a weather-balloon to smoke
out folks with compilers who have trouble compiling code that uses
the feature.
* rs/t-prio-queue-cleanup:
t-prio-queue: simplify using compound literals
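For readers unfamiliar with the feature being smoke-tested, a tiny standalone
illustration of a C99 compound literal (not taken from the t-prio-queue change
itself):
```
struct point { int x, y; };

static int manhattan(const struct point *p)
{
	return p->x + p->y;
}

int main(void)
{
	/* the compound literal creates an unnamed struct object in place */
	return manhattan(&(struct point){ .x = 1, .y = 2 });
}
```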
Reftable code clean-up and some bugfixes.
* ps/reftable-binsearch-updates:
reftable/block: avoid decoding keys when searching restart points
reftable/record: extract function to decode key lengths
reftable/block: fix error handling when searching restart points
reftable/block: refactor binary search over restart points
reftable/refname: refactor binary search over refnames
reftable/basics: improve `binsearch()` test
reftable/basics: fix return type of `binsearch()` to be `size_t`
"git checkout/switch --detach foo", after switching to the detached
HEAD state, gave the tracking information for the 'foo' branch,
which was pointless.
Tested-by: M Hickford <mirth.hickford@gmail.com>
cf. <CAGJzqsmE9FDEBn=u3ge4LA3ha4fDbm4OWiuUbMaztwjELBd7ug@mail.gmail.com>
* jc/checkout-detach-wo-tracking-report:
checkout: omit "tracking" information on a detached HEAD