Commit graph

68921 commits

Author SHA1 Message Date
Taylor Blau
a0789512c5 The thirteenth batch
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-18 18:48:53 -05:00
Taylor Blau
e87a229d57 Merge branch 'en/sparse-checkout-design'
Design doc.

* en/sparse-checkout-design:
  sparse-checkout.txt: new document with sparse-checkout directions
2022-11-18 18:44:01 -05:00
Taylor Blau
26734da056 Merge branch 'jk/branch-delete-detached'
Fix a bug where `git branch -d` did not work on an orphaned HEAD.

* jk/branch-delete-detached:
  branch: gracefully handle '-d' on orphan HEAD
2022-11-18 18:44:00 -05:00
Taylor Blau
35a62bb579 Merge branch 'mh/credential-unrecognized-attrs'
Docfix.

* mh/credential-unrecognized-attrs:
  docs: clarify that credential discards unrecognised attributes
2022-11-18 18:43:59 -05:00
Taylor Blau
a92fce4c50 Merge branch 'vd/skip-cache-tree-update'
Avoid calling 'cache_tree_update()' when doing so would be redundant.

* vd/skip-cache-tree-update:
  rebase: use 'skip_cache_tree_update' option
  read-tree: use 'skip_cache_tree_update' option
  reset: use 'skip_cache_tree_update' option
  unpack-trees: add 'skip_cache_tree_update' option
  cache-tree: add perf test comparing update and prime
2022-11-18 18:43:56 -05:00
Taylor Blau
3f98d7ab1b Merge branch 'mh/increase-credential-cache-timeout'
Update the credential-cache documentation to provide a more realistic
example.

* mh/increase-credential-cache-timeout:
  Documentation: increase example cache timeout to 1 hour
2022-11-18 18:43:55 -05:00
Taylor Blau
35dc2cf03f Merge branch 'vd/update-refs-delete'
`git rebase --update-refs` would delete references when all `update-ref`
commands in the sequencer were removed, which has been corrected.

* vd/update-refs-delete:
  rebase --update-refs: avoid unintended ref deletion
2022-11-18 18:43:11 -05:00
Taylor Blau
ad9096881d Merge branch 'tb/repack-expire-to'
"git repack" learns to send cruft objects out of the way into
packfiles outside the repository.

* tb/repack-expire-to:
  builtin/repack.c: implement `--expire-to` for storing pruned objects
  builtin/repack.c: write cruft packs to arbitrary locations
  builtin/repack.c: pass "cruft_expiration" to `write_cruft_pack`
  builtin/repack.c: pass "out" to `prepare_pack_objects`
2022-11-18 18:43:09 -05:00
Taylor Blau
e53598a5ab Merge branch 'ab/sha-makefile-doc'
Makefile comment updates and reordering to clarify the knobs used to
choose SHA implementations.

* ab/sha-makefile-doc:
  Makefile: discuss SHAttered in *_SHA{1,256} discussion
  Makefile: document default SHA-1 backend on OSX
  Makefile & test-tool: replace "DC_SHA1" variable with a "define"
  Makefile: document SHA-1 and SHA-256 default and selection order
  Makefile: document default SHA-256 backend
  Makefile: rephrase the discussion of *_SHA1 knobs
  Makefile: create and use sections for "define" flag listing
  Makefile: correct DC_SHA1 documentation
  INSTALL: remove discussion of SHA-1 backends
  Makefile: always (re)set DC_SHA1 on fallback
2022-11-18 18:43:07 -05:00
Taylor Blau
69c1d609ba Merge branch 'ab/misc-hook-submodule-run-command'
Various test updates.

* ab/misc-hook-submodule-run-command:
  run-command tests: test stdout of run_command_parallel()
  submodule tests: reset "trace.out" between "grep" invocations
  hook tests: fix redirection logic error in 96e7225b31
2022-11-18 18:43:04 -05:00
Eric Wong
7025f54c40 delta-islands: free island-related data after use
In my use case involving 771 islands of Linux on kernel.org,
this reduces memory usage by around 25MB.  The bulk of that
comes from free_remote_islands, since free_config_regexes only
saves around 40k.

This memory is saved early in the memory-intensive pack process,
making it available for the remainder of the long process.
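
For context, delta islands are driven by configuration such as the
`pack.island` regexes; a rough sketch of a per-fork island setup (the
ref namespace below is illustrative, not the kernel.org configuration)
looks like:

  # hypothetical island configuration: one island per fork namespace
  git config --add pack.island 'refs/virtual/([0-9]+)/heads/'
  git config --add pack.island 'refs/virtual/([0-9]+)/tags/'
  # repack with delta islands enabled (-i passes --delta-islands along)
  git repack -a -d -i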

Signed-off-by: Eric Wong <e@80x24.org>
Co-authored-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-18 18:30:49 -05:00
Jeff King
8db2dad7a0 parse_object(): check on-disk type of suspected blob
In parse_object(), we try to handle blobs by streaming rather than
loading them entirely into memory. The most common case here will be
that we haven't seen the object yet and check oid_object_info(), which
tells us we have a blob.

But we trigger this code in one other case: when we have an in-memory
object struct with type OBJ_BLOB (and without its "parsed" flag set,
since otherwise we'd return early from the function). This indicates
that some other part of the code suspected we have a blob (e.g., it was
mentioned by a tree or tag) but we haven't yet looked at the on-disk
copy.

In this case before hitting the streaming path, we check if we have the
object on-disk at all. This is mostly pointless extra work, as the
streaming path would complain if it couldn't open the object (albeit
with the message "hash mismatch", which is a little misleading).

But it's also insufficient to catch all problems. The streaming code
will only tell us "yes, the on-disk object matches the oid". But it
doesn't actually confirm that what we found was indeed a blob, and
neither does repo_has_object_file().

One way to improve this would be to teach stream_object_signature() to
check the type (either by returning it to us to check, or taking an
"expected" type). But there's an even simpler fix here: if we suspect
the object is a blob, just call oid_object_info() to confirm that we
have it on-disk, and that it really is a blob.

This is slightly less efficient than teaching stream_object_signature()
to do it (since it has to open the object already). But this case very
rarely comes up. In practice, we usually don't have any clue what the
type is, in which case we already call oid_object_info(). This
"suspected" case happens only when some other code created an object
struct but didn't actually parse the blob, which is actually tricky to
trigger at all (see the discussion of the test below).

I reworked the conditional a bit so that instead of:

  if ((suspected_blob && oid_object_info() == OBJ_BLOB) ||
      (no_clue && oid_object_info() == OBJ_BLOB))

we have the simpler:

  if ((suspected_blob || no_clue) && oid_object_info() == OBJ_BLOB)

This is shorter, but also reflects what we really want to say, which is
"have we ruled out this being a blob; if not, check it on-disk".

In either case, if oid_object_info() fails to tell us it's a blob, we'll
skip the streaming code path and call repo_read_object_file(), just as
before. And if we really do have a mismatch with the existing object
struct, we'll eventually call lookup_commit(), etc, via
parse_object_buffer(), which will complain that it doesn't match our
existing obj->type.

So this fixes one of the lingering expect_failure cases from 0616617c7e
(t: introduce tests for unexpected object types, 2019-04-09).  That test
works by peeling a tag that claims to point to a blob (triggering us to
create the struct), but really points to something else, which we later
discover when we call parse_object() as part of the actual traversal.
Prior to this commit, we'd quietly check the sha1 and mark the blob as
"parsed". Now we correctly complain about the mismatch.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-18 13:59:31 -05:00
Jeff King
04fb96219a parse_object(): drop extra "has" check before checking object type
When parsing an object of unknown type, we check to see if it's a blob,
so we can use our streaming code path. This uses oid_object_info() to
check the type, but before doing so we call repo_has_object_file(). The
latter is pointless, as oid_object_info() will already fail if the
object is missing. Checking it ahead of time just complicates the code
and is a waste of resources (albeit small).

Let's drop the redundant check.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-18 13:59:31 -05:00
Patrick Steinhardt
bcec6780b2 receive-pack: only use visible refs for connectivity check
When serving a push, git-receive-pack(1) needs to verify that the
packfile sent by the client contains all objects that are required by
the updated references. This connectivity check works by marking all
preexisting references as uninteresting and using the new reference tips
as starting point for a graph walk.

Marking all preexisting references as uninteresting can be a problem
when it comes to performance. Git forges tend to do internal bookkeeping
to keep alive sets of objects for internal use or make them easy to find
via certain references. These references are typically hidden away from
the user so that they are neither advertised nor writeable. At GitLab,
we have one particular repository that contains a total of 7 million
references, of which 6.8 million are indeed internal references. With
the current connectivity check we are forced to load all these
references in order to mark them as uninteresting, and this alone takes
around 15 seconds to compute.

We can optimize this by only taking the set of visible refs into account
when marking objects as uninteresting. This means that we may now walk
more objects before we hit one that is marked as uninteresting. But it
is rather unlikely that clients push objects that make large parts of
the previously hidden object graph reachable, whereas the common case is
to push incremental changes that build on top of the visible object
graph.
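
For reference, hiding such internal refs in the first place is done via
the `transfer.hideRefs` (or `receive.hideRefs`) configuration; a minimal
sketch, assuming the internal refs live under a hypothetical
`refs/internal/` namespace:

  # hide everything under refs/internal/ from advertisement and pushes
  git config --system --add transfer.hideRefs refs/internal/
  # or, to hide them from receive-pack only
  git config --system --add receive.hideRefs refs/internal/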

This provides a huge boost to performance in the mentioned repository,
where the vast majority of refs are hidden. Pushing a new commit into
this repo with `transfer.hideRefs` set up to hide 6.8 million of its 7
million refs, as configured in Gitaly, leads to a 4.5-fold speedup:

    Benchmark 1: main
      Time (mean ± σ):     30.977 s ±  0.157 s    [User: 30.226 s, System: 1.083 s]
      Range (min … max):   30.796 s … 31.071 s    3 runs

    Benchmark 2: pks-connectivity-check-hide-refs
      Time (mean ± σ):      6.799 s ±  0.063 s    [User: 6.803 s, System: 0.354 s]
      Range (min … max):    6.729 s …  6.850 s    3 runs

    Summary
      'pks-connectivity-check-hide-refs' ran
        4.56 ± 0.05 times faster than 'main'

As we mostly go through the same codepaths as before even when there
are no hidden refs at all, there is no change in performance when no
refs are hidden:

    Benchmark 1: main
      Time (mean ± σ):     48.188 s ±  0.432 s    [User: 49.326 s, System: 5.009 s]
      Range (min … max):   47.706 s … 48.539 s    3 runs

    Benchmark 2: pks-connectivity-check-hide-refs
      Time (mean ± σ):     48.027 s ±  0.500 s    [User: 48.934 s, System: 5.025 s]
      Range (min … max):   47.504 s … 48.500 s    3 runs

    Summary
      'pks-connectivity-check-hide-refs' ran
        1.00 ± 0.01 times faster than 'main'

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-17 16:22:52 -05:00
Patrick Steinhardt
5ff36c9b6b rev-parse: add --exclude-hidden= option
Add a new `--exclude-hidden=` option that is similar to the one we just
added to git-rev-list(1). Given a section name `uploadpack` or `receive`
as argument, it causes us to exclude all references that would be hidden
by the respective `$section.hideRefs` configuration.
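
A usage sketch (the `refs/internal/` prefix is a hypothetical hidden
namespace):

  # print the tips of only those refs that a pusher would see advertised
  git -c receive.hideRefs=refs/internal/ \
    rev-parse --exclude-hidden=receive --all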

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-17 16:22:52 -05:00
Patrick Steinhardt
8c1bc2a71a revision: add new parameter to exclude hidden refs
Users can optionally hide refs from remote users in git-upload-pack(1),
git-receive-pack(1) and others via the `transfer.hideRefs` configuration,
but there is no easy way to obtain the list of all visible or hidden
refs right now. We will need exactly that, though, for a performance
improvement in our connectivity check.

Add a new option `--exclude-hidden=` that excludes any hidden refs from
the next pseudo-ref like `--all` or `--branches`.
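
For illustration, a sketch of the intended use with git-rev-list(1),
again assuming a hypothetical `refs/internal/` hidden namespace (note
that `--exclude-hidden=` applies to the pseudo-ref that follows it):

  # count commits reachable from the refs upload-pack would advertise
  git -c uploadpack.hideRefs=refs/internal/ \
    rev-list --exclude-hidden=uploadpack --all --count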

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-17 16:22:52 -05:00
Patrick Steinhardt
1e9f273ac0 revision: introduce struct to handle exclusions
The functions that handle exclusion of refs work on a single string
list. We're about to add a second mechanism for excluding refs though,
and it makes sense to reuse much of the same architecture for both kinds
of exclusion.

Introduce a new `struct ref_exclusions` that encapsulates all the logic
related to excluding refs and move the `struct string_list` that holds
all wildmatch patterns of excluded refs into it. Rename functions that
operate on this struct to match its name.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-17 16:22:52 -05:00
Patrick Steinhardt
05b9425960 revision: move together exclusion-related functions
Move together the definitions of the functions that handle exclusion of
refs so that related functionality sits in a single place.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-17 16:22:51 -05:00
Patrick Steinhardt
9b67eb6fbe refs: get rid of global list of hidden refs
We're about to add a new argument to git-rev-list(1) that allows it to
add all references that are visible when taking `transfer.hideRefs` et
al. into account. This will require us to potentially parse multiple sets
of hidden refs, which is not easily possible right now as there is only
a single, global instance of the list of parsed hidden refs.

Refactor `parse_hide_refs_config()` and `ref_is_hidden()` so that both
take the list of hidden references as input and adjust callers to keep a
local list, instead. This allows us to easily use multiple hidden-ref
lists. Furthermore, it allows us to properly free this list before we
exit.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-17 16:22:51 -05:00
Patrick Steinhardt
5eeb9aa208 refs: fix memory leak when parsing hideRefs config
When parsing the hideRefs configuration, we first duplicate the config
value so that we can modify it. We then subsequently append it to the
`hide_refs` string list, which is initialized with `strdup_strings`
enabled. As a consequence we again reallocate the string, but never
free the first duplicate and thus have a memory leak.

While we never clean up the static `hide_refs` variable anyway, this is
no excuse to make the leak worse by leaking every value twice. We are
also about to change the way this variable will be handled so that we do
indeed start to clean it up. So let's fix the memory leak by using
`string_list_append_nodup()` so that we pass ownership of the allocated
string to `hide_refs`.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-17 16:22:51 -05:00
Michael J Gruber
3c9b01f0bf notes: avoid empty line in template
When `git notes` prepares the template, it adds a blank line between
the comment header and the content:

>
> #
> # Write/edit the notes for the following object:
>
> # commit 0f3c55d4c2
> # etc

This is wrong structurally because that blank line is part of the comment,
too, and thus should be commented. Also, it throws off some positioning
strategies of editors and plugins, and it differs from how we do commit
templates.

Change this to follow the standard set by `git commit`:

>
> #
> # Write/edit the notes for the following object:
> #
> # commit 0f3c55d4c2
>

Tests pass unchanged after this code change.

Signed-off-by: Michael J Gruber <git@grubix.eu>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-16 14:57:32 -05:00
Ævar Arnfjörð Bjarmason
23fb328c8d t7610: use "file:///dev/null", not "/dev/null", fixes MinGW
On MinGW the "/dev/null" is translated to "nul" on command-lines, even
though as in this case it'll never end up referring to an actual file.

So on Windows the fix for the previous "example.com" timeout issue in
8354cf752e (t7610: fix flaky timeout issue, don't clone from
example.com, 2022-11-05) would yield:

  fatal: repo URL: 'nul' must be absolute or begin with ./|../

Let's evade this yet again by prefixing the path with "file://", which
makes the test pass in the Windows CI.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-15 20:05:02 -05:00
Taylor Blau
03744bbdc4 builtin/gc.c: fix use-after-free in maintenance_unregister()
While trying to fix a move based on an uninitialized value (along with a
declaration after the first statement), be0fd57228
(maintenance --unregister: fix uninit'd data use &
-Wdeclaration-after-statement, 2022-11-15) unintentionally introduced a
use-after-free.

The problem arises when `maintenance_unregister()` sees a non-NULL
`config_file` string and thus tries to call
git_configset_get_value_multi() to look up the corresponding values.

We store the result off, and then call git_configset_clear(), which
frees the pointer that we just stored. We then try to read that
now-freed pointer a few lines below, and there we have our
use-after-free:

    $ ./t7900-maintenance.sh -vxi --run=23 --valgrind
    [...]
    + git maintenance unregister --config-file ./other
    ==3048727== Invalid read of size 8
    ==3048727==    at 0x1869CA: maintenance_unregister (gc.c:1590)
    ==3048727==    by 0x188F42: cmd_maintenance (gc.c:2651)
    ==3048727==    by 0x128C62: run_builtin (git.c:466)
    ==3048727==    by 0x12907E: handle_builtin (git.c:721)
    ==3048727==    by 0x1292EC: run_argv (git.c:788)
    ==3048727==    by 0x12988E: cmd_main (git.c:926)
    ==3048727==    by 0x21ED39: main (common-main.c:57)
    ==3048727==  Address 0x4b38bc8 is 24 bytes inside a block of size 64 free'd
    ==3048727==    at 0x484617B: free (vg_replace_malloc.c:872)
    ==3048727==    by 0x2D207E: free_individual_entries (hashmap.c:188)
    ==3048727==    by 0x2D2153: hashmap_clear_ (hashmap.c:207)
    ==3048727==    by 0x270B5C: git_configset_clear (config.c:2375)
    ==3048727==    by 0x1869AC: maintenance_unregister (gc.c:1585)
    ==3048727==    by 0x188F42: cmd_maintenance (gc.c:2651)
    ==3048727==    by 0x128C62: run_builtin (git.c:466)
    ==3048727==    by 0x12907E: handle_builtin (git.c:721)
    ==3048727==    by 0x1292EC: run_argv (git.c:788)
    ==3048727==    by 0x12988E: cmd_main (git.c:926)
    ==3048727==    by 0x21ED39: main (common-main.c:57)
    [...]

Resolve this via a partial revert of be0fd57228. The config_set struct
now gets a zero initialization, which makes free()-ing it a noop even
without calling git_configset_init(). When we do initialize it to a
non-zero value, it is only free()'d after our last read of `list`.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-15 13:56:11 -05:00
Ævar Arnfjörð Bjarmason
be0fd57228 maintenance --unregister: fix uninit'd data use & -Wdeclaration-after-statement
Since 1f80129d61 (maintenance: add option to register in a specific config,
2022-11-09) we've been unable to build with "DEVELOPER=1" without
"DEVOPTS=no-error", as the added code triggers a
"-Wdeclaration-after-statement" warning.

And worse than that, the data handed to git_configset_clear() is
uninitialized, as can be spotted with e.g.:

	./t7900-maintenance.sh -vixd --run=23 --valgrind
	[...]
	+ git maintenance unregister --force
	Conditional jump or move depends on uninitialised value(s)
	   at 0x6B5F1E: git_configset_clear (config.c:2367)
	   by 0x4BA64E: maintenance_unregister (gc.c:1619)
	   by 0x4BD278: cmd_maintenance (gc.c:2650)
	   by 0x409905: run_builtin (git.c:466)
	   by 0x40A21C: handle_builtin (git.c:721)
	   by 0x40A58E: run_argv (git.c:788)
	   by 0x40AF68: cmd_main (git.c:926)
	   by 0x5D39FE: main (common-main.c:57)
	 Uninitialised value was created by a stack allocation
	   at 0x4BA22C: maintenance_unregister (gc.c:1557)

Let's fix both of these issues, and also narrow the scope of the
variable to the "if" statement it's used in, to make it obvious where
it's used.

Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-15 12:31:53 -05:00
Ronan Pigott
1f80129d61 maintenance: add option to register in a specific config
maintenance register currently records the maintenance repo exclusively
within the user's global configuration, but other configuration files
may be relevant when running maintenance if they are included from the
global config. This option allows the user to choose where maintenance
repos are recorded.
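
A hedged usage sketch, assuming the new option is spelled
`--config-file` as in the `unregister` invocation quoted elsewhere in
this log, and using a hypothetical include file:

  # keep maintenance repos in a separate file included from the global config
  git config --global include.path '~/.gitconfig-maintenance'
  git maintenance register --config-file ~/.gitconfig-maintenance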

Signed-off-by: Ronan Pigott <ronan@rjp.ie>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-14 22:39:25 -05:00
Ronan Pigott
13d5bbdf72 for-each-repo: interpolate repo path arguments
This is a quality of life change for git-maintenance, so repos can be
recorded with the tilde syntax. The register subcommand will not record
repos in this format by default.
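
A sketch of what the tilde syntax enables (the repo path is
illustrative):

  # record a repository with a home-relative path...
  git config --global --add maintenance.repo '~/src/git'
  # ...which for-each-repo now interpolates before running the command in it
  git for-each-repo --config=maintenance.repo maintenance run --schedule=hourly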

Signed-off-by: Ronan Pigott <ronan@rjp.ie>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-14 22:39:25 -05:00
Taylor Blau
eea7033409 The twelfth batch
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-14 19:56:07 -05:00
Taylor Blau
3c5d0ce3f5 Merge branch 'vh/my-first-contribution-typo'
Documentation fix.

* vh/my-first-contribution-typo:
  Documentation: fix typo
2022-11-14 19:53:55 -05:00
Taylor Blau
859899ddc1 Merge branch 'ks/partialclone-casing'
Documentation fix.

* ks/partialclone-casing:
  repository-version.txt: partialClone casing change
2022-11-14 19:53:43 -05:00
Taylor Blau
dc8be3971c Merge branch 'mh/password-can-be-pat'
Documentation update to git-credential(1).

* mh/password-can-be-pat:
  Documentation/gitcredentials.txt: mention password alternatives
2022-11-14 19:53:42 -05:00
Taylor Blau
69eb1be693 Merge branch 'js/ci-set-output'
Update the actions/github-script dependency in CI to avoid a
deprecation warning.

* js/ci-set-output:
  ci: use a newer `github-script` version
2022-11-14 19:53:38 -05:00
Taylor Blau
311bf13147 Merge branch 'ab/rev-info-init'
Progress on being able to initialize a rev_info struct with a macro.

* ab/rev-info-init:
  revisions API: extend the nascent REV_INFO_INIT macro
2022-11-14 19:53:37 -05:00
Taylor Blau
d0c3853034 Merge branch 'al/trace2-clearing-skip-worktree'
Add trace2 counters to the region to clear skip worktree bits in a
sparse checkout.

* al/trace2-clearing-skip-worktree:
  index: raise a bug if the index is materialised more than once
  index: add trace2 region for clear skip worktree
2022-11-14 19:53:34 -05:00
Taylor Blau
561f3948a5 Merge branch 'do/modernize-t7001'
Modernize test script to avoid "test -f" and friends.

* do/modernize-t7001:
  t7001-mv.sh: modernizing test script using functions
2022-11-14 19:53:31 -05:00
M Hickford
dabb9d875f Docs: describe how a credential-generating helper works
Previously the docs only described storage helpers.

A concrete example: Git Credential Manager can generate credentials
for GitHub and GitLab via OAuth.
https://github.com/GitCredentialManager/git-credential-manager
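
For background, a generating helper speaks the same attribute-based
protocol as a storage helper; a rough sketch of the exchange driven by
`git credential fill` (the values are placeholders):

  $ printf 'protocol=https\nhost=github.com\n\n' | git credential fill
  protocol=https
  host=github.com
  username=<username-from-helper>
  password=<oauth-token-generated-by-helper>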

Signed-off-by: M Hickford <mirth.hickford@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-14 18:18:59 -05:00
Vlad-Stefan Harbuz
c5353c4552 Documentation: fix typo
Signed-off-by: Vlad-Stefan Harbuz <vlad@vladh.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-14 18:14:58 -05:00
Glen Choo
b637a41ebe http: redact curl h2h3 headers in info
With GIT_TRACE_CURL=1 or GIT_CURL_VERBOSE=1, sensitive headers like
"Authorization" and "Cookie" get redacted. However, since [1], curl's
h2h3 module (invoked when using HTTP/2) also prints headers in its
"info", which don't get redacted. For example,

  echo 'github.com	TRUE	/	FALSE	1698960413304	o	foo=bar' >cookiefile &&
  GIT_TRACE_CURL=1 GIT_TRACE_CURL_NO_DATA=1 git \
    -c 'http.cookiefile=cookiefile' \
    -c 'http.version=' \
    ls-remote https://github.com/git/git refs/heads/main 2>output &&
  grep 'cookie' output

produces output like:

  23:04:16.920495 http.c:678              == Info: h2h3 [cookie: o=foo=bar]
  23:04:16.920562 http.c:637              => Send header: cookie: o=<redacted>

Teach http.c to check for h2h3 headers in info and redact them using the
existing header redaction logic. This fixes the broken redaction logic
that we noted in the previous commit, so mark the redaction tests as
passing under HTTP2.

[1] f8c3724aa9

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-14 17:42:46 -05:00
Jeff King
73c49a4474 t: run t5551 tests with both HTTP and HTTP/2
We have occasionally seen bugs that affect Git running only against an
HTTP/2 web server, not an HTTP one. For instance, b66c77a64e (http:
match headers case-insensitively when redacting, 2021-09-22). But since
we have no test coverage using HTTP/2, we only uncover these bugs in the
wild.

That commit gives a recipe for converting our Apache setup to support
HTTP/2, but:

  - it's not necessarily portable

  - we don't want to just test HTTP/2; we really want to do a variety of
    basic tests for _both_ protocols

This patch handles both problems by running a duplicate of t5551
(labeled as t5559 here) with an alternate-universe setup that enables
HTTP/2. So we'll continue to run t5551 as before, but run the same
battery of tests again with HTTP/2. If HTTP/2 isn't supported on a given
platform, then t5559 should bail during the webserver setup, and
gracefully skip all tests (unless GIT_TEST_HTTPD has been changed from
"auto" to "yes", where the point is to complain when webserver setup
fails).

In theory other http-related test scripts could benefit from the same
duplication, but doing t5551 should give us a reasonable check of basic
functionality, and would have caught both bugs we've seen in the wild
with HTTP/2.

A few notes on the implementation:

  - a script enables the server side config by calling enable_http2
    before starting the webserver. This avoids even trying to load any
    HTTP/2 config for t5551 (which is what lets it keep working with
    regular HTTP even on systems that don't support it). This also sets
    a prereq which can be used by individual tests.

  - As discussed in b66c77a64e, the http2 module isn't compatible with
    the "prefork" mpm, so we need to pick something else. I chose
    "event" here, which works on my Debian system, but it's possible
    there are platforms which would prefer something else. We can adjust
    that later if somebody finds such a platform.

  - The test "large fetch-pack requests can be sent using chunked
    encoding" makes sure we use a chunked transfer-encoding by looking
    for that header in the trace. But since HTTP/2 has its own streaming
    mechanisms, we won't find such a header. We could skip the test
    entirely by marking it with !HTTP2. But there's some value in making
    sure that the fetch itself succeeded. So instead, we'll confirm that
    either we're using HTTP2 _or_ we saw the expected chunked header.

  - the redaction tests fail under HTTP/2 with recent versions of curl.
    This is a bug! I've marked them with !HTTP2 here to skip them under
    t5559 for the moment. Using test_expect_failure would be more
    appropriate, but would require a bunch of boilerplate. Since we'll
    be fixing them momentarily, let's just skip them for now to keep the
    test suite bisectable, and we can re-enable them in the commit that
    fixes the bug.

  - one alternative layout would be to push most of t5551 into a
    lib-t5551.sh script, then source it from both t5551 and t5559.
    Keeping t5551 intact seemed a little simpler, as it's one less level
    of indirection for people fixing bugs/regressions in the non-HTTP/2
    tests.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-14 17:42:46 -05:00
Jonathan Tan
e62f779ae6 Doc: document push.recurseSubmodules=only
Git learned to push submodules without pushing the superproject when
the user specifies --recurse-submodules=only, via 6c656c3fe4
("submodules: add RECURSE_SUBMODULES_ONLY value", 2016-12-20) and
225e8bf778 ("push: add option to push only submodules", 2016-12-20).
For users who use this feature regularly, it is desirable to have an
equivalent configuration.

It turns out that such a configuration (push.recurseSubmodules=only) is
already supported, even though it is neither documented nor mentioned
in the commit messages, due to the way the --recurse-submodules=only
feature was implemented (a function used to parse --recurse-submodules
was updated to support "only", but that same function is used to parse
push.recurseSubmodules too). What is left is to document it and test it,
which is what this commit does.
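
A minimal sketch of the now-documented configuration and its
command-line equivalent:

  # push only the submodules, not the superproject itself
  git config push.recurseSubmodules only
  git push
  # one-shot equivalent on the command line
  git push --recurse-submodules=only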

There is a possible point of confusion when recursing into a submodule
that itself has the push.recurseSubmodules=only configuration, because
if a repository has only its submodules pushed and not itself, its
superproject can never be pushed. Therefore, treat such configurations
as being "on-demand", and print a warning message.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-14 16:55:50 -05:00
M Hickford
7fd54b6238 docs: clarify that credential discards unrecognised attributes
It was previously unclear how unrecognised attributes are handled.

Signed-off-by: M Hickford <mirth.hickford@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-12 23:57:34 -05:00
Johannes Schindelin
a90085b68c tests(scalar): tighten the stale scalar.repo test some
As pointed out by Stolee, the previous incarnation of this test case was
not stringent enough: we want to verify that _only_ the stale entries
are removed (previously, the test case would have succeeded even if all
entries had been removed).

Let's rectify this and verify that the other entries are left intact.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 17:24:36 -05:00
Kousik Sanagavarapu
29c550f0af repository-version.txt: partialClone casing change
Remotes are considered "promisor" if extensions.partialClone and some
other configuration variables are set. The casing of this variable in
Documentation/technical/repository-version.txt is incorrect and may
cause confusion. This change corrects the casing.

Signed-off-by: Kousik Sanagavarapu <five231003@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 17:23:12 -05:00
Ævar Arnfjörð Bjarmason
0d12792f5f Makefile: don't create a ".build/.build/" for cocci, fix output
Fix a couple of issues in the recently merged 0f3c55d4c2 (Merge
branch 'ab/coccicheck-incremental' into next, 2022-11-08):

In copying over the "contrib/coccinelle/" rules to
".build/contrib/coccinelle/" we inadvertently ended up with a
".build/.build/contrib/coccinelle/" as well. We'd generate the
per-file patches in the latter, and keep the rule and overall result
in the former. E.g. running:

	make contrib/coccinelle/free.cocci.patch COCCI_SOURCES="attr.c grep.c"

Would, per "tree -a .build" yield the following result:

	.build
	├── .build
	│   └── contrib
	│       └── coccinelle
	│           └── free.cocci.patch
	│               ├── attr.c
	│               ├── attr.c.log
	│               ├── grep.c
	│               └── grep.c.log
	└── contrib
	    └── coccinelle
	        ├── FOUND_H_SOURCES
	        ├── free.cocci
	        └── free.cocci.patch

Now we'll instead generate all of our files in
".build/contrib/coccinelle/". Fixing this required renaming the
directory where we keep our per-file patches, as we'd otherwise
conflict with the result.

Now the per-file patch directory is named e.g. "free.cocci.d". And the
end result will now be:

	.build
	└── contrib
	    └── coccinelle
	        ├── FOUND_H_SOURCES
	        ├── free.cocci
	        ├── free.cocci.d
	        │   ├── attr.c.patch
	        │   ├── attr.c.patch.log
	        │   ├── grep.c.patch
	        │   └── grep.c.patch.log
	        └── free.cocci.patch

The per-file patches now have a ".patch" file suffix, which fixes
another issue reported against 0f3c55d4c2: The summary output was
confusing. Before for the "make" command above we'd emit:

	[...]
	MKDIR -p .build/contrib/coccinelle
	CP contrib/coccinelle/free.cocci .build/contrib/coccinelle/free.cocci
	GEN .build/contrib/coccinelle/FOUND_H_SOURCES
	MKDIR -p .build/.build/contrib/coccinelle/free.cocci.patch
	SPATCH .build/.build/contrib/coccinelle/free.cocci.patch/grep.c
	SPATCH .build/.build/contrib/coccinelle/free.cocci.patch/attr.c
	SPATCH CAT $^ >.build/contrib/coccinelle/free.cocci.patch
	CP .build/contrib/coccinelle/free.cocci.patch contrib/coccinelle/free.cocci.patch

But now we'll instead emit (identical output at the start omitted):

	[...]
	MKDIR -p .build/contrib/coccinelle/free.cocci.d
	SPATCH grep.c >.build/contrib/coccinelle/free.cocci.d/grep.c.patch
	SPATCH attr.c >.build/contrib/coccinelle/free.cocci.d/attr.c.patch
	SPATCH CAT .build/contrib/coccinelle/free.cocci.d/**.patch >.build/contrib/coccinelle/free.cocci.patch
	CP .build/contrib/coccinelle/free.cocci.patch contrib/coccinelle/free.cocci.patch

I.e. we have an "SPATCH" line that makes it clear that we're running
against the "{attr,grep}.c" file. The "SPATCH CAT" is then altered to
correspond to it, showing that we're concatenating the
"free.cocci.d/**.patch" files into one generated "free.cocci.patch" at
the end.

Reported-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 17:21:45 -05:00
Đoàn Trần Công Danh
e9011b6092 bisect--helper: parse subcommand with OPT_SUBCOMMAND
As it is, we're parsing subcommands with OPT_CMDMODE, which will
continue to parse more options even after the subcommand has been found.

When we're running "git bisect run" with a command that expecting
a "--log" or "--no-log" arguments, or one of those "--bisect-..."
arguments, bisect--helper may mistakenly think those options are
bisect--helper's option.
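
For example, in an invocation along these lines (the test script name is
hypothetical), the trailing option is meant for the script but could be
picked up by bisect--helper instead:

  git bisect run ./check.sh --no-log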

We could fix those problems by passing "--" when calling from
git-bisect.sh and skipping that "--" in bisect--helper. However, that
may interfere with the user's own "--".

Let's parse the subcommand with OPT_SUBCOMMAND, since that API was born
for this specific use case.

Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: Đoàn Trần Công Danh <congdanhqx@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 17:04:57 -05:00
Đoàn Trần Công Danh
464ce0aba8 bisect--helper: move all subcommands into their own functions
In a later change, we will use OPT_SUBCOMMAND to parse subcommands in
order to avoid consuming non-option arguments.

Since OPT_SUBCOMMAND needs a function pointer for each subcommand to
operate on, let's move each subcommand into its own function now.

Signed-off-by: Đoàn Trần Công Danh <congdanhqx@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 17:04:54 -05:00
Đoàn Trần Công Danh
58786d73ba bisect--helper: remove unused options
'git-bisect.sh' used to have a 'bisect_next_check' function to check if we have
both good/bad, old/new terms set or not.  In commit 129a6cf344
(bisect--helper: `bisect_next_check` shell function in C, 2019-01-02),
a subcommand for bisect--helper was introduced to port the check to C.
Since d1bbbe45df (bisect--helper: reimplement `bisect_run` shell
function in C, 2021-09-13), all users of 'bisect_next_check' were
re-implemented in C; the subcommand was no longer used, but we forgot
to remove '--bisect-next-check'.

'git-bisect.sh' also used to have a 'bisect_write' function, whose
third positional parameter was a "nolog" flag.  This flag was only used
when 'bisect_start' invoked 'bisect_write' to write the starting good
and bad revisions.  Then 0f30233a11 (bisect--helper: `bisect_write`
shell function in C, 2019-01-02) ported it to C as a command mode of
'bisect--helper', which (incorrectly) added the '--no-log' option,
and converted the only place ('bisect_start') that calls 'bisect_write'
with 'nolog' to 'git bisect--helper --bisect-write' with 'nolog'
instead of '--no-log'. Since 'bisect--helper' has command modes, not
subcommands, all other command modes see and handle that option as well.
This bogus state didn't last long, however, because in the same patch
series 06f5608c14 (bisect--helper: `bisect_start` shell function
partially in C, 2019-01-02) the C reimplementation of bisect_start()
started calling the bisect_write() C function, this time with the
right 'nolog' function parameter. From then on there was no need for
the '--no-log' option in 'bisect--helper'. Eventually all bisect
subcommands were ported to C as 'bisect--helper' command modes, each
calling the bisect_write() C function instead, but when the
'--bisect-write' command mode was removed in 68efed8c8a
(bisect--helper: retire `--bisect-write` subcommand, 2021-02-03), that
commit forgot to remove the '--no-log' option. The '--no-log' option
had never been used, and it's still unused now.

Let's remove --bisect-next-check and --no-log from option parsing.

Signed-off-by: Đoàn Trần Công Danh <congdanhqx@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 17:04:52 -05:00
Eric Sunshine
48d69d8f2f chainlint: prefix annotated test definition with line numbers
When chainlint detects problems in a test, it prints out the name of the
test script, the name of the problematic test, and a copy of the test
definition with "?!FOO?!" annotations inserted at the locations where
problems were detected. Taken together this information is sufficient
for the test author to identify the problematic code in the original
test definition. However, in a lengthy script or a lengthy test
definition, the author may still end up using the editor's search
feature to home in on the exact problem location.

To further assist the test author, display line numbers along with the
annotated test definition, thus allowing the author to jump directly to
each problematic line.

Suggested-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 16:56:21 -05:00
Eric Sunshine
bf42f0a030 chainlint: latch line numbers at which each token starts and ends
When chainlint detects problems in a test, it prints out the name of the
test script, the name of the problematic test, and a copy of the test
definition with "?!FOO?!" annotations inserted at the locations where
problems were detected. Taken together this information is sufficient
for the test author to identify the problematic code in the original
test definition. However, in a lengthy script or a lengthy test
definition, the author may still end up using the editor's search
feature to home in on the exact problem location.

To further assist the test author, an upcoming change will display line
numbers along with the annotated test definition, thus allowing the
author to jump directly to each problematic line. As preparation,
upgrade Lexer to latch the line numbers at which each token starts and
ends, and return that information with the token itself.

Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 16:56:21 -05:00
Eric Sunshine
5451877f87 chainlint: sidestep impoverished macOS "terminfo"
Although the macOS Terminal.app is "xterm"-compatible, its corresponding
"terminfo" entries -- such as "xterm", "xterm-256color", and
"xterm-new"[1] -- neglect to mention capabilities which Terminal.app
actually supports (such as "dim text"). This oversight on Apple's part
ends up penalizing users of "good citizen" console programs which
consult "terminfo" to tailor their output based upon reported terminal
capabilities (as opposed to programs which assume that the terminal
supports ANSI codes). The same problem is present in other Apple
"terminfo" entries, such as "nsterm"[2], with which macOS Terminal.app
may be configured.

Sidestep this Apple problem by imbuing get_colors() with specific
knowledge of capabilities common to "xterm" and "nsterm", rather than
trusting "terminfo" to report them correctly. Although hard-coding such
knowledge is ugly, "xterm" support is nearly ubiquitous these days, and
Git itself sets a precedent by assuming support for ANSI color codes. For
other terminal types, fall back to querying "terminfo" via `tput` as
usual.

FOOTNOTES

[1] iTerm2 FAQ suggests "xterm-new": https://iterm2.com/faq.html

[2] Neovim documentation recommends terminal type "nsterm" with
    Terminal.app: https://neovim.io/doc/user/term.html#terminfo

Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-11 16:56:21 -05:00
Phillip Wood
688d82f254 sequencer: tighten label lookups
The `label` command creates a ref refs/rewritten/<label> that the
`reset` and `merge` commands resolve by calling lookup_label(). That
uses lookup_commit_reference_by_name() to look up the label ref. As
lookup_commit_reference_by_name() uses the dwim rules when looking up
the label, it will look for a branch named
refs/heads/refs/rewritten/<label> and return that instead of an error if
the branch exists and the label does not. Fix this by using read_ref()
followed by lookup_commit_object() when looking up labels.
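
A sketch of the ambiguity being closed (the branch name is contrived):

  # a branch whose name collides with the label namespace
  git branch 'refs/rewritten/onto' main   # creates refs/heads/refs/rewritten/onto
  # during "git rebase -i --rebase-merges", a todo line such as
  #     reset onto
  # could previously resolve to that branch via the dwim rules instead of
  # erroring out when the refs/rewritten/onto label does not exist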

Signed-off-by: Phillip Wood <phillip.wood@dunelm.org.uk>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-10 23:36:24 -05:00