The function `format_display()` is used to print a single reference
update to a buffer which will then ultimately be printed by the caller.
This architecture causes us to duplicate some logic across the
different callsites of this function, which makes the code hard to
follow: parts of the logic live in one place, while other parts live
somewhere else. Furthermore, with the logic scattered around it becomes
quite hard to implement a new output format for the reference updates.
We can make the logic a whole lot easier to understand by making the
`format_display()` function self-contained so that it handles formatting
and printing of the references. This will eventually allow us to easily
implement a completely different output format, but also opens the door
to conditionally print to either stdout or stderr depending on the
output format.
As a first step towards that goal we move the formatting directive used
by both callers to print a single reference update into this function.
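To illustrate the direction, a minimal sketch of a self-contained
formatter (the signature and formatting details are simplifications for
illustration, not the actual patch):

    static void format_display(struct strbuf *display, char code,
                               const char *summary, const char *error,
                               const char *remote, const char *local,
                               int summary_width)
    {
        strbuf_addf(display, " %c %-*s ", code, summary_width, summary);
        strbuf_addf(display, "%s -> %s", remote, local);
        if (error)
            strbuf_addf(display, "  (%s)", error);

        /* the formatting directive both callers used to carry now lives here */
        fprintf(stderr, "%s\n", display->buf);
    }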
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before printing the names of the local references that would be updated
by a fetch we first prettify the reference name. This is done at the
calling side so that `format_display()` never sees the full name of the
local reference. This restricts our ability to introduce new output
formats that might want to print the full reference name.
Right now, all callsites except one are prettifying the reference name
anyway. And the one callsite that doesn't prettify passes `FETCH_HEAD`
as the hardcoded reference name to `format_display()`, which would never be
changed by a call to `prettify_refname()` anyway. So let's refactor the
code to pass in the full local reference name and then prettify it in
the formatting code.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The git-fetch(1) command supports printing references either in "full"
or "compact" format depending on the `fetch.ouput` config key. The
format that is to be used is tracked in a global variable.
De-globalize the variable by moving it into the `display_state`
structure.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In order to print references in proper columns we need to calculate the
width of the reference column before starting to print the references.
This is done with the help of a global variable `refcol_width`.
Refactor the code to instead use a new structure `display_state` that
contains the computed width and plumb it through the stack as required.
This is only the first step towards de-globalizing the state required to
print references.
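Roughly, the state ends up looking like this (the helper name and the
exact fields are assumptions based on this and the preceding message,
not the patch itself):

    struct display_state {
        int refcol_width;   /* width of the reference column */
        /* the preceding message also moves the full/compact format here */
    };

    /* compute the column width once, before printing any reference */
    static void display_state_init(struct display_state *state,
                                   struct ref *ref_map)
    {
        memset(state, 0, sizeof(*state));
        for (struct ref *rm = ref_map; rm; rm = rm->next) {
            int width = (int)strlen(prettify_refname(rm->name));
            if (width > state->refcol_width)
                state->refcol_width = width;
        }
    }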
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A new "fetch.hideRefs" option can be used to exclude specified refs
from "rev-list --objects --stdin --not --all" traversal for
checking object connectivity, most useful when there are many
unrelated histories in a single repository.
* ew/fetch-hiderefs:
fetch: support hideRefs to speed up connectivity checks
More work towards -Wunused.
* jk/unused-post-2.39-part2: (21 commits)
help: mark unused parameter in git_unknown_cmd_config()
run_processes_parallel: mark unused callback parameters
userformat_want_item(): mark unused parameter
for_each_commit_graft(): mark unused callback parameter
rewrite_parents(): mark unused callback parameter
fetch-pack: mark unused parameter in callback function
notes: mark unused callback parameters
prio-queue: mark unused parameters in comparison functions
for_each_object: mark unused callback parameters
list-objects: mark unused callback parameters
mark unused parameters in signal handlers
run-command: mark error routine parameters as unused
mark "pointless" data pointers in callbacks
ref-filter: mark unused callback parameters
http-backend: mark unused parameters in virtual functions
http-backend: mark argc/argv unused
object-name: mark unused parameters in disambiguate callbacks
serve: mark unused parameters in virtual functions
serve: use repository pointer to get config
ls-refs: drop config caching
...
Code clean-up to clarify the rule that "git-compat-util.h" must be
the first to be included.
* en/header-cleanup:
diff.h: remove unnecessary include of object.h
Remove unnecessary includes of builtin.h
treewide: replace cache.h with more direct headers, where possible
replace-object.h: move read_replace_refs declaration from cache.h to here
object-store.h: move struct object_info from cache.h
dir.h: refactor to no longer need to include cache.h
object.h: stop depending on cache.h; make cache.h depend on object.h
ident.h: move ident-related declarations out of cache.h
pretty.h: move has_non_ascii() declaration from commit.h
cache.h: remove dependence on hex.h; make other files include it explicitly
hex.h: move some hex-related declarations from cache.h
hash.h: move some oid-related declarations from cache.h
alloc.h: move ALLOC_GROW() functions from cache.h
treewide: remove unnecessary cache.h includes in source files
treewide: remove unnecessary cache.h includes
treewide: remove unnecessary git-compat-util.h includes in headers
treewide: ensure one of the appropriate headers is sourced first
It seems a user would expect this option to work regardless
of whether it's fetching from a single remote, many remotes,
or recursing into submodules.
Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
With roughly 800 remotes all fetching into their own
refs/remotes/$REMOTE/* island, the connectivity check[1] gets
expensive for each fetch on systems which lack sufficient RAM to
cache objects.
To do a no-op fetch on one $REMOTE out of hundreds, hideRefs now
allows that fetch to take ~30 seconds instead of ~20 minutes on a
noisy, RAM-constrained machine (localhost, so no network latency):
git -c fetch.hideRefs=refs \
-c fetch.hideRefs='!refs/remotes/$REMOTE/' \
fetch $REMOTE
[1] `git rev-list --objects --stdin --not --all --quiet --alternate-refs`
Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our parallel process API takes several callbacks via function pointers
in the run_process_parallel_opts struct. Not every callback needs every
parameter; let's mark the unused ones to make -Wunused-parameter happy.
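For example, a start_failure callback that needs none of its parameters
would be annotated like this (the callback itself is made up; only the
UNUSED markings matter):

    static int example_start_failure(struct strbuf *out UNUSED,
                                     void *pp_cb UNUSED,
                                     void *pp_task_cb UNUSED)
    {
        return 0;    /* nothing to record; keep the other tasks running */
    }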
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Prior to 51243f9 (run-command API: don't fall back on online_cpus(),
2022-10-12) `git fetch --multiple --jobs=0` would choose some default
number of jobs, similar to `git -c fetch.parallel=0 fetch --multiple`.
While our
documentation only ever promised that `fetch.parallel` would fall back to a
"sensible default", it makes sense to do the same for `--jobs`. So fall back
to online_cpus() and not BUG() out.
This fixes https://github.com/git-for-windows/git/issues/4302
Reported-by: Drew Noakes <drnoakes@microsoft.com>
Signed-off-by: Matthias Aßhauer <mha1993@live.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The bundle-URI subsystem adds support for creation-token heuristics
to help incremental fetches.
* ds/bundle-uri-5:
bundle-uri: test missing bundles with heuristic
bundle-uri: store fetch.bundleCreationToken
fetch: fetch from an external bundle URI
bundle-uri: drop bundle.flag from design doc
clone: set fetch.bundleURI if appropriate
bundle-uri: download in creationToken order
bundle-uri: parse bundle.<id>.creationToken values
bundle-uri: parse bundle.heuristic=creationToken
t5558: add tests for creationToken heuristic
bundle: verify using check_connected()
bundle: test unbundling with incomplete history
When a user specifies a URI via 'git clone --bundle-uri', that URI may
be a bundle list that advertises a 'bundle.heuristic' value. In that
case, the Git client stores that URI in the 'fetch.bundleURI' config
value.
Teach 'git fetch' to check for this config value and download bundles
from that URI before fetching from the Git remote(s). Likely, the bundle
provider has configured a heuristic (such as "creationToken") that will
allow the Git client to download only a portion of the bundles before
continuing the fetch.
Since this URI is completely independent of the remote server, we want
to be sure that we connect to the bundle URI before creating a
connection to the Git remote. We do not want to hold a stateful
connection for too long if we can avoid it.
To test that this works correctly, extend the previous tests that set
'fetch.bundleURI' to do follow-up fetches. The bundle list is updated
incrementally at each phase to demonstrate that the heuristic avoids
downloading older bundles. This includes the middle fetch downloading
the objects in bundle-3.bundle from the Git remote, and therefore not
needing that bundle in the third fetch.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fetching in parallel from a remote group with a duplicated remote results
in the following:
error: cannot lock ref '<ref>': is at <oid> but expected <oid>
This doesn't happen with serial fetches, since fetching from a remote
that has already been fetched from is a no-op. Therefore, remove any
duplicated remotes after remote groups are parsed.
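A minimal sketch of that deduplication, assuming the expanded remote
names are collected in a string_list (the helper name is illustrative):

    static void dedup_remotes(struct string_list *remotes)
    {
        /* string_list_remove_duplicates() requires a sorted list */
        string_list_sort(remotes);
        string_list_remove_duplicates(remotes, 0);
    }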
Signed-off-by: Calvin Wan <calvinwan@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Simplify the run-command API.
* rs/no-more-run-command-v:
replace and remove run_command_v_opt()
replace and remove run_command_v_opt_cd_env_tr2()
replace and remove run_command_v_opt_tr2()
replace and remove run_command_v_opt_cd_env()
use child_process members "args" and "env" directly
use child_process member "args" instead of string array variable
sequencer: simplify building argument list in do_exec()
bisect--helper: factor out do_bisect_run()
bisect: simplify building "checkout" argument list
am: simplify building "show" argument list
run-command: fix return value comment
merge: remove always-the-same "verbose" arguments
Replace the remaining calls of run_command_v_opt() with run_command()
calls and explicit struct child_process variables. This is more verbose,
but not by much overall. The code becomes more flexible, e.g. it's easy
to extend to conditionally add a new argument.
Then remove the now unused function and its own flag names, simplifying
the run-command API.
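As a rough before/after illustration (the command and the "quiet" flag
are made up for the example):

    static void run_gc_auto(int quiet)
    {
        /*
         * before:
         *   const char *argv[] = { "gc", "--auto", NULL };
         *   run_command_v_opt(argv, RUN_GIT_CMD);
         */
        struct child_process cmd = CHILD_PROCESS_INIT;

        cmd.git_cmd = 1;
        strvec_pushl(&cmd.args, "gc", "--auto", NULL);
        if (quiet)
            strvec_push(&cmd.args, "--quiet");  /* easy to add conditionally */
        run_command(&cmd);
    }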
Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Have the users of the "run_processes_parallel_tr2()" function use
"run_processes_parallel()" instead. In preceding commits the latter
was refactored to take a "struct run_process_parallel_opts" argument,
since the only reason for "run_processes_parallel_tr2()" to exist was
to take arguments that are now a part of that struct we can do away
with it.
See ee4512ed48 (trace2: create new combined trace facility,
2019-02-22) for the addition of the "*_tr2()" variant of the function,
it was used by every caller except "t/helper/test-run-command.c"..
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a "jobs = 0" is passed let's BUG() out rather than fall back on
online_cpus(). The default behavior was added when this API was
implemented in c553c72eed (run-command: add an asynchronous parallel
child processor, 2015-12-15).
Most of our code in-tree that scales up to "online_cpus()" by default
calls that function by itself. Keeping this default behavior just for
the sake of two callers means that we'd need to maintain this one spot
where we're second-guessing the config passed down into pp_init().
The preceding commit has an overview of the API callers that passed
"jobs = 0". There were only two of them (actually three, but they
resolved to these two config parsing codepaths).
The "fetch.parallel" caller already had a test for the
"fetch.parallel=0" case added in 0353c68818 (fetch: do not run a
redundant fetch from submodule, 2022-05-16), but there was no such
test for "submodule.fetchJobs". Let's add one here.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change the "run_processes_parallel{,_tr2}()" functions to return void,
instead of int. Ever since c553c72eed (run-command: add an
asynchronous parallel child processor, 2015-12-15) they have
unconditionally returned 0.
To get a "real" return value out of this function the caller needs to
get it via the "task_finished_fn" callback, see the example in hook.c
added in 96e7225b31 (hook: add 'run' subcommand, 2021-12-22).
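For illustration, such a callback might look like the following sketch
(names invented), with the caller inspecting its own state afterwards
instead of the function's return value:

    /* pp_cb is the caller-owned cookie handed to run_processes_parallel() */
    static int example_task_finished(int result, struct strbuf *out UNUSED,
                                     void *pp_cb, void *pp_task_cb UNUSED)
    {
        int *any_failed = pp_cb;

        if (result)
            *any_failed = 1;
        return 0;
    }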
So the "result = " and "if (!result)" code added to "builtin/fetch.c"
d54dea77db (fetch: let --jobs=<n> parallelize --multiple, too,
2019-10-05) has always been redundant, we always took that "if"
path. Likewise the "ret =" in "t/helper/test-run-command.c" added in
be5d88e112 (test-tool run-command: learn to run (parts of) the
testsuite, 2019-10-04) wasn't used, instead we got the return value
from the "if (suite.failed.nr > 0)" block seen in the context.
Subsequent commits will alter this API interface, getting rid of this
always-zero return value makes it easier to understand those changes.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A couple of bugfixes with code clean-up.
* jk/list-objects-filter-cleanup:
list-objects-filter: convert filter_spec to a strbuf
list-objects-filter: add and use initializers
list-objects-filter: handle null default filter spec
list-objects-filter: don't memset after releasing filter struct
"git fetch" over protocol v2 sent an incorrect ref prefix request
to the server and made "git pull" with configured fetch refspec
that does not cover the remote branch to merge with fail, which has
been corrected.
* jk/proto-v2-ref-prefix-fix:
fetch: add branch.*.merge to default ref-prefix extension
fetch: stop checking for NULL transport->remote in do_fetch()
Undoes 'jk/unused-annotation' topic and redoes it to work around
Coccinelle rules misfiring false positives in unrelated codepaths.
* ab/unused-annotation:
git-compat-util.h: use "deprecated" for UNUSED variables
git-compat-util.h: use "UNUSED", not "UNUSED(var)"
Annotate function parameters that are not used (but cannot be
removed for structural reasons), to prepare us to later compile
with -Wunused warning turned on.
* jk/unused-annotation:
is_path_owned_by_current_uid(): mark "report" parameter as unused
run-command: mark unused async callback parameters
mark unused read_tree_recursive() callback parameters
hashmap: mark unused callback parameters
config: mark unused callback parameters
streaming: mark unused virtual method parameters
transport: mark bundle transport_options as unused
refs: mark unused virtual method parameters
refs: mark unused reflog callback parameters
refs: mark unused each_ref_fn parameters
git-compat-util: add UNUSED macro
In 7e2619d8ff (list_objects_filter_options: plug leak of filter_spec
strings, 2022-09-08), we noted that the filter_spec string_list was
inconsistent in how it handled memory ownership of strings stored in the
list. The fix there was a bit of a band-aid to set the "strdup_strings"
variable right before adding anything.
That works OK, and it lets the users of the API continue to
zero-initialize the struct. But it makes the code a bit hard to follow
and accident-prone, as any other spots appending the filter_spec need to
think about whether to set the strdup_strings value, too (there's one
such spot in partial_clone_get_default_filter_spec(), which is probably
a memory leak).
So let's do that full cleanup now. We'll introduce a
LIST_OBJECTS_FILTER_INIT macro and matching function, and use them as
appropriate (though the macro is for the "_options" struct, it matches
the corresponding list_objects_filter_release() function).
This is harder than it seems! Many other structs, like
git_transport_data, embed the filter struct. So they need to initialize
it themselves even if the rest of the enclosing struct is OK with
zero-initialization. I found all of the relevant spots by grepping
manually for declarations of list_objects_filter_options. And then doing
so recursively for structs which embed it, and ones which embed those,
and so on.
I'm pretty sure I got everything, but there's no change that would alert
the compiler if any topics in flight added new declarations. To catch
this case, we now double-check in the parsing function that things were
initialized as expected and BUG() if appropriate.
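The resulting usage pattern looks roughly like this (the embedding
struct is invented for the example; the macro and init function are the
ones introduced here):

    /* standalone declaration */
    struct list_objects_filter_options filter_options =
            LIST_OBJECTS_FILTER_INIT;

    /*
     * Embedded in an otherwise zero-initialized struct: the enclosing
     * code must now initialize the member explicitly.
     */
    struct example_transport_data {
        struct list_objects_filter_options options;
    };

    static void example_transport_data_init(struct example_transport_data *data)
    {
        memset(data, 0, sizeof(*data));
        list_objects_filter_init(&data->options);
    }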
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When running "git pull" with no arguments, we'll do a default "git
fetch" and then try to merge the branch specified by the branch.*.merge
config. There's code in get_ref_map() to treat that "merge" branch as
something we want to fetch, even if it is not otherwise covered by the
default refspec.
This works fine with the v0 protocol, as the server tells us about all
of the refs, and get_ref_map() is the ultimate decider of what we fetch.
But in the v2 protocol, we send the ref-prefix extension to the server,
asking it to limit the ref advertisement. And we only tell it about the
default refspec for the remote; we don't mention the branch.*.merge
config at all.
This usually doesn't matter, because the default refspec matches
"refs/heads/*", which covers all branches. But if you explicitly use a
narrow refspec, then "git pull" on some branches may fail. The server
doesn't advertise the branch, so we don't fetch it, and "git pull"
thinks that it went away upstream.
We can fix this by including any branch.*.merge entries for the current
branch in the list of ref-prefixes we pass to the server. This only
needs to happen when using the default configured refspec (since
command-line refspecs are already added, and take precedence in deciding
what we fetch). We don't otherwise need to replicate any of the "what to
fetch" logic in get_ref_map(). These ref-prefixes are an optimization,
so it's OK if we tell the server to advertise the branch.*.merge ref,
even if we're not going to pull it. We'll just choose not to fetch it.
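Conceptually the fix boils down to something like this sketch, run only
when we fall back to the configured refspec (the function name is
invented; the branch/refspec fields are git's existing ones):

    static void add_merge_config_prefixes(struct strvec *ref_prefixes,
                                          const struct branch *branch)
    {
        int i;

        if (!branch)
            return;
        /* ask the server to also advertise whatever we intend to merge */
        for (i = 0; i < branch->merge_nr; i++)
            strvec_push(ref_prefixes, branch->merge[i]->src);
    }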
The test here is based on one constructed by Johannes. I modified the
branch names to trigger the ref-prefix issue (and be more descriptive),
and to confirm that "git pull" actually updated the local ref, which
should be more robust than just checking stderr.
Reported-by: Lana Deere <lana.deere@gmail.com>
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This field will never be NULL; if it were, we'd segfault earlier in the
function when we unconditionally check transport->remote->fetch_tags.
Likewise, many other functions dereference it unconditionally.
This is a small simplification, but it will make things easier as we
extend this conditional in the next patch.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As reported in [1] the "UNUSED(var)" macro introduced in
2174b8c75d (Merge branch 'jk/unused-annotation' into next,
2022-08-24) breaks coccinelle's parsing of our sources in files where
it occurs.
Let's instead partially go with the approach suggested in [2] of
making this not take an argument. As noted in [1] "coccinelle" will
ignore such tokens in argument lists that it doesn't know about, and
it's less of a surprise to syntax highlighters.
This undoes the "help us notice when a parameter marked as unused is
actually use" part of 9b24034754 (git-compat-util: add UNUSED macro,
2022-08-19), a subsequent commit will further tweak the macro to
implement a replacement for that functionality.
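In concrete terms the macro goes from a function-like form to a bare
attribute token, roughly like this (the definitions are paraphrased,
not quoted from the patch):

    /* before: UNUSED(var) renamed the parameter, which confused coccinelle */
    /* after: a bare marker that tools which don't know it simply ignore */
    #if defined(__GNUC__)
    #define UNUSED __attribute__((unused))
    #else
    #define UNUSED
    #endif

    /* usage, e.g. in an each_ref_fn callback */
    static int example_ref_cb(const char *refname,
                              const struct object_id *oid UNUSED,
                              int flags UNUSED, void *cb_data UNUSED)
    {
        return 0;
    }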
1. https://lore.kernel.org/git/220825.86ilmg4mil.gmgdl@evledraar.gmail.com/
2. https://lore.kernel.org/git/220819.868rnk54ju.gmgdl@evledraar.gmail.com/
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The namespaces used by "log --decorate" from "refs/" hierarchy by
default has been tightened.
* ds/decorate-filter-tweak:
fetch: use ref_namespaces during prefetch
maintenance: stop writing log.excludeDecoration
log: create log.initialDecorationSet=all
log: add --clear-decorations option
log: add default decoration filter
log-tree: use ref_namespaces instead of if/else-if
refs: use ref_namespaces for replace refs base
refs: add array of ref namespaces
t4207: test coloring of grafted decorations
t4207: modernize test
refs: allow "HEAD" as decoration filter
Hashmap comparison functions must conform to a particular callback
interface, but many don't use all of their parameters. This is
especially true of the void cmp_data pointer, but some do not use
keydata either (because they can easily form a full struct to pass when
doing lookups). Let's mark
these to make -Wunused-parameter happy.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Functions used with for_each_ref(), etc, need to conform to the
each_ref_fn interface. But most of them don't need every parameter;
let's annotate the unused ones to quiet -Wunused-parameter.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "refs/prefetch/" namespace is used by 'git fetch --prefetch' as a
replacement of the destination of the refpsec for a remote. Git also
removes refspecs that include tags.
Instead of using string literals for the 'refs/tags/ and
'refs/prefetch/' namespaces, use the entries in the ref_namespaces
array.
This kind of change could be done in many places around the codebase,
but we are limiting it to this change because of the way the
refs/prefetch/ namespace somewhat motivated the creation of the
ref_namespaces array.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add Coccinelle rules to detect the pattern of initializing and then
finalizing a structure without using it in between at all, which can
happen after code restructuring and which compilers fail to recognize
as an unused variable.
* ab/cocci-unused:
cocci: generalize "unused" rule to cover more than "strbuf"
cocci: add and apply a rule to find "unused" strbufs
cocci: have "coccicheck{,-pending}" depend on "coccicheck-test"
cocci: add a "coccicheck-test" target and test *.cocci rules
Makefile & .gitignore: ignore & clean "git.res", not "*.res"
Makefile: remove mandatory "spatch" arguments from SPATCH_FLAGS
Introduce a helper to see if a branch is already being worked on
(hence should not be newly checked out in a working tree), which
performs much better than the existing find_shared_symref() to
replace many uses of the latter.
* ds/branch-checked-out:
branch: drop unused worktrees variable
fetch: stop passing around unused worktrees variable
branch: fix branch_checked_out() leaks
branch: use branch_checked_out() when deleting refs
fetch: use new branch_checked_out() and add tests
branch: check for bisects and rebases
branch: add branch_checked_out() helper
Add a coccinelle rule to remove "struct strbuf" initialization
followed by calling "strbuf_release()" function, without any uses of
the strbuf in the same function.
See the tests in contrib/coccinelle/tests/unused.{c,res} for what it's
intended to find and replace.
The inclusion of "contrib/scalar/scalar.c" is because "spatch" was
manually run on it (we don't usually run spatch on contrib).
Per the "buggy code" comment we also match a strbuf_init() before the
xmalloc(), but we're not seeking to be so strict as to make checks
that the compiler will catch for us redundant. Saying we'll match
either "init" or "xmalloc" lines makes the rule simpler.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 12d47e3b1f (fetch: use new branch_checked_out() and add tests,
2022-06-14), fetch's update_local_ref() function stopped using its
"worktrees" parameter. It doesn't need it, since the
branch_checked_out() function examines the global worktrees under the
hood.
So we can not only drop the unused parameter from that function, but
also from its entire call chain. And as we do so all the way up to
do_fetch(), we can see that nobody uses it at all, and we can drop the
local variable there entirely.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When fetching refs from a remote, it is possible that the refspec will
cause us to overwrite a ref that is checked out in a worktree. The
existing logic in builtin/fetch.c uses a possibly-slow mechanism. Update
those sections to use the new, more efficient branch_checked_out()
helper.
These uses were not previously tested, so add a test case that can be
used for these kinds of collisions. There is only one test now, but more
tests will be added as other consumers of branch_checked_out() are
added.
Note that there are two uses in builtin/fetch.c, but only one of the
messages is tested. This is because the tested check is run before
completing the fetch, and the untested check is not reachable without
concurrent updates to the filesystem. Thus, it is beneficial to keep
that extra check for the sake of defense-in-depth. However, we should
not attempt to test the check, as doing so is too complicated to be
worth the effort. This use in update_local_ref()
also requires a change in the error message because we no longer have
access to the worktree struct, only the path of the worktree. This error
is so rare that making a distinction between the two is not critical.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fetch --recurse-submodules" from multiple remotes (either from
a remote group, or "--all") used to make one extra "git fetch" in
the submodules, which has been corrected.
* jc/avoid-redundant-submodule-fetch:
fetch: do not run a redundant fetch from submodule
When 7dce19d3 (fetch/pull: Add the --recurse-submodules option,
2010-11-12) introduced the "--recurse-submodules" option, the
approach taken was to perform fetches in submodules only once, after
all the main fetching (it may usually be a fetch from a single
remote, but it could be fetching from a group of remotes using
fetch_multiple()) succeeded. Later we added "--all" to fetch from
all defined remotes, which complicated things even more.
If your project has a submodule, and you try to run "git fetch
--recurse-submodule --all", you'd see a fetch for the top-level,
which invokes another fetch for the submodule, followed by another
fetch for the same submodule. All but the last fetch for the
submodule come from a "git fetch --recurse-submodules" subprocess
that is spawned via the fetch_multiple() interface for the remotes,
and the last fetch comes from the code at the end.
Because recursive fetching from submodules is done in each fetch for
the top-level in fetch_multiple(), the last fetch in the submodule
is redundant. It only matters when fetch_one() interacts with a
single remote at the top-level.
While we are at it, there is one optimization that exists in dealing
with a group of remotes, but is missing when "--all" is used. In the
former, when the group turns out to be a group of one, instead of
spawning "git fetch" as a subprocess via the fetch_multiple()
interface, we use the normal fetch_one() code path. Do the same
when handling "--all", if it turns out that we have only one remote
defined.
Reviewed-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This check was introduced in 8ee5d73137 (Fix fetch/pull when run without
--update-head-ok, 2008-10-13) in order to protect against replacing the ref
of the active branch by mistake, for example by running git fetch origin
master:master.
It was later extended in 8bc1f39f41 (fetch: protect branches checked out
in all worktrees, 2021-12-01) to scan all worktrees.
This operation is very expensive (takes about 30s in my repository) when
there are many tags or branches, and it is executed on every fetch, even if
no local heads are updated at all.
Limit it to protect only refs/heads/* to improve fetch performance.
Signed-off-by: Orgad Shaneh <orgads@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fetch --refetch" learned to fetch everything without telling
the other side what we already have, which is useful when you
cannot trust what you have in the local object store.
* rc/fetch-refetch:
docs: mention --refetch fetch option
fetch: after refetch, encourage auto gc repacking
t5615-partial-clone: add test for fetch --refetch
fetch: add --refetch option
builtin/fetch-pack: add --refetch option
fetch-pack: add refetch
fetch-negotiator: add specific noop initializer
After invoking `fetch --refetch`, the object db will likely contain many
duplicate objects. If auto-maintenance is enabled, invoke it with
appropriate settings to encourage repacking/consolidation.
* gc.autoPackLimit: unless this is set to 0 (disabled), override the
value to 1 to force pack consolidation.
* maintenance.incremental-repack.auto: unless this is set to 0, override
the value to -1 to force incremental repacking.
Signed-off-by: Robert Coup <robert@coup.net.nz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach fetch and transports the --refetch option to force a full fetch
without negotiating common commits with the remote. Use when applying a
new partial clone filter to refetch all matching objects.
Signed-off-by: Robert Coup <robert@coup.net.nz>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When "git fetch --recurse-submodules" grabbed submodule commits
that would be needed to recursively check out newly fetched commits
in the superproject, it only paid attention to submodules that are
in the current checkout of the superproject. We now do so for all
submodules on which "git submodule init" has been run.
* gc/recursive-fetch-with-unused-submodules:
submodule: fix latent check_has_commit() bug
fetch: fetch unpopulated, changed submodules
submodule: move logic into fetch_task_create()
submodule: extract get_fetch_task()
submodule: store new submodule commits oid_array in a struct
submodule: inline submodule_commits() into caller
submodule: make static functions read submodules from commits
t5526: create superproject commits with test helper
t5526: stop asserting on stderr literally
t5526: introduce test helper to assert on fetches
Various optimization for "git fetch".
* ps/fetch-mirror-optim:
refs/files-backend: optimize reading of symbolic refs
remote: read symbolic refs via `refs_read_symbolic_ref()`
refs: add ability for backends to special-case reading of symbolic refs
fetch: avoid lookup of commits when not appending to FETCH_HEAD
upload-pack: look up "want" lines via commit-graph
"git fetch --recurse-submodules" only considers populated
submodules (i.e. submodules that can be found by iterating the index),
which makes "git fetch" behave differently based on which commit is
checked out. As a result, even if the user has initialized all submodules
correctly, they may not fetch the necessary submodule commits, and
commands like "git checkout --recurse-submodules" might fail.
Teach "git fetch" to fetch cloned, changed submodules regardless of
whether they are populated. This is in addition to the current behavior
of fetching populated submodules (which is always attempted regardless
of what was fetched in the superproject, or even if nothing was fetched
in the superproject).
A submodule may be encountered multiple times (via the list of
populated submodules or via the list of changed submodules). When this
happens, "git fetch" only reads the 'populated copy' and ignores the
'changed copy'. Amend the verify_fetch_result() test helper so that we
can assert on which 'copy' is being read.
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fetch" can make two separate fetches, but ref updates coming
from them were in two separate ref transactions under "--atomic",
which has been corrected.
* ps/fetch-atomic:
fetch: make `--atomic` flag cover pruning of refs
fetch: make `--atomic` flag cover backfilling of tags
refs: add interface to iterate over queued transactional updates
fetch: report errors when backfilling tags fails
fetch: control lifecycle of FETCH_HEAD in a single place
fetch: backfill tags before setting upstream
fetch: increase test coverage of fetches
When fetching from a remote repository we will by default write what has
been fetched into the special FETCH_HEAD reference. The order in which
references are written depends on whether the reference is for merge or
not, which, among other conditions, is also determined based on
whether the old object ID the reference is being updated from actually
exists in the repository.
To write FETCH_HEAD we thus loop through all references thrice: once for
the references that are about to be merged, once for the references that
are not for merge, and finally for all references that are ignored. For
every iteration, we then look up the old object ID to determine whether
the referenced object exists so that we can label it as "not-for-merge"
if it doesn't exist. It goes without saying that this can be expensive
in cases where we are fetching a lot of references.
While this is hard to avoid in the case where we're writing FETCH_HEAD,
users can in fact ask us to skip this work via `--no-write-fetch-head`.
In that case, we do not care for the result of those lookups at all
because we don't have to order writes to FETCH_HEAD in the first place.
Skip this busywork in case we're not writing to FETCH_HEAD. The
following benchmark performs a mirror-fetch in a repository with about
two million references via `git fetch --prune --no-write-fetch-head
+refs/*:refs/*`:
Benchmark 1: HEAD~
Time (mean ± σ): 75.388 s ± 1.942 s [User: 71.103 s, System: 8.953 s]
Range (min … max): 73.184 s … 76.845 s 3 runs
Benchmark 2: HEAD
Time (mean ± σ): 69.486 s ± 1.016 s [User: 65.941 s, System: 8.806 s]
Range (min … max): 68.864 s … 70.659 s 3 runs
Summary
'HEAD' ran
1.08 ± 0.03 times faster than 'HEAD~'
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Unify more messages to help l10n.
* ja/i18n-common-messages:
i18n: fix some misformated placeholders in command synopsis
i18n: remove from i18n strings that do not hold translatable parts
i18n: factorize "invalid value" messages
i18n: factorize more 'incompatible options' messages
A couple of optimizations to "git fetch".
* ps/fetch-optim-with-commit-graph:
fetch: skip computing output width when not printing anything
fetch-pack: use commit-graph when computing cutoff
"git cmd -h" outside a repository should error out cleanly for many
commands, but instead it hit a BUG(), which has been corrected.
* js/short-help-outside-repo-fix:
t0012: verify that built-ins handle `-h` even without gitdir
checkout/fetch/pull/pack-objects: allow `-h` outside a repository
When fetching with the `--prune` flag we will delete any local
references matching the fetch refspec which have disappeared on the
remote. This step is not currently covered by the `--atomic` flag: we
delete branches even though updating of local references has failed,
which means that the fetch is not an all-or-nothing operation.
Fix this bug by passing in the global transaction into `prune_refs()`:
if one is given, then we'll only queue up deletions and not commit them
right away.
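A hedged sketch of that queue-only behaviour (the function shape is
simplified and error handling trimmed):

    static int prune_ref(struct ref_transaction *transaction,
                         const struct ref *ref, struct strbuf *err)
    {
        if (transaction)
            /* only queue the deletion; the caller commits atomically */
            return ref_transaction_delete(transaction, ref->name,
                                          &ref->old_oid, 0,
                                          "fetch: prune", err);
        /* no transaction given: delete right away, as before */
        return delete_ref(NULL, ref->name, NULL, 0);
    }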
This change also improves performance when pruning many branches in a
repository with a big packed-refs file: every reference is pruned in
its own transaction, which means that we potentially have to rewrite
the packed-refs file for every single reference we're about to prune.
The following benchmark demonstrates this: it performs a pruning fetch
from a repository with a single reference into a repository with 100k
references, which causes us to prune all but one reference. This is of
course a very artificial setup, but serves to demonstrate the impact of
only having to write the packed-refs file once:
Benchmark 1: git fetch --prune --atomic +refs/*:refs/* (HEAD~)
Time (mean ± σ): 2.366 s ± 0.021 s [User: 0.858 s, System: 1.508 s]
Range (min … max): 2.328 s … 2.407 s 10 runs
Benchmark 2: git fetch --prune --atomic +refs/*:refs/* (HEAD)
Time (mean ± σ): 1.369 s ± 0.017 s [User: 0.715 s, System: 0.641 s]
Range (min … max): 1.346 s … 1.400 s 10 runs
Summary
'git fetch --prune --atomic +refs/*:refs/* (HEAD)' ran
1.73 ± 0.03 times faster than 'git fetch --prune --atomic +refs/*:refs/* (HEAD~)'
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When fetching references from a remote we by default also fetch all tags
which point into the history we have fetched. This is a separate step
performed after updating local references because it requires us to walk
over the history on the client-side to determine whether the remote has
announced any tags which point to one of the fetched commits.
This backfilling of tags isn't covered by the `--atomic` flag: right
now, it only applies to the step where we update our local references.
This was an oversight at the time the flag was introduced: its purpose is
to either update all references or none, but right now we happily update
local references even in the case where backfilling failed.
Fix this by pulling up creation of the reference transaction such that
we can pass the same transaction to both the code which updates local
references and to the code which backfills tags. This allows us to only
commit the transaction in case both actions succeed.
Note that we also have to start passing the transaction into
`find_non_local_tags()`: this function is responsible for finding all
tags which we need to backfill. Right now, it will happily return tags
which have already been updated with our local references. But when we
use a single transaction for both local references and backfilling then
it may happen that we try to queue the same reference update twice to
the transaction, which consequently triggers a bug. We thus have to skip
over any tags which have already been queued.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the backfilling of tags fails we do not report this error to the
caller, but only report it implicitly at a later point when reporting
updated references. This leaves callers unable to act upon the
information of whether the backfilling succeeded or not.
Refactor the function to return an error code and pass it up the
callstack. This causes us to correctly propagate the error back to the
user of git-fetch(1).
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are two different locations where we're appending to FETCH_HEAD:
first when storing updated references, and second when backfilling tags.
Both times we open the file, append to it and then commit it into place,
which is essentially duplicate work.
Improve the lifecycle of updating FETCH_HEAD by opening and committing
it once in `do_fetch()`, where we pass the structure down to the code
which wants to append to it.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The fetch code flow is a bit hard to understand right now:
1. We optionally prune all references which have vanished on the
remote side.
2. We fetch and update all other references locally.
3. We update the upstream branch in the gitconfig.
4. We backfill tags pointing into the history we have just fetched.
It is quite confusing that we fetch objects and update references in
both (2) and (4), which is further stressed by the point that we use a
`skip` goto label to jump from (3) to (4) in case we fail to update the
gitconfig as expected.
Reorder the code to first update all local references, and only after we
have done so update the upstream branch information. This improves the
code flow and furthermore makes it easier to refactor the way we update
references together.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When "git fetch --prune" failed to prune the refs it wanted to
prune, the command issued error messages but exited with exit
status 0, which has been corrected.
* tg/fetch-prune-exit-code-fix:
fetch --prune: exit with error if pruning fails
When updating references via git-fetch(1), by default we report to
the user which references have been changed. This output is formatted in
a nice table such that the different columns are aligned. Because the
first column contains abbreviated object IDs, we need to iterate
over all refs which have changed and compute the minimum length for
their respective abbreviated hashes. While this effort makes sense in
most cases, it is wasteful when the user passes the `--quiet` flag: we
don't print the summary, but still compute the length.
Skip computing the summary width when the user asked for us to be quiet.
This gives us a speedup of nearly 10% when doing a mirror-fetch in a
repository with thousands of references being updated:
Benchmark 1: git fetch --quiet +refs/*:refs/* (HEAD~)
Time (mean ± σ): 96.078 s ± 0.508 s [User: 91.378 s, System: 10.870 s]
Range (min … max): 95.449 s … 96.760 s 5 runs
Benchmark 2: git fetch --quiet +refs/*:refs/* (HEAD)
Time (mean ± σ): 88.214 s ± 0.192 s [User: 83.274 s, System: 10.978 s]
Range (min … max): 87.998 s … 88.446 s 5 runs
Summary
'git fetch --quiet +refs/*:refs/* (HEAD)' ran
1.09 ± 0.01 times faster than 'git fetch --quiet +refs/*:refs/* (HEAD~)'
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fetch --negotiate-only" is an internal command used by "git
push" to figure out which part of our history is missing from the
other side. It should never recurse into submodules even when the
fetch.recurseSubmodules configuration variable is set, nor should it
trigger "gc". The code has been tightened up to ensure it
only does common ancestry discovery and nothing else.
* gc/fetch-negotiate-only-early-return:
fetch: help translators by reusing the same message template
fetch --negotiate-only: do not update submodules
fetch: skip tasks related to fetching objects
fetch: use goto cleanup in cmd_fetch()
When we taught these commands about the sparse index, we did not account
for the fact that the `cmd_*()` functions _can_ be called without a
gitdir, namely when `-h` is passed to show the usage.
A plausible approach to address this is to move the
`prepare_repo_settings()` calls right after the `parse_options()` calls:
The latter will never return when it handles `-h`, and therefore it is
safe to assume that we have a `gitdir` at that point, as long as the
built-in is marked with the `RUN_SETUP` flag.
However, it is unfortunately not that simple. In `cmd_pack_objects()`,
for example, the repo settings need to be fully populated so that the
command-line options `--sparse`/`--no-sparse` can override them, not the
other way round.
Therefore, we choose to imitate the strategy taken in `cmd_diff()`,
where we simply do not bother to prepare and initialize the repo
settings unless we have a `gitdir`.
This fixes https://github.com/git-for-windows/git/issues/3688
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a memory leak in codepaths that use the "struct
transport_ls_refs_options" API. Since the introduction of the struct
in 39835409d1 (connect, transport: encapsulate arg in struct,
2021-02-05) the caller has been responsible for freeing it.
That commit in turn migrated code originally added in
402c47d939 (clone: send ref-prefixes when using protocol v2,
2018-07-20) and b4be74105f (ls-remote: pass ref prefixes when
requesting a remote's refs, 2018-03-15). Only some of those codepaths
were releasing the allocated resources of the struct, now all of them
will.
Mark the "t/t5511-refspec.sh" test as passing when git is compiled
with SANITIZE=leak. They'll now be listed as running under the
"GIT_TEST_PASSING_SANITIZE_LEAK=true" test mode (the "linux-leaks" CI
target). Previously 24/47 tests would fail.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use the same message when an invalid value is passed to a command line
option or a configuration variable.
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When pruning refs fails, we print an error to stderr, but still
exit 0 from 'git fetch'. Since this is a genuine error, fetch
should be exiting with some non-zero exit code. Make it so.
The --prune option was introduced in f360d844de ("builtin-fetch: add
--prune option", 2009-11-10). Unfortunately it's unclear from that
commit whether ignoring the exit code was an oversight or
intentional, but it feels like an oversight.
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The error message when invoking a negotiate-only fetch without providing
any tips incorrectly refers to a --negotiate-tip=* argument. Fix this to
use the actual argument, --negotiation-tip=*.
Signed-off-by: Robert Coup <robert@coup.net.nz>
Reviewed-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Follow the example set by 12909b6b (i18n: turn "options are
incompatible" into "cannot be used together", 2022-01-05) and use
the same message string to reduce the need for translation.
Reported-by: Jiang Xin <worldhello.net@gmail.com>
Helped-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
`git fetch --negotiate-only` is an implementation detail of push
negotiation and, unlike most `git fetch` invocations, does not actually
update the main repository. Thus it should not update submodules even
if submodule recursion is enabled.
This is not just slow, it is wrong: e.g. push negotiation with
"submodule.recurse=true" will cause submodules to be updated because it
invokes `git fetch --negotiate-only`.
Fix this by disabling submodule recursion if --negotiate-only was given.
Since this makes --negotiate-only and --recurse-submodules incompatible,
check for this invalid combination and die.
This does not use the "goto cleanup" introduced in the previous commit
because we want to recurse through submodules whenever a ref is fetched,
and this can happen without introducing new objects.
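The guard itself is conceptually just a few lines in the option
handling (the variable names below are illustrative, not the exact
ones in builtin/fetch.c):

    if (negotiate_only) {
        if (recurse_submodules_cli != RECURSE_SUBMODULES_DEFAULT)
            die(_("options '%s' and '%s' cannot be used together"),
                "--negotiate-only", "--recurse-submodules");
        recurse_submodules = RECURSE_SUBMODULES_OFF;
    }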
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
cmd_fetch() does the following with the assumption that objects are
fetched:
* Run gc
* Write commit graphs (if enabled by fetch.writeCommitGraph=true)
However, neither of these tasks makes sense if objects are not fetched,
e.g. `git fetch --negotiate-only` never fetches objects.
Speed up cmd_fetch() by bailing out early if we know for certain that
objects will not be fetched. cmd_fetch() can bail out early whenever
objects are not fetched, but for now this only considers
--negotiate-only.
The same optimization does not apply to `git fetch --dry-run` because
that actually fetches objects; the dry run refers to not updating refs.
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Replace an early return with 'goto cleanup' in cmd_fetch() so that the
string_list is always cleared (the string_list_clear() call is purely
cleanup; the string_list is not reused). This makes cleanup consistent
so that a subsequent commit can use 'goto cleanup' to bail out early.
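The pattern in question, sketched out (do_something() and
do_something_else() are stand-ins):

    int cmd_example(void)
    {
        struct string_list list = STRING_LIST_INIT_DUP;
        int result;

        result = do_something(&list);
        if (result)
            goto cleanup;            /* bail out early, cleanup still runs */

        result = do_something_else(&list);

    cleanup:
        string_list_clear(&list, 0); /* always clear; the list is not reused */
        return result;
    }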
Signed-off-by: Glen Choo <chooglen@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some lockfile code called free() in the signal-death code path, which
has been corrected.
* ps/lockfile-cleanup-fix:
fetch: fix deadlock when cleaning up lockfiles in async signals
Similar message templates have been consolidated so that
translators need to work on a smaller number of messages.
* ja/i18n-similar-messages:
i18n: turn even more messages into "cannot be used together" ones
i18n: ref-filter: factorize "%(foo) atom used without %(bar) atom"
i18n: factorize "--foo outside a repository"
i18n: refactor "unrecognized %(foo) argument" strings
i18n: factorize "no directory given for --foo"
i18n: factorize "--foo requires --bar" and the like
i18n: tag.c factorize i18n strings
i18n: standardize "cannot open" and "cannot read"
i18n: turn "options are incompatible" into "cannot be used together"
i18n: refactor "%s, %s and %s are mutually exclusive"
i18n: refactor "foo and bar are mutually exclusive"
"git fetch" and "git pull" are now declared sparse-index clean.
Also "git ls-files" learns the "--sparse" option to help debugging.
* ds/fetch-pull-with-sparse-index:
test-read-cache: remove --table, --expand options
t1091/t3705: remove 'test-tool read-cache --table'
t1092: replace 'read-cache --table' with 'ls-files --sparse'
ls-files: add --sparse option
fetch/pull: use the sparse index
When fetching packfiles, we write a bunch of lockfiles for the packfiles
we're writing into the repository. In order to not leave behind any
cruft in case we exit or receive a signal, we register both an exit
handler as well as signal handlers for common signals like SIGINT. These
handlers will then unlink the locks and free the data structure tracking
them. We have observed a deadlock in this logic though:
(gdb) bt
#0 __lll_lock_wait_private () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:95
#1 0x00007f4932bea2cd in _int_free (av=0x7f4932f2eb20 <main_arena>, p=0x3e3e4200, have_lock=0) at malloc.c:3969
#2 0x00007f4932bee58c in __GI___libc_free (mem=<optimized out>) at malloc.c:2975
#3 0x0000000000662ab1 in string_list_clear ()
#4 0x000000000044f5bc in unlock_pack_on_signal ()
#5 <signal handler called>
#6 _int_free (av=0x7f4932f2eb20 <main_arena>, p=<optimized out>, have_lock=0) at malloc.c:4024
#7 0x00007f4932bee58c in __GI___libc_free (mem=<optimized out>) at malloc.c:2975
#8 0x000000000065afd5 in strbuf_release ()
#9 0x000000000066ddb9 in delete_tempfile ()
#10 0x0000000000610d0b in files_transaction_cleanup.isra ()
#11 0x0000000000611718 in files_transaction_abort ()
#12 0x000000000060d2ef in ref_transaction_abort ()
#13 0x000000000060d441 in ref_transaction_prepare ()
#14 0x000000000060e0b5 in ref_transaction_commit ()
#15 0x00000000004511c2 in fetch_and_consume_refs ()
#16 0x000000000045279a in cmd_fetch ()
#17 0x0000000000407c48 in handle_builtin ()
#18 0x0000000000408df2 in cmd_main ()
#19 0x00000000004078b5 in main ()
The process was killed with a signal, which caused the signal handler to
kick in and try to free the data structures after we have unlinked the
locks. It then deadlocks while calling free(3P).
The root cause of this is that it is not allowed to call certain
functions in async-signal handlers, as specified by signal-safety(7).
Next to most I/O functions, this list of disallowed functions also
includes memory-handling functions like malloc(3P) and free(3P) because
they may not be reentrant. As a result, if we execute such functions in
the signal handler, then they may operate on inconsistent state and fail
in unexpected ways.
Fix this bug by not calling non-async-signal-safe functions when running
in the signal handler. We're about to re-raise the signal anyway and
will thus exit, so it's not much of a problem to keep the string list of
lockfiles untouched. Note that it's fine though to call unlink(2), so
we'll still clean up the lockfiles correctly.
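A sketch of the constraint (not the exact patch): the handler limits
itself to async-signal-safe calls and deliberately leaves the tracking
list alone:

    static void unlock_pack_on_signal(int signo)
    {
        /*
         * unlink_lockfiles() is a stand-in for "unlink(2) each recorded
         * lockfile"; no malloc()/free(), no string_list_clear() here.
         */
        unlink_lockfiles();
        sigchain_pop(signo);
        raise(signo);
    }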
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Reviewed-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use static strings for constant parts of the sentences. They are all
turned into "cannot be used together".
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Reviewed-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git fetch --set-upstream" did not check if there is a current
branch, leading to a segfault when it is run on a detached HEAD,
which has been corrected.
* ab/fetch-set-upstream-while-detached:
pull, fetch: fix segfault in --set-upstream option
The 'git fetch' and 'git pull' commands parse the index in order to
determine if submodules exist. Without command_requires_full_index=0,
this will expand a sparse index, causing slow performance even when
there is no new data to fetch.
The .gitmodules file will never be inside a sparse directory entry, and
even if it was, the index_name_pos() method would expand the sparse
index if needed as we search for the path by name. These commands do not
iterate over the index, which is the typical thing we are careful about
when integrating with the sparse index.
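The opt-in itself is the usual pair of lines in the builtin (shown here
out of context):

    prepare_repo_settings(the_repository);
    the_repository->settings.command_requires_full_index = 0;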
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a segfault in the --set-upstream option added in
24bc1a1292 (pull, fetch: add --set-upstream option, 2019-08-19), first
released in v2.24.0.
The code added there did not do the same checking we do for "git
branch" itself since 8efb8899cf (branch: segfault fixes and
validation, 2013-02-23), which in turn fixed the same sort of segfault
I'm fixing now in "git branch --set-upstream-to", see
6183d826ba (branch: introduce --set-upstream-to, 2012-08-20).
The warning message I'm adding here is an amalgamation of the error
added for "git branch" in 8efb8899cf, and the error output
install_branch_config() itself emits, i.e. it trims "refs/heads/" from
the name and says "branch X on remote", not "branch refs/heads/X on
remote".
I think it would make more sense to simply die() here, but in the
other checks for --set-upstream added in 24bc1a1292 we issue a
warning() instead. Let's do the same here for consistency for now.
There was an earlier submitted alternate way of fixing this in [1];
because that patch broke threading with the original report at [2], I
didn't notice it before authoring this version. I think the more
detailed warning message here is better, and we should also have tests
for this behavior.
The --no-rebase option to "git pull" is needed as of the recently
merged 7d0daf3f12 (Merge branch 'en/pull-conflicting-options',
2021-08-30).
1. https://lore.kernel.org/git/20210706162238.575988-1-clemens@endorphin.org/
2. https://lore.kernel.org/git/CAG6gW_uHhfNiHGQDgGmb1byMqBA7xa8kuH1mP-wAPEe5Tmi2Ew@mail.gmail.com/
Reported-by: Clemens Fruhwirth <clemens@endorphin.org>
Reported-by: Jan Pokorný <poki@fnusa.cz>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Refuse to fetch into the currently checked out branch of any working
tree, not just the current one.
Fixes this previously reported bug:
https://lore.kernel.org/git/cb957174-5e9a-5603-ea9e-ac9b58a2eaad@mathema.de/
As a side effect of using find_shared_symref, we’ll also refuse the
fetch when we’re on a detached HEAD because we’re rebasing or bisecting
on the branch in question. This seems like a sensible change.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Documentation/CodingGuidelines says “do not end error messages with a
full stop” and “do not capitalize the first word”. Clean up existing
messages, some of which we will be touching in later steps in the
series, that deviate from these rules in this file, as a preparation for
the main part of the topic.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The run-command API has been updated so that the callers can easily
ask the file descriptors open for packfiles to be closed immediately
before spawning commands that may trigger auto-gc.
* js/run-command-close-packs:
Close object store closer to spawning child processes
run_auto_maintenance(): implicitly close the object store
run-command: offer to close the object store before running
run-command: prettify the `RUN_COMMAND_*` flags
pull: release packs before fetching
commit-graph: when closing the graph, also release the slab
Optimize code that handles large number of refs in the "git fetch"
code path.
* ps/fetch-optim:
fetch: avoid second connectivity check if we already have all objects
fetch: merge fetching and consuming refs
fetch: refactor fetch refs to be more extendable
fetch-pack: optimize loading of refs via commit graph
connected: refactor iterator to return next object ID directly
fetch: avoid unpacking headers in object existence check
fetch: speed up lookup of want refs via commit-graph
Code clean up to migrate callers from older advice_config[] based
API to newer advice_if_enabled() and advice_enabled() API.
* ab/retire-advice-config:
advice: move advice.graftFileDeprecated squashing to commit.[ch]
advice: remove use of global advice_add_embedded_repo
advice: remove read uses of most global `advice_` variables
advice: add enum variants for missing advice variables
"git fetch --quiet" optimization to avoid useless computation of
info that will never be displayed.
* ps/fetch-omit-formatting-under-quiet:
fetch: skip formatting updated refs with `--quiet`
Before spawning the auto maintenance, we need to make sure that we
release all open file handles to all the `.pack` files (and MIDX files
and commit-graph files and...) so that the maintenance process has the
freedom to delete those files.
So far, we did this manually every time before calling
`run_auto_maintenance()`. With the new `close_object_store` flag, we can
do that implicitly in that function, which is more robust because future
callers won't be able to forget to close the object store.
Note: this changes behavior slightly, as we previously _always_ closed
the object store, but now we only close the object store when actually
running the auto maintenance. In practice, this should not matter (if
anything, it might speed up operations where auto maintenance is
disabled).
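A sketch of what the flag amounts to; the flag's exact name and where
it lives in the run-command API may differ from what is shown here:

    int run_auto_maintenance(int quiet)
    {
            struct child_process maint = CHILD_PROCESS_INIT;

            maint.git_cmd = 1;
            /* release packs, MIDX and commit-graph handles implicitly */
            maint.close_object_store = 1;
            strvec_pushl(&maint.args, "maintenance", "run", "--auto",
                         quiet ? "--quiet" : "--no-quiet", NULL);

            return run_command(&maint);
    }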
Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When fetching refs, we are doing two connectivity checks:
- The first one is done such that we can skip fetching refs in the
  case where we already have all objects referenced by the updated
  set of refs.
- The second one verifies that we have all objects after we have
  fetched objects.
We always execute both connectivity checks, but this is wasteful in case
the first connectivity check already notices that we have all objects
locally available.
Skip the second connectivity check in case we already had all objects
available. This gives us a nice speedup when doing a mirror-fetch in a
repository with about 2.3M refs where the fetching repo already has all
objects:
Benchmark #1: HEAD~: git-fetch
  Time (mean ± σ): 30.025 s ± 0.081 s [User: 27.070 s, System: 4.933 s]
  Range (min … max): 29.900 s … 30.111 s 5 runs
Benchmark #2: HEAD: git-fetch
  Time (mean ± σ): 25.574 s ± 0.177 s [User: 22.855 s, System: 4.683 s]
  Range (min … max): 25.399 s … 25.765 s 5 runs
Summary
  'HEAD: git-fetch' ran
    1.17 ± 0.01 times faster than 'HEAD~: git-fetch'
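A condensed sketch of the resulting control flow; the helper names and
the way the flag is threaded through builtin/fetch.c are simplified
here:

    static int connectivity_checked;

    static void prefetch_connectivity_check(struct ref *ref_map)
    {
            struct ref *rm = ref_map;
            struct check_connected_options opt = CHECK_CONNECTED_INIT;

            opt.quiet = 1;
            /* everything already local: nothing to fetch, nothing to re-verify */
            if (!check_connected(iterate_ref_map, &rm, &opt))
                    connectivity_checked = 1;
    }

    static int postfetch_connectivity_check(struct ref *ref_map)
    {
            struct ref *rm = ref_map;

            if (connectivity_checked)
                    return 0; /* skip the redundant second check */
            if (check_connected(iterate_ref_map, &rm, NULL))
                    return error(_("remote did not send all necessary objects"));
            return 0;
    }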
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The functions `fetch_refs()` and `consume_refs()` must always be called
together such that we first obtain all missing objects and then update
our local refs to match the remote refs. In a subsequent patch, we'll
further require that `fetch_refs()` must always be called before
`consume_refs()` such that it can correctly assert that we have all
objects after the fetch given that we're about to move the connectivity
check.
Make this requirement explicit by merging both functions into a single
`fetch_and_consume_refs()` function.
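A minimal sketch of the merged entry point; the parameter list is
simplified compared to the actual code in builtin/fetch.c:

    static int fetch_and_consume_refs(struct transport *transport,
                                      struct ref *ref_map)
    {
            int ret = fetch_refs(transport, ref_map);

            if (!ret)
                    ret = consume_refs(transport, ref_map);
            return ret;
    }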
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Refactor `fetch_refs()` code to make it more extendable by explicitly
handling error cases. The refactored code should behave the same.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The object ID iterator used by the connectivity checks returns the next
object ID via an out-parameter and then uses a return code to indicate
whether an item was found. This is a bit roundabout: instead of a
separate error code, we can just return the next object ID directly and
use a `NULL` pointer as an indicator that the iterator has no items left.
Furthermore, this avoids a copy of the object ID.
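With simplified, illustrative typedef names, the shape of the change
is:

    /* before: fill an out-parameter, signal exhaustion via return code */
    typedef int (*old_oid_iterate_fn)(void *cb_data, struct object_id *oid);

    /* after: hand back the next object ID, or NULL when none are left */
    typedef const struct object_id *(*new_oid_iterate_fn)(void *cb_data);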
Refactor the iterator and all its implementations to return object IDs
directly. This brings a tiny performance improvement when doing a
mirror-fetch of a repository with about 2.3M refs:
Benchmark #1: 328dc58b49919c43897240f2eabfa30be2ce32a4~: git-fetch
  Time (mean ± σ): 30.110 s ± 0.148 s [User: 27.161 s, System: 5.075 s]
  Range (min … max): 29.934 s … 30.406 s 10 runs
Benchmark #2: 328dc58b49919c43897240f2eabfa30be2ce32a4: git-fetch
  Time (mean ± σ): 29.899 s ± 0.109 s [User: 26.916 s, System: 5.104 s]
  Range (min … max): 29.696 s … 29.996 s 10 runs
Summary
  '328dc58b49919c43897240f2eabfa30be2ce32a4: git-fetch' ran
    1.01 ± 0.01 times faster than '328dc58b49919c43897240f2eabfa30be2ce32a4~: git-fetch'
While this 1% speedup could be labelled as statistically insignificant,
it is consistent on my machine. Furthermore, this is an end-to-end test,
so the improvement in the connectivity check itself is expected to be
more significant.
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When updating local refs after the fetch has transferred all objects, we
do an object existence test as a safety guard to avoid updating a ref to
an object which we don't have. We do so via `oid_object_info()`: if it
returns an error, then we know the object does not exist.
One side effect of `oid_object_info()` is that it parses the object's
type, and to do so it must unpack the object header. This is completely
pointless: we don't care for the type, but only want to assert that the
object exists.
Refactor the code to use `repo_has_object_file()`, which both makes the
code's intent clearer and is also faster because it does not unpack
object headers. In a real-world repo with 2.3M refs, this results in a
small speedup when doing a mirror-fetch:
Benchmark #1: HEAD~: git-fetch
  Time (mean ± σ): 33.686 s ± 0.176 s [User: 30.119 s, System: 5.262 s]
  Range (min … max): 33.512 s … 33.944 s 5 runs
Benchmark #2: HEAD: git-fetch
  Time (mean ± σ): 31.247 s ± 0.195 s [User: 28.135 s, System: 5.066 s]
  Range (min … max): 30.948 s … 31.472 s 5 runs
Summary
  'HEAD: git-fetch' ran
    1.08 ± 0.01 times faster than 'HEAD~: git-fetch'
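The gist of the change for a given `struct ref *ref`, with surrounding
error handling omitted:

    /* before: unpacks the object header just to learn the type */
    if (oid_object_info(the_repository, &ref->new_oid, NULL) < 0)
            die(_("object %s not found"), oid_to_hex(&ref->new_oid));

    /* after: a plain existence check, no header decoding required */
    if (!repo_has_object_file(the_repository, &ref->new_oid))
            die(_("object %s not found"), oid_to_hex(&ref->new_oid));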
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When updating our local refs based on the refs fetched from the remote,
we need to iterate through all requested refs and load their respective
commits such that we can determine whether they need to be appended to
FETCH_HEAD or not. In cases where we're fetching from a remote with
exceedingly many refs, resolving these refs can be quite expensive given
that we repeatedly need to unpack object headers for each of the
referenced objects.
Speed this up by opportunistically trying to resolve object IDs via the
commit graph. We only do so for refs outside of "refs/tags": more likely
than not, these point to commits anyway, and this lets us avoid having
to unpack object headers completely in case the object is a commit that
is part of the commit-graph. This significantly
speeds up mirror-fetches in a real-world repository with
2.3M refs:
Benchmark #1: HEAD~: git-fetch
  Time (mean ± σ): 56.482 s ± 0.384 s [User: 53.340 s, System: 5.365 s]
  Range (min … max): 56.050 s … 57.045 s 5 runs
Benchmark #2: HEAD: git-fetch
  Time (mean ± σ): 33.727 s ± 0.170 s [User: 30.252 s, System: 5.194 s]
  Range (min … max): 33.452 s … 33.871 s 5 runs
Summary
  'HEAD: git-fetch' ran
    1.67 ± 0.01 times faster than 'HEAD~: git-fetch'
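A sketch of the opportunistic lookup described above for each fetched
ref `rm`, assuming the `lookup_commit_in_graph()` helper; the
surrounding logic in store_updated_refs() is more involved:

    struct commit *commit = NULL;

    /*
     * Refs outside refs/tags/ almost always point at commits, so try
     * the commit-graph first and skip unpacking the object header.
     */
    if (!starts_with(rm->name, "refs/tags/"))
            commit = lookup_commit_in_graph(the_repository, &rm->old_oid);

    /* fall back to the regular (slower) lookup otherwise */
    if (!commit)
            commit = lookup_commit_reference_gently(the_repository,
                                                    &rm->old_oid, 1);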
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When fetching, Git will by default print a list of all updated refs in a
nicely formatted table. In order to come up with this table, Git needs
to iterate refs twice: first to determine the maximum column width, and
a second time to actually format these changed refs.
While this table will not be printed in case the user passes `--quiet`,
we still go out of our way and do all these steps. In fact, we even do
more work than without the flag: normally, we skip all references in the
column width computation which have not been updated, but with `--quiet`
we now compute widths for all refs.
Fix this issue by completely skipping both preparation of the format and
formatting data for display in case the user passes `--quiet`, improving
performance especially with many refs. The following benchmark shows a
nice speedup for a quiet mirror-fetch in a repository with 2.3M refs:
Benchmark #1: HEAD~: git-fetch
  Time (mean ± σ): 26.929 s ± 0.145 s [User: 24.194 s, System: 4.656 s]
  Range (min … max): 26.692 s … 27.068 s 5 runs
Benchmark #2: HEAD: git-fetch
  Time (mean ± σ): 25.189 s ± 0.094 s [User: 22.556 s, System: 4.606 s]
  Range (min … max): 25.070 s … 25.314 s 5 runs
Summary
  'HEAD: git-fetch' ran
    1.07 ± 0.01 times faster than 'HEAD~: git-fetch'
While at it, this patch also fixes `adjust_refcol_width()` such that it
skips unchanged refs in case the user passed `--quiet`, where verbosity
is negative. While this function won't be called anymore in that case,
the change brings the code in line with its comment. Furthermore,
needless `verbosity >= 0` checks are removed in `store_updated_refs()`:
we never print to the `note` buffer anymore when `verbosity < 0`, so we
won't end up in that code block anyway.
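A minimal sketch of the early exit, using a hypothetical helper name
(`maybe_prepare_display()`) and an assumed name for the
width-computation function; the real change guards the existing calls
in store_updated_refs() directly:

    static void maybe_prepare_display(struct ref *ref_map, int verbosity)
    {
            if (verbosity < 0)
                    return; /* --quiet: nothing will be printed, compute nothing */
            prepare_format_display(ref_map);
    }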
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In c4a09cc9cc (Merge branch 'hw/advise-ng', 2020-03-25), a new API for
accessing advice variables was introduced and deprecated `advice_config`
in favor of a new array, `advice_setting`.
This patch ports all but two of the uses that read the status of the
global `advice_` variables over to the new `advice_enabled()` API. We'll
deal with `advice_add_embedded_repo` and `advice_graft_file_deprecated`
separately.
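The typical shape of the conversion (the advice text shown here is
illustrative only):

    /* before: consult the global variable directly */
    if (advice_fetch_show_forced_updates)
            advise(_("use '--show-forced-updates' to see forced updates"));

    /* after: go through the advice_enabled() API */
    if (advice_enabled(ADVICE_FETCH_SHOW_FORCED_UPDATES))
            advise(_("use '--show-forced-updates' to see forced updates"));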
Signed-off-by: Ben Boeckel <mathstuf@gmail.com>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Bugfix for common ancestor negotiation recently introduced in "git
push" code path.
* jt/push-negotiation-fixes:
fetch: die on invalid --negotiation-tip hash
send-pack: fix push nego. when remote has refs
send-pack: fix push.negotiate with remote helper
Code recently added to support common ancestry negotiation during
"git push" did not sanity check its arguments carefully enough.
* ab/fetch-negotiate-segv-fix:
fetch: fix segfault in --negotiate-only without --negotiation-tip=*
fetch: document the --negotiate-only option
send-pack.c: move "no refs in common" abort earlier
If a full hexadecimal hash is given as a --negotiation-tip to "git
fetch", and that hash does not correspond to an object, "git fetch" will
segfault if --negotiate-only is given and will silently ignore that hash
otherwise. Make these cases fatal errors, just like the case when an
invalid ref name or abbreviated hash is given.
While at it, mark the error messages as translatable.
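A sketch of the tightened check for a full hexadecimal
--negotiation-tip (surrounding glob handling omitted; the variable name
`tip` and the exact messages are assumptions):

    struct object_id oid;

    if (get_oid(tip, &oid))
            die(_("%s is not a valid object"), tip);
    if (!has_object(the_repository, &oid, 0))
            die(_("the object %s does not exist"), tip);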
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The recent --negotiate-only option would segfault in the call to
oid_array_for_each() in negotiate_using_fetch() unless one or more
--negotiation-tip=* options were provided.
All of the other tests for the feature combine both, but nothing was
checking this assumption; let's do that and add a test for it. This
fixes a bug in 9c1e657a8f (fetch: teach independent negotiation (no
packfile), 2021-05-04).
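The added guard, roughly as it would appear near the option validation
in cmd_fetch() in builtin/fetch.c (exact wording of the message may
differ):

    if (negotiate_only && !negotiation_tip.nr)
            die(_("--negotiate-only needs one or more --negotiation-tip=*"));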
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>