Commit graph

350 commits

Author SHA1 Message Date
Junio C Hamano 3b9ec27919 Merge branch 'js/trace2-fetch-push'
Dev support.

* js/trace2-fetch-push:
  transport: push codepath can take arbitrary repository
  push: add trace2 instrumentation
  fetch: add trace2 instrumentation
2019-10-15 13:48:03 +09:00
Junio C Hamano 5efabc7ed9 Merge branch 'ew/hashmap'
Code clean-up of the hashmap API, both users and implementation.

* ew/hashmap:
  hashmap_entry: remove first member requirement from docs
  hashmap: remove type arg from hashmap_{get,put,remove}_entry
  OFFSETOF_VAR macro to simplify hashmap iterators
  hashmap: introduce hashmap_free_entries
  hashmap: hashmap_{put,remove} return hashmap_entry *
  hashmap: use *_entry APIs for iteration
  hashmap_cmp_fn takes hashmap_entry params
  hashmap_get{,_from_hash} return "struct hashmap_entry *"
  hashmap: use *_entry APIs to wrap container_of
  hashmap_get_next returns "struct hashmap_entry *"
  introduce container_of macro
  hashmap_put takes "struct hashmap_entry *"
  hashmap_remove takes "const struct hashmap_entry *"
  hashmap_get takes "const struct hashmap_entry *"
  hashmap_add takes "struct hashmap_entry *"
  hashmap_get_next takes "const struct hashmap_entry *"
  hashmap_entry_init takes "struct hashmap_entry *"
  packfile: use hashmap_entry in delta_base_cache_entry
  coccicheck: detect hashmap_entry.hash assignment
  diff: use hashmap_entry_init on moved_entry.ent
2019-10-15 13:48:02 +09:00
Junio C Hamano d96e31e390 Merge branch 'js/fetch-jobs'
"git fetch --jobs=<n>" allowed <n> parallel jobs when fetching
submodules, but this did not apply to "git fetch --multiple" that
fetches from multiple remote repositories.  It now does.

* js/fetch-jobs:
  fetch: let --jobs=<n> parallelize --multiple, too
2019-10-15 13:48:00 +09:00
Junio C Hamano 1d8b0dfa8a Merge branch 'ms/fetch-follow-tag-optim'
The code used in following tags in "git fetch" has been optimized.

* ms/fetch-follow-tag-optim:
  fetch: use oidset to keep the want OIDs for faster lookup
2019-10-07 11:32:57 +09:00
Eric Wong e2b5038d87 hashmap_entry: remove first member requirement from docs
Comments stating that "struct hashmap_entry" must be the first
member in a struct are no longer valid.

Suggested-by: Phillip Wood <phillip.wood123@gmail.com>
Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07 10:20:12 +09:00
Eric Wong c8e424c9c9 hashmap: introduce hashmap_free_entries
`hashmap_free_entries' behaves like `container_of' and passes
the offset of the hashmap_entry struct to the internal
`hashmap_free_' function, allowing the function to free any
struct pointer regardless of where the hashmap_entry field
is located.

`hashmap_free' no longer takes any arguments aside from
the hashmap itself.
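
The underlying idiom is the classic container_of pattern.  A minimal,
self-contained sketch of that pattern (not the exact hashmap.h code;
`struct remote_entry' is made up for the example):

    #include <stddef.h>
    #include <stdlib.h>

    /* recover the embedding struct from a pointer to one of its members */
    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct hashmap_entry { unsigned int hash; }; /* stand-in for the real type */

    struct remote_entry {
            const char *name;
            struct hashmap_entry ent; /* no longer has to be the first member */
    };

    /* free the whole entry given only a pointer to its embedded hashmap_entry */
    static void free_remote(struct hashmap_entry *e)
    {
            free(container_of(e, struct remote_entry, ent));
    }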

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07 10:20:11 +09:00
Eric Wong 939af16eac hashmap_cmp_fn takes hashmap_entry params
Another step in eliminating the requirement of hashmap_entry
being the first member of a struct.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07 10:20:11 +09:00
Eric Wong f23a465132 hashmap_get{,_from_hash} return "struct hashmap_entry *"
Update callers to use hashmap_get_entry, hashmap_get_entry_from_hash
or container_of as appropriate.

This is another step towards eliminating the requirement of
hashmap_entry being the first field in a struct.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07 10:20:11 +09:00
Eric Wong b94e5c1df6 hashmap_add takes "struct hashmap_entry *"
This is less error-prone than "void *" as the compiler now
detects invalid types being passed.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07 10:20:10 +09:00
Eric Wong d22245a2e3 hashmap_entry_init takes "struct hashmap_entry *"
C compilers do type checking to make life easier for us.  So
rely on that and update all hashmap_entry_init callers to pass
"struct hashmap_entry *" to avoid future bugs while improving
safety and readability.
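
A rough sketch of the calling convention after this series of changes
(struct and field names are illustrative; assumes hashmap.h plus the
usual xmalloc() and strhash() helpers):

    struct dir_entry {
            struct hashmap_entry ent;
            const char *name;
    };

    static void add_dir(struct hashmap *map, const char *name)
    {
            struct dir_entry *e = xmalloc(sizeof(*e));

            e->name = name;
            hashmap_entry_init(&e->ent, strhash(name)); /* typed, no more void * */
            hashmap_add(map, &e->ent);
    }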

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-07 10:20:09 +09:00
Johannes Schindelin d54dea77db fetch: let --jobs=<n> parallelize --multiple, too
So far, `--jobs=<n>` only parallelizes submodule fetches/clones, not
`--multiple` fetches, which is unintuitive, given that the option's name
does not say anything about submodules in particular.

Let's change that. With this patch, fetches from multiple remotes are
parallelized, too.

For backwards-compatibility (and to prepare for a use case where
submodule and multiple-remote fetches may need different parallelization
limits), the config setting `submodule.fetchJobs` still only controls
the submodule part of `git fetch`, while the newly-introduced setting
`fetch.parallel` controls both (but can be overridden for submodules
with `submodule.fetchJobs`).
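
A sketch of the intended precedence when the configuration is read
(not the exact builtin/fetch.c hunk; variable names and the default
value are illustrative):

    int fetch_parallel = 1; /* assumed default for this sketch */
    int submodule_jobs;

    git_config_get_int("fetch.parallel", &fetch_parallel);
    /* submodule.fetchJobs, when set, overrides fetch.parallel for submodules */
    if (git_config_get_int("submodule.fetchjobs", &submodule_jobs))
            submodule_jobs = fetch_parallel;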

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-06 07:35:58 +09:00
Josh Steadmon 5fc31180d8 fetch: add trace2 instrumentation
Add trace2 regions to fetch-pack.c and builtins/fetch.c to better track
time spent in the various phases of a fetch:

* listing refs
* negotiation for protocol versions v0-v2
* fetching refs
* consuming refs
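
For example, around the ref advertisement the added instrumentation
looks roughly like this (a sketch, not a verbatim hunk; the region
names are illustrative):

    const struct ref *remote_refs;

    trace2_region_enter("fetch", "remote_refs", the_repository);
    remote_refs = transport_get_remote_refs(transport, &ref_prefixes);
    trace2_region_leave("fetch", "remote_refs", the_repository);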

Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-10-03 10:13:17 +09:00
Junio C Hamano 5a5350940b Merge branch 'ds/commit-graph-on-fetch'
A configuration variable tells "git fetch" to write the commit
graph after finishing.

* ds/commit-graph-on-fetch:
  fetch: add fetch.writeCommitGraph config setting
2019-09-30 13:19:32 +09:00
Junio C Hamano 627b826834 Merge branch 'md/list-objects-filter-combo'
The list-objects-filter API (used to create a sparse/lazy clone)
learned to take a combined filter specification.

* md/list-objects-filter-combo:
  list-objects-filter-options: make parser void
  list-objects-filter-options: clean up use of ALLOC_GROW
  list-objects-filter-options: allow mult. --filter
  strbuf: give URL-encoding API a char predicate fn
  list-objects-filter-options: make filter_spec a string_list
  list-objects-filter-options: move error check up
  list-objects-filter: implement composite filters
  list-objects-filter-options: always supply *errbuf
  list-objects-filter: put omits set in filter struct
  list-objects-filter: encapsulate filter components
2019-09-18 11:50:09 -07:00
Junio C Hamano b9ac6c59b8 Merge branch 'cc/multi-promisor'
Teach the lazy clone machinery that there can be more than one
promisor remote and consult them in order when downloading missing
objects on demand.

* cc/multi-promisor:
  Move core_partial_clone_filter_default to promisor-remote.c
  Move repository_format_partial_clone to promisor-remote.c
  Remove fetch-object.{c,h} in favor of promisor-remote.{c,h}
  remote: add promisor and partial clone config to the doc
  partial-clone: add multiple remotes in the doc
  t0410: test fetching from many promisor remotes
  builtin/fetch: remove unique promisor remote limitation
  promisor-remote: parse remote.*.partialclonefilter
  Use promisor_remote_get_direct() and has_promisor_remote()
  promisor-remote: use repository_format_partial_clone
  promisor-remote: add promisor_remote_reinit()
  promisor-remote: implement promisor_remote_get_direct()
  Add initial support for many promisor remotes
  fetch-object: make functions return an error code
  t0410: remove pipes after git commands
2019-09-18 11:50:09 -07:00
Masaya Suzuki b7e2d8bca5 fetch: use oidset to keep the want OIDs for faster lookup
During git-fetch, the client checks if the advertised tags' OIDs are
already in the fetch request's want OID set. This check is done in a
linear scan. For a repository that has a lot of refs, repeating this
scan takes 15+ minutes. In order to speed this up, create an oidset for
other refs' OIDs.
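
A sketch of the resulting lookup (assumes the oidset.h API; not the
exact hunk):

    struct oidset want = OIDSET_INIT;
    struct ref *rm;

    /* collect the want OIDs once, up front */
    for (rm = ref_map; rm; rm = rm->next)
            oidset_insert(&want, &rm->old_oid);

    /* each advertised tag is then an O(1) membership test, not a scan */
    int already_wanted = oidset_contains(&want, &ref->old_oid);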

Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-09-16 13:02:50 -07:00
Derrick Stolee 50f26bd035 fetch: add fetch.writeCommitGraph config setting
The commit-graph feature is now on by default, and is being
written during 'git gc' by default. Typically, Git only writes
a commit-graph when a 'git gc --auto' command passes the gc.auto
setting to actually do work. This means that a commit-graph will
typically fall behind the commits that are being used every day.

To stay updated with the latest commits, add a step to 'git
fetch' to write a commit-graph after fetching new objects. The
fetch.writeCommitGraph config setting enables writing a split
commit-graph, so on average the cost of writing this file is
very small. Occasionally, the commit-graph chain will collapse
to a single level, and this could be slow for very large repos.

For additional use, adjust the default to be true when
feature.experimental is enabled.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-09-03 12:06:14 -07:00
Corentin BOMPARD 24bc1a1292 pull, fetch: add --set-upstream option
Add the --set-upstream option to git pull/fetch
which lets the user set the upstream configuration
(branch.<current-branch-name>.merge and
branch.<current-branch-name>.remote) for the current branch.

A typical use-case is:

    git clone http://example.com/my-public-fork
    git remote add main http://example.com/project-main-repo
    git pull --set-upstream main master

or, instead of the last line:

    git fetch --set-upstream main master
    git merge # or git rebase

This is mostly equivalent to cloning project-main-repo (which sets
upstream) and then "git remote add" my-public-fork, but may feel more
natural for people using a hosting system which allows forking from
the web UI.

This functionality is analogous to "git push --set-upstream".

Signed-off-by: Corentin BOMPARD <corentin.bompard@etu.univ-lyon1.fr>
Signed-off-by: Nathan BERBEZIER <nathan.berbezier@etu.univ-lyon1.fr>
Signed-off-by: Pablo CHABANNE <pablo.chabanne@etu.univ-lyon1.fr>
Signed-off-by: Matthieu Moy <git@matthieu-moy.fr>
Patch-edited-by: Matthieu Moy <git@matthieu-moy.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-08-19 13:05:58 -07:00
Jean-Noël Avila 182f59daf0 l10n: reformat some localized strings for v2.23.0
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-08-06 12:43:43 -07:00
Junio C Hamano cde9a64ea3 Merge branch 'ds/fetch-disable-force-notice'
"git fetch" and "git pull" reports when a fetch results in
non-fast-forward updates to let the user notice unusual situation.
The commands learned "--no-shown-forced-updates" option to disable
this safety feature.

* ds/fetch-disable-force-notice:
  pull: add --[no-]show-forced-updates passthrough
  fetch: warn about forced updates in branch listing
  fetch: add --[no-]show-forced-updates argument
2019-07-09 15:25:46 -07:00
Junio C Hamano 892d3fb71e Merge branch 'nd/fetch-multi-gc-once'
"git fetch" that grabs from a group of remotes learned to run the
auto-gc only once at the very end.

* nd/fetch-multi-gc-once:
  fetch: only run 'gc' once when fetching multiple remotes
2019-07-09 15:25:42 -07:00
Junio C Hamano 5cb7c73589 Merge branch 'ds/close-object-store'
The commit-graph file is now part of the "files that the runtime
may keep open file descriptors on, all of which would need to be
closed when done with the object store", and the file descriptor to
an existing commit-graph file now is closed before "gc" finalizes a
new instance to replace it.

* ds/close-object-store:
  packfile: rename close_all_packs to close_object_store
  packfile: close commit-graph in close_all_packs
  commit-graph: use raw_object_store when closing
2019-07-09 15:25:37 -07:00
Junio C Hamano dbf491e9e9 Merge branch 'fc/fetch-with-import-fix'
Code restructuring during 2.20 period broke fetching tags via
"import" based transports.

* fc/fetch-with-import-fix:
  fetch: fix regression with transport helpers
  fetch: make the code more understandable
  fetch: trivial cleanup
  t5801 (remote-helpers): add test to fetch tags
  t5801 (remote-helpers): cleanup refspec stuff
2019-07-09 15:25:34 -07:00
Matthew DeVore cf9ceb5a12 list-objects-filter-options: make filter_spec a string_list
Make the filter_spec string a string_list rather than a raw C string.
The list of strings must be concatenated together to make a complete
filter_spec. A future patch will use this capability to build "combine:"
filter specs gradually.

A strbuf would seem to be a more natural choice for this object, but it
unfortunately requires initialization besides just zero'ing out the
memory.  This means that all container structs, and all containers of
those structs, etc., also require initialization. Initializing them
all would be more cumbersome than simply using a string_list, which
behaves properly when its contents are zero'd.
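
That zero-initialization property is what makes usage like the
following safe (a sketch; assumes string_list.h and the new
filter_spec field):

    struct list_objects_filter_options opts = { 0 }; /* no init call needed */

    /* a zero'd string_list is a valid, empty list, so this just works */
    string_list_append(&opts.filter_spec, "blob:none");
    string_list_append(&opts.filter_spec, "tree:3");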

For the purposes of code simplification, change behavior in how filter
specs are conveyed over the protocol: do not normalize the tree:<depth>
filter specs since there should be no server in existence that supports
tree:# but not tree:#k etc.

Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Matthew DeVore <matvore@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-28 08:41:53 -07:00
Christian Couder 5e46139376 builtin/fetch: remove unique promisor remote limitation
As the infrastructure for more than one promisor remote
has been introduced in previous patches, we can remove
code that forbids the registration of more than one
promisor remote.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-25 14:05:37 -07:00
Christian Couder fa3d1b63e8 promisor-remote: parse remote.*.partialclonefilter
This makes it possible to specify a different partial clone
filter for each promisor remote.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-25 14:05:37 -07:00
Christian Couder b14ed5adaf Use promisor_remote_get_direct() and has_promisor_remote()
Instead of using the repository_format_partial_clone global
and fetch_objects() directly, let's use has_promisor_remote()
and promisor_remote_get_direct().

This way all the configured promisor remotes will be taken
into account, not only the one specified by
extensions.partialClone.

Also when cloning or fetching using a partial clone filter,
remote.origin.promisor will be set to "true" instead of
setting extensions.partialClone to "origin". This makes it
possible to use many promisor remotes just by fetching from
them.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-25 14:05:37 -07:00
Derrick Stolee 377444b440 fetch: warn about forced updates in branch listing
The --[no-]show-forced-updates option in 'git fetch' can be confusing
for some users, especially if it is enabled via a config setting and not
by argument. Add advice to warn the user that the (forced update)
messages were not listed.

Additionally, warn users when the forced update check takes longer
than ten seconds, and recommend that they disable the check. These
messages can be disabled by the advice.fetchShowForcedUpdates config
setting.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-21 09:38:29 -07:00
Derrick Stolee cdbd70c437 fetch: add --[no-]show-forced-updates argument
After updating a set of remote refs during a 'git fetch', we walk the
commits in the new ref value and not in the old ref value to discover
if the update was a forced update. This results in two things happening
during the command:

 1. The line including the ref update has an additional "(forced-update)"
    marker at the end.

 2. The reflog for that remote branch includes a bit saying that the
    update is a forced update.

For many situations, this forced-update message happens infrequently, or
is a small bit of information among many ref updates. Many users ignore
these messages, but the calculation required here slows down their fetches
significantly. Keep in mind that they do not have the opportunity to
calculate a commit-graph file containing the newly-fetched commits, so
these comparisons can be very slow.
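
For reference, the check that becomes optional boils down to an
ancestry test, roughly (a sketch, not the exact builtin/fetch.c code):

    struct commit *old_commit = lookup_commit_reference_gently(
            the_repository, &ref->old_oid, 1);
    struct commit *new_commit = lookup_commit_reference_gently(
            the_repository, &ref->new_oid, 1);
    /* forced update: the old tip is not an ancestor of the new tip */
    int forced_update = old_commit && new_commit &&
                        !in_merge_bases(old_commit, new_commit);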

Add a '--[no-]show-forced-updates' option that allows a user to skip this
calculation. The only permanent result is dropping the forced-update bit
in the reflog.

Include a new fetch.showForcedUpdates config setting that allows this
behavior without including the argument in every command. The config
setting is overridden by the command-line arguments.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-21 09:38:29 -07:00
Nguyễn Thái Ngọc Duy c3d6b70338 fetch: only run 'gc' once when fetching multiple remotes
In multiple remotes mode, git-fetch is launched for n-1 remotes and the
last remote is handled by the current process. Each of these processes
will in turn run 'gc' at the end.

This is not really a problem because even if multiple 'gc --auto' processes
run at the same time we still handle it correctly. It does show multiple
"auto packing in the background" messages though. And we may waste some
resources when gc actually runs because we still do some work before
checking the lock and moving to the background.

So let's try to avoid that. We should only need one 'gc' run after all
objects and references are added anyway. Add a new option --no-auto-gc
that will be used by those n-1 processes. 'gc --auto' will always run on
the main fetch process (*).

(*) even if we fetch remotes in parallel at some point in the future, this
    should still be fine because we should "join" all those processes
    before this step.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-19 20:56:44 -07:00
Derrick Stolee 2d511cfc0b packfile: rename close_all_packs to close_object_store
The close_all_packs() method is now responsible for more than just pack-files.
It also closes the commit-graph and the multi-pack-index. Rename the function
to be more descriptive of its larger role. The name also fits because the
input parameter is a raw_object_store.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-12 11:33:54 -07:00
Felipe Contreras f80d922355 fetch: fix regression with transport helpers
Commit e198b3a740 changed the behavior of fetch with regard to tags.
Before, null oids were not ignored; now they are, regardless of whether
the refs have been explicitly cleared or not.

  e198b3a740 (fetch: replace string-list used as a look-up table with a hashmap)

When using a transport helper the oids can certainly be null. So now
tags are ignored and fetching them is impossible.

This patch fixes that by having a specific flag that is set only when we
explicitly want to ignore the refs, restoring the original behavior.

Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-04 11:28:58 -07:00
Felipe Contreras 9528b80b1a fetch: make the code more understandable
The comment makes it seem as if the condition is the other way around.
The exception is when the oid is null, so check for that.

Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-04 11:28:58 -07:00
Felipe Contreras a8363b5719 fetch: trivial cleanup
Create a helper function to clear an item. The way items are cleared has
changed, and will change again soon.

No functional changes.

Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-04 11:28:58 -07:00
Junio C Hamano 27cdbdd134 Merge branch 'jk/no-sigpipe-during-network-transport'
On platforms where "git fetch" is killed with SIGPIPE (e.g. OSX),
the upload-pack that runs on the other end and hangs up after
detecting an error could cause "git fetch" to die with a signal,
which led to a flaky test.  "git fetch" now ignores SIGPIPE during
the network portion of its operation (this is not a problem as we
check the return status from our write(2)s).

* jk/no-sigpipe-during-network-transport:
  fetch: ignore SIGPIPE during network operation
  fetch: avoid calling write_or_die()
2019-03-20 15:16:06 +09:00
Jeff King 143588949c fetch: ignore SIGPIPE during network operation
The default SIGPIPE behavior can be useful for a command that generates
a lot of output: if the receiver of our output goes away, we'll be
notified asynchronously to stop generating it (typically by killing the
program).

But for a command like fetch, which is primarily concerned with
receiving data and writing it to disk, an unexpected SIGPIPE can be
awkward. We're already checking the return value of all of our write()
calls, and dying due to the signal takes away our chance to gracefully
handle the error.

On Linux, we wouldn't generally see SIGPIPE at all during fetch. If the
other side of the network connection hangs up, we'll see ECONNRESET. But
on OS X, we get a SIGPIPE, and the process is killed. This causes t5570
to racily fail, as we sometimes die by signal (instead of the expected
die() call) when the server side hangs up.

Let's ignore SIGPIPE during the network portion of the fetch, which will
cause our write() to return EPIPE, giving us consistent behavior across
platforms.
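
In code terms the idea is roughly this (a sketch using the sigchain
helpers; the exact placement in builtin/fetch.c may differ, and
run_network_fetch() is a hypothetical stand-in for the transport
calls):

    sigchain_push(SIGPIPE, SIG_IGN);        /* network portion starts */
    result = run_network_fetch(transport);  /* hypothetical placeholder */
    sigchain_pop(SIGPIPE);                  /* restore previous handling */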

This fixes the test flakiness, but note that it stops short of fixing
the larger problem. The server side hit a fatal error, sent us an "ERR"
packet, and then hung up. We notice the failure because we're trying to
write to a closed socket. But by dying immediately, we never actually
read the ERR packet and report its content to the user. This is a (racy)
problem on all platforms. So this patch lays the groundwork from which
that problem might be fixed consistently, but it doesn't actually fix
it.

Note the placement of the SIGPIPE handling. The absolute minimal change
would be to ignore SIGPIPE only when we're writing. But twiddling the
signal handler for each write call is inefficient and a maintenance
burden. On the opposite end of the spectrum, we could simply declare
that fetch does not need SIGPIPE handling, since it doesn't generate a
lot of output, and we could just ignore it at the start of cmd_fetch().

This patch takes a middle ground. It ignores SIGPIPE during the network
operation (which is admittedly most of the program, since the actual
network operations are all done under the hood by the transport code).
So it's still pretty coarse.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-03-05 15:02:18 +09:00
Jean-Noël Avila 32ceace39f Fix typos in translatable strings for v2.21.0
Signed-off-by: Jean-Noël Avila <jn.avila@free.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-02-11 12:58:06 -08:00
Junio C Hamano cba595ab1a Merge branch 'jk/loose-object-cache-oid'
Code clean-up.

* jk/loose-object-cache-oid:
  prefer "hash mismatch" to "sha1 mismatch"
  sha1-file: avoid "sha1 file" for generic use in messages
  sha1-file: prefer "loose object file" to "sha1 file" in messages
  sha1-file: drop has_sha1_file()
  convert has_sha1_file() callers to has_object_file()
  sha1-file: convert pass-through functions to object_id
  sha1-file: modernize loose header/stream functions
  sha1-file: modernize loose object file functions
  http: use struct object_id instead of bare sha1
  update comment references to sha1_object_info()
  sha1-file: fix outdated sha1 comment references
2019-02-06 22:05:27 -08:00
Junio C Hamano d6cc13602b Merge branch 'nd/fetch-compact-update'
"git fetch" output cleanup.

* nd/fetch-compact-update:
  fetch: prefer suffix substitution in compact fetch.output
2019-02-05 14:26:18 -08:00
Junio C Hamano 073312b4c7 Merge branch 'js/filter-options-should-use-plain-int'
Update the protocol message specification to allow only the limited
use of scaled quantities.  This is to ensure potential compatibility
issues will not get out of hand.

* js/filter-options-should-use-plain-int:
  filter-options: expand scaled numbers
  tree:<depth>: skip some trees even when collecting omits
  list-objects-filter: teach tree:# how to handle >0
2019-02-05 14:26:10 -08:00
Junio C Hamano 5d3635db19 Merge branch 'sb/submodule-recursive-fetch-gets-the-tip'
"git fetch --recurse-submodules" may not fetch the necessary commit
that is bound to the superproject, which is getting corrected.

* sb/submodule-recursive-fetch-gets-the-tip:
  fetch: ensure submodule objects fetched
  submodule.c: fetch in submodules git directory instead of in worktree
  submodule: migrate get_next_submodule to use repository structs
  repository: repo_submodule_init to take a submodule struct
  submodule: store OIDs in changed_submodule_names
  submodule.c: tighten scope of changed_submodule_names struct
  submodule.c: sort changed_submodule_names before searching it
  submodule.c: fix indentation
  sha1-array: provide oid_array_filter
2019-01-29 12:47:54 -08:00
Junio C Hamano 61c51acf7d Merge branch 'cc/fetch-error-message-fix'
Error message fix.

* cc/fetch-error-message-fix:
  fetch: fix extensions.partialclone name in error message
2019-01-29 12:47:53 -08:00
Nguyễn Thái Ngọc Duy dc40b24df4 fetch: prefer suffix substitution in compact fetch.output
I have a remote named "jch" and it has a branch with the same name. And
fetch.output is set to "compact". Fetching this remote looks like this

 From https://github.com/gitster/git
  + eb7fd39f6b...835363af2f jch                -> */jch  (forced update)
    6f11fd5edb..59b12ae96a  nd/config-move-to  -> jch/*
  * [new branch]            nd/diff-parseopt   -> jch/*
  * [new branch]            nd/the-index-final -> jch/*

Notice that the local side of branch jch starts with "*" instead of
ending with it like the rest. It's not exactly wrong. It just looks
weird.

This patch changes the find-and-replace code a bit to try finding prefix
first before falling back to strstr() which finds a substring from left
to right. Now we have something less OCD

 From https://github.com/gitster/git
  + eb7fd39f6b...835363af2f jch                -> jch/*  (forced update)
    6f11fd5edb..59b12ae96a  nd/config-move-to  -> jch/*
  * [new branch]            nd/diff-parseopt   -> jch/*
  * [new branch]            nd/the-index-final -> jch/*

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-27 16:20:34 -08:00
Josh Steadmon 87c2d9d310 filter-options: expand scaled numbers
When communicating with a remote server or a subprocess, use
expanded numbers rather than numbers with scaling suffix in the
object filter spec (e.g.  "limit:blob=1k" becomes
"limit:blob=1024").

Update the protocol docs to note that clients should always perform this
expansion, to allow for more compatibility between server
implementations.

Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-15 15:42:31 -08:00
Christian Couder e01378753d fetch: fix extensions.partialclone name in error message
There is "extensions.partialclone" and "core.partialCloneFilter", but
not "core.partialclone". Only "extensions.partialclone" is meant to
contain a remote name.

While at it, let's wrap the relevant code lines to keep them at a
reasonable length.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-14 10:49:09 -08:00
Jeff King 98374a07c9 convert has_sha1_file() callers to has_object_file()
The only remaining callers of has_sha1_file() actually have an object_id
already. They can use the "object" variant, rather than dereferencing
the hash themselves.

The code changes here were completely generated by the included
coccinelle patch.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-08 09:41:06 -08:00
Stefan Beller be76c21282 fetch: ensure submodule objects fetched
Currently when git-fetch is asked to recurse into submodules, it dispatches
a plain "git-fetch -C <submodule-dir>" (with some submodule-related options
such as prefix and recursing strategy, but) without any information about the
remote or the tip that should be fetched.

But this default fetch is not sufficient, as a newly fetched commit in
the superproject could point to a commit in the submodule that is not
in the default refspec. This is common in workflows like Gerrit's.
When fetching a Gerrit change under review (from refs/changes/??), the
commits in that change likely point to submodule commits that have not
been merged to a branch yet.

Fetch a submodule object by id if the object that the superproject
points to cannot be found. For now, this object is fetched from the
'origin' remote as we defer getting the default remote to a later patch.

A list of new submodule commits is already generated under certain
conditions (by check_for_new_submodule_commits()); this new feature
invokes that function in more situations.

The submodule checks used to be done only when a ref in the superproject
changed; these checks are extended to also be performed when fetching
into FETCH_HEAD for completeness, and a test for that is added, too.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-12-09 10:54:19 +09:00
Junio C Hamano ab96f28ba4 Merge branch 'jk/unused-parameter-fixes'
Various functions have been audited for "-Wunused-parameter" warnings
and bugs in them got fixed.

* jk/unused-parameter-fixes:
  midx: double-check large object write loop
  assert NOARG/NONEG behavior of parse-options callbacks
  parse-options: drop OPT_DATE()
  apply: return -1 from option callback instead of calling exit(1)
  cat-file: report an error on multiple --batch options
  tag: mark "--message" option with NONEG
  show-branch: mark --reflog option as NONEG
  format-patch: mark "--no-numbered" option with NONEG
  status: mark --find-renames option with NONEG
  cat-file: mark batch options with NONEG
  pack-objects: mark index-version option as NONEG
  ls-files: mark exclude options as NONEG
  am: handle --no-patch-format option
  apply: mark include/exclude options as NONEG
2018-11-18 18:23:53 +09:00
Junio C Hamano fd4bb3806b Merge branch 'jc/war-on-string-list'
Replace three string-list instances used as look-up tables in "git
fetch" with hashmaps.

* jc/war-on-string-list:
  fetch: replace string-list used as a look-up table with a hashmap
2018-11-13 22:37:23 +09:00
Jeff King 517fe807d6 assert NOARG/NONEG behavior of parse-options callbacks
When we define a parse-options callback, the flags we put in the option
struct must match what the callback expects. For example, a callback
which does not handle the "unset" parameter should only be used with
PARSE_OPT_NONEG. But since the callback and the option struct are not
defined next to each other, it's easy to get this wrong (as earlier
patches in this series show).

Fortunately, the compiler can help us here: compiling with
-Wunused-parameter can show us which callbacks ignore their "unset"
parameters (and likewise, ones that ignore "arg" expect to be triggered
with PARSE_OPT_NOARG).

But after we've inspected a callback and determined that all of its
callers use the right flags, what do we do next? We'd like to silence
the compiler warning, but do so in a way that will catch any wrong calls
in the future.

We can do that by actually checking those variables and asserting that
they match our expectations. Because this is such a common pattern,
we'll introduce some helper macros. The resulting messages aren't
as descriptive as we could make them, but the file/line information from
BUG() is enough to identify the problem (and anyway, the point is that
these should never be seen).
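
An annotated callback then looks roughly like this (a sketch; the
option itself is made up):

    static int option_parse_foo(const struct option *opt,
                                const char *arg, int unset)
    {
            BUG_ON_OPT_NEG(unset);  /* callers must pass PARSE_OPT_NONEG */
            BUG_ON_OPT_ARG(arg);    /* callers must pass PARSE_OPT_NOARG */
            *(int *)opt->value = 1;
            return 0;
    }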

Each of the annotated callbacks in this patch triggers
-Wunused-parameter, and was manually inspected to make sure all callers
use the correct options (so none of these BUGs should be triggerable).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-11-06 12:56:29 +09:00