Merge branch 'master' of github.com:git/git

* 'master' of github.com:git/git: (51 commits)
  Git 2.33-rc2
  object-file: use unsigned arithmetic with bit mask
  Revert 'diff-merges: let "-m" imply "-p"'
  object-store: avoid extra ';' from KHASH_INIT
  oidtree: avoid nested struct oidtree_node
  Git 2.33-rc1
  test: fix for COLUMNS and bash 5
  The eighth batch
  diff: --pickaxe-all typofix
  mingw: align symlinks-related rmdir() behavior with Linux
  t7508: avoid non POSIX BRE
  use fspathhash() everywhere
  t0001: fix broken not-quite getcwd(3) test in bed67874e2
  Documentation: render special characters correctly
  reset: clear_unpack_trees_porcelain to plug leak
  builtin/rebase: fix options.strategy memory lifecycle
  builtin/merge: free found_ref when done
  builtin/mv: free or UNLEAK multiple pointers at end of cmd_mv
  convert: release strbuf to avoid leak
  read-cache: call diff_setup_done to avoid leak
  ...
Jiang Xin 2021-08-14 07:56:22 +08:00
commit 117e2caa42
50 changed files with 1247 additions and 418 deletions


@ -1,18 +1,6 @@
Git 2.33 Release Notes
======================
Backward compatibility notes
----------------------------
* The "-m" option in "git log -m" that does not specify which format,
if any, of diff is desired did not have any visible effect; it now
implies some form of diff (by default "--patch") is produced.
You can disable the diff output with "git log -m --no-patch", but
then there probably isn't much point in passing "-m" in the first
place ;-).
Updates since Git 2.32
----------------------
@ -48,7 +36,7 @@ Performance, Internal Implementation, Development Support etc.
reduce code duplication.
* Repeated rename detections in a sequence of mergy operations have
been optimize out.
been optimized out for the 'ort' merge strategy.
* Preliminary clean-up of tests before the main reftable changes
hits the codebase.
@ -98,6 +86,11 @@ Performance, Internal Implementation, Development Support etc.
* "git read-tree" had a codepath where blobs are fetched one-by-one
from the promisor remote, which has been corrected to fetch in bulk.
* Rewrite of "git submodule" in C continues.
* "git checkout" and "git commit" learn to work without unnecessarily
expanding sparse indexes.
Fixes since v2.32
-----------------
@ -237,6 +230,14 @@ Fixes since v2.32
* A race between repacking and using pack bitmaps has been corrected.
(merge dc1daacdcc jk/check-pack-valid-before-opening-bitmap later to maint).
* The local changes stashed by "git merge --autostash" were lost when
the merge failed in certain ways, which has been corrected.
* Windows rmdir() equivalent behaves differently from POSIX ones in
that when used on a symbolic link that points at a directory, the
target directory gets removed, which has been corrected.
(merge 3e7d4888e5 tb/mingw-rmdir-symlink-to-directory later to maint).
* Other code cleanup, docfix, build fix, etc.
(merge bfe35a6165 ah/doc-describe later to maint).
(merge f302c1e4aa jc/clarify-revision-range later to maint).
@ -278,3 +279,5 @@ Fixes since v2.32
(merge ddcb189d9d tb/bitmap-type-filter-comment-fix later to maint).
(merge 878b399734 pb/submodule-recurse-doc later to maint).
(merge 734283855f jk/config-env-doc later to maint).
(merge 482e1488a9 ab/getcwd-test later to maint).
(merge f0b922473e ar/doc-markup-fix later to maint).


@ -74,10 +74,9 @@ the feature triggers the new behavior when it should, and to show the
feature does not trigger when it shouldn't. After any code change, make
sure that the entire test suite passes.
If you have an account at GitHub (and you can get one for free to work
on open source projects), you can use their Travis CI integration to
test your changes on Linux, Mac (and hopefully soon Windows). See
GitHub-Travis CI hints section for details.
Pushing to a fork of https://github.com/git/git will use their CI
integration to test your changes on Linux, Mac and Windows. See the
<<GHCI,GitHub CI>> section for details.
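For illustration, pushing a topic branch to such a fork could look like the
following (the remote name `myfork` and the branch name are placeholders, not
part of this patch):

....
git remote add myfork git@github.com:<Your GitHub handle>/git.git
git push myfork my-topic
....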
Do not forget to update the documentation to describe the updated
behavior and make sure that the resulting documentation set formats
@ -167,6 +166,85 @@ or, on an older version of Git without support for --pretty=reference:
git show -s --date=short --pretty='format:%h (%s, %ad)' <commit>
....
[[sign-off]]
=== Certify your work by adding your `Signed-off-by` trailer
To improve tracking of who did what, we ask you to certify that you
wrote the patch or have the right to pass it on under the same license
as ours, by "signing off" your patch. Without sign-off, we cannot
accept your patches.
If (and only if) you certify the below D-C-O:
[[dco]]
.Developer's Certificate of Origin 1.1
____
By making a contribution to this project, I certify that:
a. The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
b. The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
c. The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
d. I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
____
you add a "Signed-off-by" trailer to your commit, that looks like
this:
....
Signed-off-by: Random J Developer <random@developer.example.org>
....
This line can be added by Git if you run the git-commit command with
the -s option.
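For example, an illustrative invocation (not taken from this patch) would be:

....
git commit -s -m "area: describe your change"
....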
Notice that you can place your own `Signed-off-by` trailer when
forwarding somebody else's patch with the above rules for
D-C-O. Indeed you are encouraged to do so. Do not forget to
place an in-body "From: " line at the beginning to properly attribute
the change to its true author (see (2) above).
This procedure originally came from the Linux kernel project, so our
rule is quite similar to theirs, but what exactly it means to sign-off
your patch differs from project to project, so it may be different
from that of the project you are accustomed to.
[[real-name]]
Also notice that a real name is used in the `Signed-off-by` trailer. Please
don't hide your real name.
[[commit-trailers]]
If you like, you can put extra tags at the end:
. `Reported-by:` is used to credit someone who found the bug that
the patch attempts to fix.
. `Acked-by:` says that the person who is more familiar with the area
the patch attempts to modify liked the patch.
. `Reviewed-by:`, unlike the other tags, can only be offered by the
reviewers themselves when they are completely satisfied with the
patch after a detailed analysis.
. `Tested-by:` is used to indicate that the person applied the patch
and found it to have the desired effect.
You can also create your own tag or use one that's in common usage
such as "Thanks-to:", "Based-on-patch-by:", or "Mentored-by:".
[[git-tools]]
=== Generate your patch using Git tools out of your commits.
@ -302,85 +380,6 @@ Do not forget to add trailers such as `Acked-by:`, `Reviewed-by:` and
`Tested-by:` lines as necessary to credit people who helped your
patch, and "cc:" them when sending such a final version for inclusion.
[[sign-off]]
=== Certify your work by adding your `Signed-off-by` trailer
To improve tracking of who did what, we ask you to certify that you
wrote the patch or have the right to pass it on under the same license
as ours, by "signing off" your patch. Without sign-off, we cannot
accept your patches.
If (and only if) you certify the below D-C-O:
[[dco]]
.Developer's Certificate of Origin 1.1
____
By making a contribution to this project, I certify that:
a. The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
b. The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
c. The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
d. I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
____
you add a "Signed-off-by" trailer to your commit, that looks like
this:
....
Signed-off-by: Random J Developer <random@developer.example.org>
....
This line can be added by Git if you run the git-commit command with
the -s option.
Notice that you can place your own `Signed-off-by` trailer when
forwarding somebody else's patch with the above rules for
D-C-O. Indeed you are encouraged to do so. Do not forget to
place an in-body "From: " line at the beginning to properly attribute
the change to its true author (see (2) above).
This procedure originally came from the Linux kernel project, so our
rule is quite similar to theirs, but what exactly it means to sign-off
your patch differs from project to project, so it may be different
from that of the project you are accustomed to.
[[real-name]]
Also notice that a real name is used in the `Signed-off-by` trailer. Please
don't hide your real name.
[[commit-trailers]]
If you like, you can put extra tags at the end:
. `Reported-by:` is used to credit someone who found the bug that
the patch attempts to fix.
. `Acked-by:` says that the person who is more familiar with the area
the patch attempts to modify liked the patch.
. `Reviewed-by:`, unlike the other tags, can only be offered by the
reviewers themselves when they are completely satisfied with the
patch after a detailed analysis.
. `Tested-by:` is used to indicate that the person applied the patch
and found it to have the desired effect.
You can also create your own tag or use one that's in common usage
such as "Thanks-to:", "Based-on-patch-by:", or "Mentored-by:".
== Subsystems with dedicated maintainers
Some parts of the system have dedicated maintainers with their own
@ -449,13 +448,12 @@ their trees themselves.
entitled "What's cooking in git.git" and "What's in git.git" giving
the status of various proposed changes.
[[travis]]
== GitHub-Travis CI hints
== GitHub CI[[GHCI]]
With an account at GitHub (you can get one for free to work on open
source projects), you can use Travis CI to test your changes on Linux,
Mac (and hopefully soon Windows). You can find a successful example
test build here: https://travis-ci.org/git/git/builds/120473209
With an account at GitHub, you can use GitHub CI to test your changes
on Linux, Mac and Windows. See
https://github.com/git/git/actions/workflows/main.yml for examples of
recent CI runs.
Follow these steps for the initial setup:
@ -463,31 +461,18 @@ Follow these steps for the initial setup:
You can find detailed instructions how to fork here:
https://help.github.com/articles/fork-a-repo/
. Open the Travis CI website: https://travis-ci.org
. Press the "Sign in with GitHub" button.
. Grant Travis CI permissions to access your GitHub account.
You can find more information about the required permissions here:
https://docs.travis-ci.com/user/github-oauth-scopes
. Open your Travis CI profile page: https://travis-ci.org/profile
. Enable Travis CI builds for your Git fork.
After the initial setup, Travis CI will run whenever you push new changes
After the initial setup, CI will run whenever you push new changes
to your fork of Git on GitHub. You can monitor the test state of all your
branches here: https://travis-ci.org/__<Your GitHub handle>__/git/branches
branches here: https://github.com/<Your GitHub handle>/git/actions/workflows/main.yml
If a branch did not pass all test cases then it is marked with a red
cross. In that case you can click on the failing Travis CI job and
scroll all the way down in the log. Find the line "<-- Click here to see
detailed test output!" and click on the triangle next to the log line
number to expand the detailed test output. Here is such a failing
example: https://travis-ci.org/git/git/jobs/122676187
cross. In that case you can click on the failing job and navigate to
"ci/run-build-and-tests.sh" and/or "ci/print-test-failures.sh". You
can also download "Artifacts" which are tarred (or zipped) archives
with test data relevant for debugging.
Fix the problem and push your fix to your Git fork. This will trigger
a new Travis CI build to ensure all tests pass.
Then fix the problem and push your fix to your GitHub fork. This will
trigger a new CI build to ensure all tests pass.
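For illustration (reusing the placeholder remote `myfork` from the earlier
sketch, not part of this patch), pushing the updated branch re-runs the workflow:

....
git push myfork my-topic
# or, after amending or rebasing the branch:
git push --force-with-lease myfork my-topic
....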
[[mua]]
== MUA specific hints


@ -49,9 +49,10 @@ ifdef::git-log[]
--diff-merges=m:::
-m:::
This option makes diff output for merge commits to be shown in
the default format. The default format could be changed using
the default format. `-m` will produce the output only if `-p`
is given as well. The default format could be changed using
`log.diffMerges` configuration parameter, which default value
is `separate`. `-m` implies `-p`.
is `separate`.
+
--diff-merges=first-parent:::
--diff-merges=1:::
@ -61,8 +62,7 @@ ifdef::git-log[]
--diff-merges=separate:::
This makes merge commits show the full diff with respect to
each of the parents. Separate log entry and diff is generated
for each parent. This is the format that `-m` produced
historically.
for each parent.
+
--diff-merges=combined:::
--diff-merges=c:::
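As an illustrative aside (these commands are not part of the patch), the
reverted behaviour means `-m` only shows merge diffs when a diff-producing
option such as `-p` is also given:

....
git log -m -p                            # per-parent ("separate") diffs for merges
git log -m                               # no diff output for merge commits
git config log.diffMerges first-parent   # change the default format `-m` uses
....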


@ -133,7 +133,7 @@ remember to run that, set `fetch.prune` globally, or
linkgit:git-config[1].
Here's where things get tricky and more specific. The pruning feature
doesn't actually care about branches, instead it'll prune local <->
doesn't actually care about branches, instead it'll prune local <-->
remote-references as a function of the refspec of the remote (see
`<refspec>` and <<CRTB,CONFIGURED REMOTE-TRACKING BRANCHES>> above).
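For illustration (not part of the patch), pruning can be enabled per remote,
globally, or for a single fetch:

....
git config remote.origin.prune true   # prune every time you fetch from origin
git config fetch.prune true           # enable pruning for all remotes
git fetch --prune origin              # prune just this once
....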


@ -322,7 +322,7 @@ initiating this "pull". If Bob's work conflicts with what Alice did since
their histories forked, Alice will use her working tree and the index to
resolve conflicts, and existing local changes will interfere with the
conflict resolution process (Git will still perform the fetch but will
refuse to merge --- Alice will have to get rid of her local changes in
refuse to merge -- Alice will have to get rid of her local changes in
some way and pull again when this happens).
Alice can peek at what Bob did without merging first, using the "fetch"


@ -154,7 +154,8 @@ endif::git-pull[]
--autostash::
--no-autostash::
Automatically create a temporary stash entry before the operation
begins, and apply it after the operation ends. This means
begins, record it in the special ref `MERGE_AUTOSTASH`
and apply it after the operation ends. This means
that you can run the operation on a dirty worktree. However, use
with care: the final stash application after a successful
merge might result in non-trivial conflicts.
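As an illustrative aside (the branch name is a placeholder, not part of the
patch), this is what the option looks like in use:

....
git merge --autostash topic   # dirty local changes are stashed into MERGE_AUTOSTASH,
                              # the merge runs, and the stash is re-applied afterwards
                              # (also when the merge is aborted)
....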


@ -1,7 +1,7 @@
#!/bin/sh
GVF=GIT-VERSION-FILE
DEF_VER=v2.33.0-rc0
DEF_VER=v2.33.0-rc2
LF='
'


@ -715,6 +715,7 @@ TEST_BUILTINS_OBJS += test-example-decorate.o
TEST_BUILTINS_OBJS += test-fast-rebase.o
TEST_BUILTINS_OBJS += test-genrandom.o
TEST_BUILTINS_OBJS += test-genzeros.o
TEST_BUILTINS_OBJS += test-getcwd.o
TEST_BUILTINS_OBJS += test-hash-speed.o
TEST_BUILTINS_OBJS += test-hash.o
TEST_BUILTINS_OBJS += test-hashmap.o


@ -378,9 +378,6 @@ static int checkout_worktree(const struct checkout_opts *opts,
if (pc_workers > 1)
init_parallel_checkout();
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
for (pos = 0; pos < active_nr; pos++) {
struct cache_entry *ce = active_cache[pos];
if (ce->ce_flags & CE_MATCHED) {
@ -530,8 +527,6 @@ static int checkout_paths(const struct checkout_opts *opts,
* Make sure all pathspecs participated in locating the paths
* to be checked out.
*/
/* TODO: audit for interaction with sparse-index. */
ensure_full_index(&the_index);
for (pos = 0; pos < active_nr; pos++)
if (opts->overlay_mode)
mark_ce_for_checkout_overlay(active_cache[pos],
@ -1593,6 +1588,9 @@ static int checkout_main(int argc, const char **argv, const char *prefix,
git_config(git_checkout_config, opts);
prepare_repo_settings(the_repository);
the_repository->settings.command_requires_full_index = 0;
opts->track = BRANCH_TRACK_UNSPECIFIED;
if (!opts->accept_pathspec && !opts->accept_ref)


@ -1689,6 +1689,9 @@ int cmd_commit(int argc, const char **argv, const char *prefix)
if (argc == 2 && !strcmp(argv[1], "-h"))
usage_with_options(builtin_commit_usage, builtin_commit_options);
prepare_repo_settings(the_repository);
the_repository->settings.command_requires_full_index = 0;
status_init_config(&s, git_commit_config);
s.commit_template = 1;
status_format = STATUS_FORMAT_NONE; /* Ignore status.short */


@ -10,18 +10,16 @@ static const char * const for_each_repo_usage[] = {
NULL
};
static int run_command_on_repo(const char *path,
void *cbdata)
static int run_command_on_repo(const char *path, int argc, const char ** argv)
{
int i;
struct child_process child = CHILD_PROCESS_INIT;
struct strvec *args = (struct strvec *)cbdata;
child.git_cmd = 1;
strvec_pushl(&child.args, "-C", path, NULL);
for (i = 0; i < args->nr; i++)
strvec_push(&child.args, args->v[i]);
for (i = 0; i < argc; i++)
strvec_push(&child.args, argv[i]);
return run_command(&child);
}
@ -31,7 +29,6 @@ int cmd_for_each_repo(int argc, const char **argv, const char *prefix)
static const char *config_key = NULL;
int i, result = 0;
const struct string_list *values;
struct strvec args = STRVEC_INIT;
const struct option options[] = {
OPT_STRING(0, "config", &config_key, N_("config"),
@ -45,9 +42,6 @@ int cmd_for_each_repo(int argc, const char **argv, const char *prefix)
if (!config_key)
die(_("missing --config=<config>"));
for (i = 0; i < argc; i++)
strvec_push(&args, argv[i]);
values = repo_config_get_value_multi(the_repository,
config_key);
@ -59,7 +53,7 @@ int cmd_for_each_repo(int argc, const char **argv, const char *prefix)
return 0;
for (i = 0; !result && i < values->nr; i++)
result = run_command_on_repo(values->items[i].string, &args);
result = run_command_on_repo(values->items[i].string, argc, argv);
return result;
}


@ -503,7 +503,7 @@ static void merge_name(const char *remote, struct strbuf *msg)
struct strbuf bname = STRBUF_INIT;
struct merge_remote_desc *desc;
const char *ptr;
char *found_ref;
char *found_ref = NULL;
int len, early;
strbuf_branchname(&bname, remote, 0);
@ -586,6 +586,7 @@ static void merge_name(const char *remote, struct strbuf *msg)
strbuf_addf(msg, "%s\t\tcommit '%s'\n",
oid_to_hex(&remote_head->object.oid), remote);
cleanup:
free(found_ref);
strbuf_release(&buf);
strbuf_release(&bname);
}
@ -1560,6 +1561,7 @@ int cmd_merge(int argc, const char **argv, const char *prefix)
&head_commit->object.oid,
&commit->object.oid,
overwrite_ignore)) {
apply_autostash(git_path_merge_autostash(the_repository));
ret = 1;
goto done;
}
@ -1708,6 +1710,7 @@ int cmd_merge(int argc, const char **argv, const char *prefix)
else
fprintf(stderr, _("Merge with strategy %s failed.\n"),
use_strategies[0]->name);
apply_autostash(git_path_merge_autostash(the_repository));
ret = 2;
goto done;
} else if (best_strategy == wt_strategy)
@ -1715,7 +1718,7 @@ int cmd_merge(int argc, const char **argv, const char *prefix)
else {
printf(_("Rewinding the tree to pristine...\n"));
restore_state(&head_commit->object.oid, &stash);
printf(_("Using the %s to prepare resolving by hand.\n"),
printf(_("Using the %s strategy to prepare resolving by hand.\n"),
best_strategy);
try_merge_strategy(best_strategy, common, remoteheads,
head_commit);


@ -303,5 +303,10 @@ int cmd_mv(int argc, const char **argv, const char *prefix)
COMMIT_LOCK | SKIP_IF_UNCHANGED))
die(_("Unable to write new index file"));
string_list_clear(&src_for_dst, 0);
UNLEAK(source);
UNLEAK(dest_path);
free(submodule_gitfile);
free(modes);
return 0;
}


@ -139,7 +139,7 @@ static struct replay_opts get_replay_opts(const struct rebase_options *opts)
replay.ignore_date = opts->ignore_date;
replay.gpg_sign = xstrdup_or_null(opts->gpg_sign_opt);
if (opts->strategy)
replay.strategy = opts->strategy;
replay.strategy = xstrdup_or_null(opts->strategy);
else if (!replay.strategy && replay.default_strategy) {
replay.strategy = replay.default_strategy;
replay.default_strategy = NULL;
@ -2109,6 +2109,7 @@ int cmd_rebase(int argc, const char **argv, const char *prefix)
free(options.head_name);
free(options.gpg_sign_opt);
free(options.cmd);
free(options.strategy);
strbuf_release(&options.git_format_patch_opt);
free(squash_onto_name);
return ret;


@ -380,10 +380,7 @@ static void insert_recursive_pattern(struct pattern_list *pl, struct strbuf *pat
struct pattern_entry *e = xmalloc(sizeof(*e));
e->patternlen = path->len;
e->pattern = strbuf_detach(path, NULL);
hashmap_entry_init(&e->ent,
ignore_case ?
strihash(e->pattern) :
strhash(e->pattern));
hashmap_entry_init(&e->ent, fspathhash(e->pattern));
hashmap_add(&pl->recursive_hashmap, &e->ent);
@ -399,10 +396,7 @@ static void insert_recursive_pattern(struct pattern_list *pl, struct strbuf *pat
e = xmalloc(sizeof(struct pattern_entry));
e->patternlen = newlen;
e->pattern = xstrndup(oldpattern, newlen);
hashmap_entry_init(&e->ent,
ignore_case ?
strihash(e->pattern) :
strhash(e->pattern));
hashmap_entry_init(&e->ent, fspathhash(e->pattern));
if (!hashmap_get_entry(&pl->parent_hashmap, e, ent, NULL))
hashmap_add(&pl->parent_hashmap, &e->ent);


@ -187,11 +187,13 @@ static char *relative_url(const char *remote_url,
out = xstrdup(sb.buf + 2);
else
out = xstrdup(sb.buf);
strbuf_reset(&sb);
if (!up_path || !is_relative)
if (!up_path || !is_relative) {
strbuf_release(&sb);
return out;
}
strbuf_reset(&sb);
strbuf_addf(&sb, "%s%s", up_path, out);
free(out);
return strbuf_detach(&sb, NULL);
@ -1657,45 +1659,20 @@ static int module_deinit(int argc, const char **argv, const char *prefix)
return 0;
}
static int clone_submodule(const char *path, const char *gitdir, const char *url,
const char *depth, struct string_list *reference, int dissociate,
int quiet, int progress, int single_branch)
{
struct child_process cp = CHILD_PROCESS_INIT;
strvec_push(&cp.args, "clone");
strvec_push(&cp.args, "--no-checkout");
if (quiet)
strvec_push(&cp.args, "--quiet");
if (progress)
strvec_push(&cp.args, "--progress");
if (depth && *depth)
strvec_pushl(&cp.args, "--depth", depth, NULL);
if (reference->nr) {
struct string_list_item *item;
for_each_string_list_item(item, reference)
strvec_pushl(&cp.args, "--reference",
item->string, NULL);
}
if (dissociate)
strvec_push(&cp.args, "--dissociate");
if (gitdir && *gitdir)
strvec_pushl(&cp.args, "--separate-git-dir", gitdir, NULL);
if (single_branch >= 0)
strvec_push(&cp.args, single_branch ?
"--single-branch" :
"--no-single-branch");
strvec_push(&cp.args, "--");
strvec_push(&cp.args, url);
strvec_push(&cp.args, path);
cp.git_cmd = 1;
prepare_submodule_repo_env(&cp.env_array);
cp.no_stdin = 1;
return run_command(&cp);
}
struct module_clone_data {
const char *prefix;
const char *path;
const char *name;
const char *url;
const char *depth;
struct string_list reference;
unsigned int quiet: 1;
unsigned int progress: 1;
unsigned int dissociate: 1;
unsigned int require_init: 1;
int single_branch;
};
#define MODULE_CLONE_DATA_INIT { .reference = STRING_LIST_INIT_NODUP, .single_branch = -1 }
struct submodule_alternate_setup {
const char *submodule_name;
@ -1801,37 +1778,128 @@ static void prepare_possible_alternates(const char *sm_name,
free(error_strategy);
}
static int clone_submodule(struct module_clone_data *clone_data)
{
char *p, *sm_gitdir;
char *sm_alternate = NULL, *error_strategy = NULL;
struct strbuf sb = STRBUF_INIT;
struct child_process cp = CHILD_PROCESS_INIT;
strbuf_addf(&sb, "%s/modules/%s", get_git_dir(), clone_data->name);
sm_gitdir = absolute_pathdup(sb.buf);
strbuf_reset(&sb);
if (!is_absolute_path(clone_data->path)) {
strbuf_addf(&sb, "%s/%s", get_git_work_tree(), clone_data->path);
clone_data->path = strbuf_detach(&sb, NULL);
} else {
clone_data->path = xstrdup(clone_data->path);
}
if (validate_submodule_git_dir(sm_gitdir, clone_data->name) < 0)
die(_("refusing to create/use '%s' in another submodule's "
"git dir"), sm_gitdir);
if (!file_exists(sm_gitdir)) {
if (safe_create_leading_directories_const(sm_gitdir) < 0)
die(_("could not create directory '%s'"), sm_gitdir);
prepare_possible_alternates(clone_data->name, &clone_data->reference);
strvec_push(&cp.args, "clone");
strvec_push(&cp.args, "--no-checkout");
if (clone_data->quiet)
strvec_push(&cp.args, "--quiet");
if (clone_data->progress)
strvec_push(&cp.args, "--progress");
if (clone_data->depth && *(clone_data->depth))
strvec_pushl(&cp.args, "--depth", clone_data->depth, NULL);
if (clone_data->reference.nr) {
struct string_list_item *item;
for_each_string_list_item(item, &clone_data->reference)
strvec_pushl(&cp.args, "--reference",
item->string, NULL);
}
if (clone_data->dissociate)
strvec_push(&cp.args, "--dissociate");
if (sm_gitdir && *sm_gitdir)
strvec_pushl(&cp.args, "--separate-git-dir", sm_gitdir, NULL);
if (clone_data->single_branch >= 0)
strvec_push(&cp.args, clone_data->single_branch ?
"--single-branch" :
"--no-single-branch");
strvec_push(&cp.args, "--");
strvec_push(&cp.args, clone_data->url);
strvec_push(&cp.args, clone_data->path);
cp.git_cmd = 1;
prepare_submodule_repo_env(&cp.env_array);
cp.no_stdin = 1;
if(run_command(&cp))
die(_("clone of '%s' into submodule path '%s' failed"),
clone_data->url, clone_data->path);
} else {
if (clone_data->require_init && !access(clone_data->path, X_OK) &&
!is_empty_dir(clone_data->path))
die(_("directory not empty: '%s'"), clone_data->path);
if (safe_create_leading_directories_const(clone_data->path) < 0)
die(_("could not create directory '%s'"), clone_data->path);
strbuf_addf(&sb, "%s/index", sm_gitdir);
unlink_or_warn(sb.buf);
strbuf_reset(&sb);
}
connect_work_tree_and_git_dir(clone_data->path, sm_gitdir, 0);
p = git_pathdup_submodule(clone_data->path, "config");
if (!p)
die(_("could not get submodule directory for '%s'"), clone_data->path);
/* setup alternateLocation and alternateErrorStrategy in the cloned submodule if needed */
git_config_get_string("submodule.alternateLocation", &sm_alternate);
if (sm_alternate)
git_config_set_in_file(p, "submodule.alternateLocation",
sm_alternate);
git_config_get_string("submodule.alternateErrorStrategy", &error_strategy);
if (error_strategy)
git_config_set_in_file(p, "submodule.alternateErrorStrategy",
error_strategy);
free(sm_alternate);
free(error_strategy);
strbuf_release(&sb);
free(sm_gitdir);
free(p);
return 0;
}
static int module_clone(int argc, const char **argv, const char *prefix)
{
const char *name = NULL, *url = NULL, *depth = NULL;
int quiet = 0;
int progress = 0;
char *p, *path = NULL, *sm_gitdir;
struct strbuf sb = STRBUF_INIT;
struct string_list reference = STRING_LIST_INIT_NODUP;
int dissociate = 0, require_init = 0;
char *sm_alternate = NULL, *error_strategy = NULL;
int single_branch = -1;
int dissociate = 0, quiet = 0, progress = 0, require_init = 0;
struct module_clone_data clone_data = MODULE_CLONE_DATA_INIT;
struct option module_clone_options[] = {
OPT_STRING(0, "prefix", &prefix,
OPT_STRING(0, "prefix", &clone_data.prefix,
N_("path"),
N_("alternative anchor for relative paths")),
OPT_STRING(0, "path", &path,
OPT_STRING(0, "path", &clone_data.path,
N_("path"),
N_("where the new submodule will be cloned to")),
OPT_STRING(0, "name", &name,
OPT_STRING(0, "name", &clone_data.name,
N_("string"),
N_("name of the new submodule")),
OPT_STRING(0, "url", &url,
OPT_STRING(0, "url", &clone_data.url,
N_("string"),
N_("url where to clone the submodule from")),
OPT_STRING_LIST(0, "reference", &reference,
OPT_STRING_LIST(0, "reference", &clone_data.reference,
N_("repo"),
N_("reference repository")),
OPT_BOOL(0, "dissociate", &dissociate,
N_("use --reference only while cloning")),
OPT_STRING(0, "depth", &depth,
OPT_STRING(0, "depth", &clone_data.depth,
N_("string"),
N_("depth for shallow clones")),
OPT__QUIET(&quiet, "Suppress output for cloning a submodule"),
@ -1839,7 +1907,7 @@ static int module_clone(int argc, const char **argv, const char *prefix)
N_("force cloning progress")),
OPT_BOOL(0, "require-init", &require_init,
N_("disallow cloning into non-empty directory")),
OPT_BOOL(0, "single-branch", &single_branch,
OPT_BOOL(0, "single-branch", &clone_data.single_branch,
N_("clone only one branch, HEAD or --branch")),
OPT_END()
};
@ -1855,67 +1923,16 @@ static int module_clone(int argc, const char **argv, const char *prefix)
argc = parse_options(argc, argv, prefix, module_clone_options,
git_submodule_helper_usage, 0);
if (argc || !url || !path || !*path)
clone_data.dissociate = !!dissociate;
clone_data.quiet = !!quiet;
clone_data.progress = !!progress;
clone_data.require_init = !!require_init;
if (argc || !clone_data.url || !clone_data.path || !*(clone_data.path))
usage_with_options(git_submodule_helper_usage,
module_clone_options);
strbuf_addf(&sb, "%s/modules/%s", get_git_dir(), name);
sm_gitdir = absolute_pathdup(sb.buf);
strbuf_reset(&sb);
if (!is_absolute_path(path)) {
strbuf_addf(&sb, "%s/%s", get_git_work_tree(), path);
path = strbuf_detach(&sb, NULL);
} else
path = xstrdup(path);
if (validate_submodule_git_dir(sm_gitdir, name) < 0)
die(_("refusing to create/use '%s' in another submodule's "
"git dir"), sm_gitdir);
if (!file_exists(sm_gitdir)) {
if (safe_create_leading_directories_const(sm_gitdir) < 0)
die(_("could not create directory '%s'"), sm_gitdir);
prepare_possible_alternates(name, &reference);
if (clone_submodule(path, sm_gitdir, url, depth, &reference, dissociate,
quiet, progress, single_branch))
die(_("clone of '%s' into submodule path '%s' failed"),
url, path);
} else {
if (require_init && !access(path, X_OK) && !is_empty_dir(path))
die(_("directory not empty: '%s'"), path);
if (safe_create_leading_directories_const(path) < 0)
die(_("could not create directory '%s'"), path);
strbuf_addf(&sb, "%s/index", sm_gitdir);
unlink_or_warn(sb.buf);
strbuf_reset(&sb);
}
connect_work_tree_and_git_dir(path, sm_gitdir, 0);
p = git_pathdup_submodule(path, "config");
if (!p)
die(_("could not get submodule directory for '%s'"), path);
/* setup alternateLocation and alternateErrorStrategy in the cloned submodule if needed */
git_config_get_string("submodule.alternateLocation", &sm_alternate);
if (sm_alternate)
git_config_set_in_file(p, "submodule.alternateLocation",
sm_alternate);
git_config_get_string("submodule.alternateErrorStrategy", &error_strategy);
if (error_strategy)
git_config_set_in_file(p, "submodule.alternateErrorStrategy",
error_strategy);
free(sm_alternate);
free(error_strategy);
strbuf_release(&sb);
free(sm_gitdir);
free(path);
free(p);
clone_submodule(&clone_data);
return 0;
}
@ -2744,6 +2761,181 @@ static int module_set_branch(int argc, const char **argv, const char *prefix)
return !!ret;
}
struct add_data {
const char *prefix;
const char *branch;
const char *reference_path;
const char *sm_path;
const char *sm_name;
const char *repo;
const char *realrepo;
int depth;
unsigned int force: 1;
unsigned int quiet: 1;
unsigned int progress: 1;
unsigned int dissociate: 1;
};
#define ADD_DATA_INIT { .depth = -1 }
static void show_fetch_remotes(FILE *output, const char *git_dir_path)
{
struct child_process cp_remote = CHILD_PROCESS_INIT;
struct strbuf sb_remote_out = STRBUF_INIT;
cp_remote.git_cmd = 1;
strvec_pushf(&cp_remote.env_array,
"GIT_DIR=%s", git_dir_path);
strvec_push(&cp_remote.env_array, "GIT_WORK_TREE=.");
strvec_pushl(&cp_remote.args, "remote", "-v", NULL);
if (!capture_command(&cp_remote, &sb_remote_out, 0)) {
char *next_line;
char *line = sb_remote_out.buf;
while ((next_line = strchr(line, '\n')) != NULL) {
size_t len = next_line - line;
if (strip_suffix_mem(line, &len, " (fetch)"))
fprintf(output, " %.*s\n", (int)len, line);
line = next_line + 1;
}
}
strbuf_release(&sb_remote_out);
}
static int add_submodule(const struct add_data *add_data)
{
char *submod_gitdir_path;
struct module_clone_data clone_data = MODULE_CLONE_DATA_INIT;
/* perhaps the path already exists and is already a git repo, else clone it */
if (is_directory(add_data->sm_path)) {
struct strbuf sm_path = STRBUF_INIT;
strbuf_addstr(&sm_path, add_data->sm_path);
submod_gitdir_path = xstrfmt("%s/.git", add_data->sm_path);
if (is_nonbare_repository_dir(&sm_path))
printf(_("Adding existing repo at '%s' to the index\n"),
add_data->sm_path);
else
die(_("'%s' already exists and is not a valid git repo"),
add_data->sm_path);
strbuf_release(&sm_path);
free(submod_gitdir_path);
} else {
struct child_process cp = CHILD_PROCESS_INIT;
submod_gitdir_path = xstrfmt(".git/modules/%s", add_data->sm_name);
if (is_directory(submod_gitdir_path)) {
if (!add_data->force) {
fprintf(stderr, _("A git directory for '%s' is found "
"locally with remote(s):"),
add_data->sm_name);
show_fetch_remotes(stderr, submod_gitdir_path);
free(submod_gitdir_path);
die(_("If you want to reuse this local git "
"directory instead of cloning again from\n"
" %s\n"
"use the '--force' option. If the local git "
"directory is not the correct repo\n"
"or if you are unsure what this means, choose "
"another name with the '--name' option.\n"),
add_data->realrepo);
} else {
printf(_("Reactivating local git directory for "
"submodule '%s'\n"), add_data->sm_name);
}
}
free(submod_gitdir_path);
clone_data.prefix = add_data->prefix;
clone_data.path = add_data->sm_path;
clone_data.name = add_data->sm_name;
clone_data.url = add_data->realrepo;
clone_data.quiet = add_data->quiet;
clone_data.progress = add_data->progress;
if (add_data->reference_path)
string_list_append(&clone_data.reference,
xstrdup(add_data->reference_path));
clone_data.dissociate = add_data->dissociate;
if (add_data->depth >= 0)
clone_data.depth = xstrfmt("%d", add_data->depth);
if (clone_submodule(&clone_data))
return -1;
prepare_submodule_repo_env(&cp.env_array);
cp.git_cmd = 1;
cp.dir = add_data->sm_path;
strvec_pushl(&cp.args, "checkout", "-f", "-q", NULL);
if (add_data->branch) {
strvec_pushl(&cp.args, "-B", add_data->branch, NULL);
strvec_pushf(&cp.args, "origin/%s", add_data->branch);
}
if (run_command(&cp))
die(_("unable to checkout submodule '%s'"), add_data->sm_path);
}
return 0;
}
static int add_clone(int argc, const char **argv, const char *prefix)
{
int force = 0, quiet = 0, dissociate = 0, progress = 0;
struct add_data add_data = ADD_DATA_INIT;
struct option options[] = {
OPT_STRING('b', "branch", &add_data.branch,
N_("branch"),
N_("branch of repository to checkout on cloning")),
OPT_STRING(0, "prefix", &prefix,
N_("path"),
N_("alternative anchor for relative paths")),
OPT_STRING(0, "path", &add_data.sm_path,
N_("path"),
N_("where the new submodule will be cloned to")),
OPT_STRING(0, "name", &add_data.sm_name,
N_("string"),
N_("name of the new submodule")),
OPT_STRING(0, "url", &add_data.realrepo,
N_("string"),
N_("url where to clone the submodule from")),
OPT_STRING(0, "reference", &add_data.reference_path,
N_("repo"),
N_("reference repository")),
OPT_BOOL(0, "dissociate", &dissociate,
N_("use --reference only while cloning")),
OPT_INTEGER(0, "depth", &add_data.depth,
N_("depth for shallow clones")),
OPT_BOOL(0, "progress", &progress,
N_("force cloning progress")),
OPT__FORCE(&force, N_("allow adding an otherwise ignored submodule path"),
PARSE_OPT_NOCOMPLETE),
OPT__QUIET(&quiet, "suppress output for cloning a submodule"),
OPT_END()
};
const char *const usage[] = {
N_("git submodule--helper add-clone [<options>...] "
"--url <url> --path <path> --name <name>"),
NULL
};
argc = parse_options(argc, argv, prefix, options, usage, 0);
if (argc != 0)
usage_with_options(usage, options);
add_data.prefix = prefix;
add_data.progress = !!progress;
add_data.dissociate = !!dissociate;
add_data.force = !!force;
add_data.quiet = !!quiet;
if (add_submodule(&add_data))
return 1;
return 0;
}
#define SUPPORT_SUPER_PREFIX (1<<0)
struct cmd_struct {
@ -2756,6 +2948,7 @@ static struct cmd_struct commands[] = {
{"list", module_list, 0},
{"name", module_name, 0},
{"clone", module_clone, 0},
{"add-clone", add_clone, 0},
{"update-module-mode", module_update_module_mode, 0},
{"update-clone", update_clone, 0},
{"ensure-core-worktree", ensure_core_worktree, 0},


@ -465,8 +465,6 @@ int cache_tree_update(struct index_state *istate, int flags)
if (i)
return i;
ensure_full_index(istate);
if (!istate->cache_tree)
istate->cache_tree = cache_tree();


@ -341,6 +341,27 @@ int mingw_rmdir(const char *pathname)
{
int ret, tries = 0;
wchar_t wpathname[MAX_PATH];
struct stat st;
/*
* Contrary to Linux' `rmdir()`, Windows' _wrmdir() and _rmdir()
* (and `RemoveDirectoryW()`) will attempt to remove the target of a
* symbolic link (if it points to a directory).
*
* This behavior breaks the assumption of e.g. `remove_path()` which
* upon successful deletion of a file will attempt to remove its parent
* directories recursively until failure (which usually happens when
* the directory is not empty).
*
* Therefore, before calling `_wrmdir()`, we first check if the path is
* a symbolic link. If it is, we exit and return the same error as
* Linux' `rmdir()` would, i.e. `ENOTDIR`.
*/
if (!mingw_lstat(pathname, &st) && S_ISLNK(st.st_mode)) {
errno = ENOTDIR;
return -1;
}
if (xutftowcs_path(wpathname, pathname) < 0)
return -1;
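As a shell-level illustration of the behavior the new check aligns with (not
part of the patch; the path names are placeholders):

....
ln -s some-directory link-to-dir
rmdir link-to-dir    # Linux: fails with ENOTDIR; Windows used to remove some-directory
rm link-to-dir       # removes only the symbolic link itself
....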


@ -916,6 +916,7 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
else
strbuf_swap(dst, &nbuf);
strbuf_release(&nbuf);
strbuf_release(&filter_status);
return !err;
}
@ -966,6 +967,7 @@ int async_query_available_blobs(const char *cmd, struct string_list *available_p
if (err)
handle_filter_error(&filter_status, entry, 0);
strbuf_release(&filter_status);
return !err;
}


@ -107,7 +107,6 @@ int diff_merges_parse_opts(struct rev_info *revs, const char **argv)
if (!strcmp(arg, "-m")) {
set_to_default(revs);
revs->merges_imply_patch = 1;
} else if (!strcmp(arg, "-c")) {
set_combined(revs);
revs->merges_imply_patch = 1;

diff.c

@ -4640,7 +4640,7 @@ void diff_setup_done(struct diff_options *options)
die(_("-G and --pickaxe-regex are mutually exclusive, use --pickaxe-regex with -S"));
if (HAS_MULTI_BITS(options->pickaxe_opts & DIFF_PICKAXE_KINDS_ALL_OBJFIND_MASK))
die(_("---pickaxe-all and --find-object are mutually exclusive, use --pickaxe-all with -G and -S"));
die(_("--pickaxe-all and --find-object are mutually exclusive, use --pickaxe-all with -G and -S"));
/*
* Most of the time we can say "there are changes"


@ -448,9 +448,9 @@ static void update_dir_rename_counts(struct dir_rename_info *info,
const char *oldname,
const char *newname)
{
char *old_dir = xstrdup(oldname);
char *new_dir = xstrdup(newname);
char new_dir_first_char = new_dir[0];
char *old_dir;
char *new_dir;
const char new_dir_first_char = newname[0];
int first_time_in_loop = 1;
if (!info->setup)
@ -475,6 +475,10 @@ static void update_dir_rename_counts(struct dir_rename_info *info,
*/
return;
old_dir = xstrdup(oldname);
new_dir = xstrdup(newname);
while (1) {
int drd_flag = NOT_RELEVANT;

dir.c

@ -782,9 +782,7 @@ static void add_pattern_to_hashsets(struct pattern_list *pl, struct path_pattern
translated->pattern = truncated;
translated->patternlen = given->patternlen - 2;
hashmap_entry_init(&translated->ent,
ignore_case ?
strihash(translated->pattern) :
strhash(translated->pattern));
fspathhash(translated->pattern));
if (!hashmap_get_entry(&pl->recursive_hashmap,
translated, ent, NULL)) {
@ -813,9 +811,7 @@ static void add_pattern_to_hashsets(struct pattern_list *pl, struct path_pattern
translated->pattern = dup_and_filter_pattern(given->pattern);
translated->patternlen = given->patternlen;
hashmap_entry_init(&translated->ent,
ignore_case ?
strihash(translated->pattern) :
strhash(translated->pattern));
fspathhash(translated->pattern));
hashmap_add(&pl->recursive_hashmap, &translated->ent);
@ -845,10 +841,7 @@ static int hashmap_contains_path(struct hashmap *map,
/* Check straight mapping */
p.pattern = pattern->buf;
p.patternlen = pattern->len;
hashmap_entry_init(&p.ent,
ignore_case ?
strihash(p.pattern) :
strhash(p.pattern));
hashmap_entry_init(&p.ent, fspathhash(p.pattern));
return !!hashmap_get_entry(map, &p, ent, NULL);
}


@ -253,21 +253,20 @@ static int git_work_tree_initialized;
*/
void set_git_work_tree(const char *new_work_tree)
{
struct strbuf realpath = STRBUF_INIT;
if (git_work_tree_initialized) {
struct strbuf realpath = STRBUF_INIT;
strbuf_realpath(&realpath, new_work_tree, 1);
new_work_tree = realpath.buf;
if (strcmp(new_work_tree, the_repository->worktree))
die("internal error: work tree has already been set\n"
"Current worktree: %s\nNew worktree: %s",
the_repository->worktree, new_work_tree);
strbuf_release(&realpath);
return;
}
git_work_tree_initialized = 1;
repo_set_worktree(the_repository, new_work_tree);
strbuf_release(&realpath);
}
const char *get_git_work_tree(void)


@ -108,6 +108,7 @@ static int handle_line(char *line, struct merge_parents *merge_parents)
struct origin_data *origin_data;
char *src;
const char *origin, *tag_name;
char *to_free = NULL;
struct src_data *src_data;
struct string_list_item *item;
int pulling_head = 0;
@ -183,12 +184,13 @@ static int handle_line(char *line, struct merge_parents *merge_parents)
if (!strcmp(".", src) || !strcmp(src, origin)) {
int len = strlen(origin);
if (origin[0] == '\'' && origin[len - 1] == '\'')
origin = xmemdupz(origin + 1, len - 2);
origin = to_free = xmemdupz(origin + 1, len - 2);
} else
origin = xstrfmt("%s of %s", origin, src);
origin = to_free = xstrfmt("%s of %s", origin, src);
if (strcmp(".", src))
origin_data->is_local_branch = 0;
string_list_append(&origins, origin)->util = origin_data;
free(to_free);
return 0;
}


@ -147,7 +147,7 @@ cmd_add()
if ! git submodule--helper config --check-writeable >/dev/null 2>&1
then
die "$(eval_gettext "please make sure that the .gitmodules file is in the working tree")"
die "fatal: $(eval_gettext "please make sure that the .gitmodules file is in the working tree")"
fi
if test -n "$reference_path"
@ -176,7 +176,7 @@ cmd_add()
case "$repo" in
./*|../*)
test -z "$wt_prefix" ||
die "$(gettext "Relative path can only be used from the toplevel of the working tree")"
die "fatal: $(gettext "Relative path can only be used from the toplevel of the working tree")"
# dereference source url relative to parent's url
realrepo=$(git submodule--helper resolve-relative-url "$repo") || exit
@ -186,7 +186,7 @@ cmd_add()
realrepo=$repo
;;
*)
die "$(eval_gettext "repo URL: '\$repo' must be absolute or begin with ./|../")"
die "fatal: $(eval_gettext "repo URL: '\$repo' must be absolute or begin with ./|../")"
;;
esac
@ -205,17 +205,17 @@ cmd_add()
if test -z "$force"
then
git ls-files --error-unmatch "$sm_path" > /dev/null 2>&1 &&
die "$(eval_gettext "'\$sm_path' already exists in the index")"
die "fatal: $(eval_gettext "'\$sm_path' already exists in the index")"
else
git ls-files -s "$sm_path" | sane_grep -v "^160000" > /dev/null 2>&1 &&
die "$(eval_gettext "'\$sm_path' already exists in the index and is not a submodule")"
die "fatal: $(eval_gettext "'\$sm_path' already exists in the index and is not a submodule")"
fi
if test -d "$sm_path" &&
test -z $(git -C "$sm_path" rev-parse --show-cdup 2>/dev/null)
then
git -C "$sm_path" rev-parse --verify -q HEAD >/dev/null ||
die "$(eval_gettext "'\$sm_path' does not have a commit checked out")"
die "fatal: $(eval_gettext "'\$sm_path' does not have a commit checked out")"
fi
if test -z "$force"
@ -238,50 +238,14 @@ cmd_add()
if ! git submodule--helper check-name "$sm_name"
then
die "$(eval_gettext "'$sm_name' is not a valid submodule name")"
die "fatal: $(eval_gettext "'$sm_name' is not a valid submodule name")"
fi
# perhaps the path exists and is already a git repo, else clone it
if test -e "$sm_path"
then
if test -d "$sm_path"/.git || test -f "$sm_path"/.git
then
eval_gettextln "Adding existing repo at '\$sm_path' to the index"
else
die "$(eval_gettext "'\$sm_path' already exists and is not a valid git repo")"
fi
else
if test -d ".git/modules/$sm_name"
then
if test -z "$force"
then
eval_gettextln >&2 "A git directory for '\$sm_name' is found locally with remote(s):"
GIT_DIR=".git/modules/$sm_name" GIT_WORK_TREE=. git remote -v | grep '(fetch)' | sed -e s,^," ", -e s,' (fetch)',, >&2
die "$(eval_gettextln "\
If you want to reuse this local git directory instead of cloning again from
\$realrepo
use the '--force' option. If the local git directory is not the correct repo
or you are unsure what this means choose another name with the '--name' option.")"
else
eval_gettextln "Reactivating local git directory for submodule '\$sm_name'."
fi
fi
git submodule--helper clone ${GIT_QUIET:+--quiet} ${progress:+"--progress"} --prefix "$wt_prefix" --path "$sm_path" --name "$sm_name" --url "$realrepo" ${reference:+"$reference"} ${dissociate:+"--dissociate"} ${depth:+"$depth"} || exit
(
sanitize_submodule_env
cd "$sm_path" &&
# ash fails to wordsplit ${branch:+-b "$branch"...}
case "$branch" in
'') git checkout -f -q ;;
?*) git checkout -f -q -B "$branch" "origin/$branch" ;;
esac
) || die "$(eval_gettext "Unable to checkout submodule '\$sm_path'")"
fi
git submodule--helper add-clone ${GIT_QUIET:+--quiet} ${force:+"--force"} ${progress:+"--progress"} ${branch:+--branch "$branch"} --prefix "$wt_prefix" --path "$sm_path" --name "$sm_name" --url "$realrepo" ${reference:+"$reference"} ${dissociate:+"--dissociate"} ${depth:+"$depth"} || exit
git config submodule."$sm_name".url "$realrepo"
git add --no-warn-embedded-repo $force "$sm_path" ||
die "$(eval_gettext "Failed to add submodule '\$sm_path'")"
die "fatal: $(eval_gettext "Failed to add submodule '\$sm_path'")"
git submodule--helper config submodule."$sm_name".path "$sm_path" &&
git submodule--helper config submodule."$sm_name".url "$repo" &&
@ -290,7 +254,7 @@ or you are unsure what this means choose another name with the '--name' option."
git submodule--helper config submodule."$sm_name".branch "$branch"
fi &&
git add --force .gitmodules ||
die "$(eval_gettext "Failed to register submodule '\$sm_path'")"
die "fatal: $(eval_gettext "Failed to register submodule '\$sm_path'")"
# NEEDSWORK: In a multi-working-tree world, this needs to be
# set in the per-worktree config.
@ -565,7 +529,7 @@ cmd_update()
else
subsha1=$(sanitize_submodule_env; cd "$sm_path" &&
git rev-parse --verify HEAD) ||
die "$(eval_gettext "Unable to find current revision in submodule path '\$displaypath'")"
die "fatal: $(eval_gettext "Unable to find current revision in submodule path '\$displaypath'")"
fi
if test -n "$remote"
@ -575,12 +539,12 @@ cmd_update()
then
# Fetch remote before determining tracking $sha1
fetch_in_submodule "$sm_path" $depth ||
die "$(eval_gettext "Unable to fetch in submodule path '\$sm_path'")"
die "fatal: $(eval_gettext "Unable to fetch in submodule path '\$sm_path'")"
fi
remote_name=$(sanitize_submodule_env; cd "$sm_path" && git submodule--helper print-default-remote)
sha1=$(sanitize_submodule_env; cd "$sm_path" &&
git rev-parse --verify "${remote_name}/${branch}") ||
die "$(eval_gettext "Unable to find current \${remote_name}/\${branch} revision in submodule path '\$sm_path'")"
die "fatal: $(eval_gettext "Unable to find current \${remote_name}/\${branch} revision in submodule path '\$sm_path'")"
fi
if test "$subsha1" != "$sha1" || test -n "$force"
@ -604,36 +568,36 @@ cmd_update()
# not be reachable from any of the refs
is_tip_reachable "$sm_path" "$sha1" ||
fetch_in_submodule "$sm_path" "$depth" "$sha1" ||
die "$(eval_gettext "Fetched in submodule path '\$displaypath', but it did not contain \$sha1. Direct fetching of that commit failed.")"
die "fatal: $(eval_gettext "Fetched in submodule path '\$displaypath', but it did not contain \$sha1. Direct fetching of that commit failed.")"
fi
must_die_on_failure=
case "$update_module" in
checkout)
command="git checkout $subforce -q"
die_msg="$(eval_gettext "Unable to checkout '\$sha1' in submodule path '\$displaypath'")"
die_msg="fatal: $(eval_gettext "Unable to checkout '\$sha1' in submodule path '\$displaypath'")"
say_msg="$(eval_gettext "Submodule path '\$displaypath': checked out '\$sha1'")"
;;
rebase)
command="git rebase ${GIT_QUIET:+--quiet}"
die_msg="$(eval_gettext "Unable to rebase '\$sha1' in submodule path '\$displaypath'")"
die_msg="fatal: $(eval_gettext "Unable to rebase '\$sha1' in submodule path '\$displaypath'")"
say_msg="$(eval_gettext "Submodule path '\$displaypath': rebased into '\$sha1'")"
must_die_on_failure=yes
;;
merge)
command="git merge ${GIT_QUIET:+--quiet}"
die_msg="$(eval_gettext "Unable to merge '\$sha1' in submodule path '\$displaypath'")"
die_msg="fatal: $(eval_gettext "Unable to merge '\$sha1' in submodule path '\$displaypath'")"
say_msg="$(eval_gettext "Submodule path '\$displaypath': merged in '\$sha1'")"
must_die_on_failure=yes
;;
!*)
command="${update_module#!}"
die_msg="$(eval_gettext "Execution of '\$command \$sha1' failed in submodule path '\$displaypath'")"
die_msg="fatal: $(eval_gettext "Execution of '\$command \$sha1' failed in submodule path '\$displaypath'")"
say_msg="$(eval_gettext "Submodule path '\$displaypath': '\$command \$sha1'")"
must_die_on_failure=yes
;;
*)
die "$(eval_gettext "Invalid update mode '$update_module' for submodule path '$path'")"
die "fatal: $(eval_gettext "Invalid update mode '$update_module' for submodule path '$path'")"
esac
if (sanitize_submodule_env; cd "$sm_path" && $command "$sha1")
@ -660,7 +624,7 @@ cmd_update()
res=$?
if test $res -gt 0
then
die_msg="$(eval_gettext "Failed to recurse into submodule path '\$displaypath'")"
die_msg="fatal: $(eval_gettext "Failed to recurse into submodule path '\$displaypath'")"
if test $res -ne 2
then
err="${err};$die_msg"


@ -62,6 +62,53 @@ struct traversal_callback_data {
struct name_entry names[3];
};
struct deferred_traversal_data {
/*
* possible_trivial_merges: directories to be explored only when needed
*
* possible_trivial_merges is a map of directory names to
* dir_rename_mask. When we detect that a directory is unchanged on
* one side, we can sometimes resolve the directory without recursing
* into it. Renames are the only things that can prevent such an
* optimization. However, for rename sources:
* - If no parent directory needed directory rename detection, then
* no path under such a directory can be a relevant_source.
* and for rename destinations:
* - If no cached rename has a target path under the directory AND
* - If there are no unpaired relevant_sources elsewhere in the
* repository
* then we don't need any path under this directory for a rename
* destination. The only way to know the last item above is to defer
* handling such directories until the end of collect_merge_info(),
* in handle_deferred_entries().
*
* For each we store dir_rename_mask, since that's the only bit of
* information we need, other than the path, to resume the recursive
* traversal.
*/
struct strintmap possible_trivial_merges;
/*
* trivial_merges_okay: if trivial directory merges are okay
*
* See possible_trivial_merges above. The "no unpaired
* relevant_sources elsewhere in the repository" is a single boolean
* per merge side, which we store here. Note that while 0 means no,
* 1 only means "maybe" rather than "yes"; we optimistically set it
* to 1 initially and only clear when we determine it is unsafe to
* do trivial directory merges.
*/
unsigned trivial_merges_okay;
/*
* target_dirs: ancestor directories of rename targets
*
* target_dirs contains all directory names that are an ancestor of
* any rename destination.
*/
struct strset target_dirs;
};
struct rename_info {
/*
* All variables that are arrays of size 3 correspond to data tracked
@ -119,6 +166,8 @@ struct rename_info {
*/
struct strintmap relevant_sources[3];
struct deferred_traversal_data deferred[3];
/*
* dir_rename_mask:
* 0: optimization removing unmodified potential rename source okay
@ -164,6 +213,7 @@ struct rename_info {
* MERGE_SIDE2: cached data from side2 can be reused
* MERGE_SIDE1: cached data from side1 can be reused
* 0: no cached data can be reused
* -1: See redo_after_renames; both sides can be reused.
*/
int cached_pairs_valid_side;
@ -209,6 +259,28 @@ struct rename_info {
*/
struct strset cached_irrelevant[3];
/*
* redo_after_renames: optimization flag for "restarting" the merge
*
* Sometimes it pays to detect renames, cache them, and then
* restart the merge operation from the beginning. The reason for
* this is that when we know where all the renames are, we know
* whether a certain directory has any paths under it affected --
* and if a directory is not affected then it permits us to do
* trivial tree merging in more cases. Doing trivial tree merging
* prevents the need to run process_entry() on every path
* underneath trees that can be trivially merged, and
* process_entry() is more expensive than collect_merge_info() --
* plus, the second collect_merge_info() will be much faster since
* it doesn't have to recurse into the relevant trees.
*
* Values for this flag:
* 0 = don't bother, not worth it (or conditions not yet checked)
* 1 = conditions for optimization met, optimization worthwhile
* 2 = we already did it (don't restart merge yet again)
*/
unsigned redo_after_renames;
/*
* needed_limit: value needed for inexact rename detection to run
*
@ -492,7 +564,8 @@ static void clear_or_reinit_internal_opts(struct merge_options_internal *opti,
strintmap_func(&renames->relevant_sources[i]);
if (!reinitialize)
assert(renames->cached_pairs_valid_side == 0);
if (i != renames->cached_pairs_valid_side) {
if (i != renames->cached_pairs_valid_side &&
-1 != renames->cached_pairs_valid_side) {
strset_func(&renames->cached_target_names[i]);
strmap_func(&renames->cached_pairs[i], 1);
strset_func(&renames->cached_irrelevant[i]);
@ -501,6 +574,11 @@ static void clear_or_reinit_internal_opts(struct merge_options_internal *opti,
strmap_clear(&renames->dir_rename_count[i], 1);
}
}
for (i = MERGE_SIDE1; i <= MERGE_SIDE2; ++i) {
strintmap_func(&renames->deferred[i].possible_trivial_merges);
strset_func(&renames->deferred[i].target_dirs);
renames->deferred[i].trivial_merges_okay = 1; /* 1 == maybe */
}
renames->cached_pairs_valid_side = 0;
renames->dir_rename_mask = 0;
@ -1019,20 +1097,67 @@ static int collect_merge_info_callback(int n,
if (side1_matches_mbase && side2_matches_mbase) {
/* mbase, side1, & side2 all match; use mbase as resolution */
setup_path_info(opt, &pi, dirname, info->pathlen, fullpath,
names, names+0, mbase_null, 0,
names, names+0, mbase_null, 0 /* df_conflict */,
filemask, dirmask, 1 /* resolved */);
return mask;
}
/*
* If the sides match, and all three paths are present and are
* files, then we can take either as the resolution. We can't do
* this with trees, because there may be rename sources from the
* merge_base.
*/
if (sides_match && filemask == 0x07) {
/* use side1 (== side2) version as resolution */
setup_path_info(opt, &pi, dirname, info->pathlen, fullpath,
names, names+1, side1_null, 0,
filemask, dirmask, 1);
return mask;
}
/*
* Gather additional information used in rename detection.
* If side1 matches mbase and all three paths are present and are
* files, then we can use side2 as the resolution. We cannot
* necessarily do so this for trees, because there may be rename
* destinations within side2.
*/
if (side1_matches_mbase && filemask == 0x07) {
/* use side2 version as resolution */
setup_path_info(opt, &pi, dirname, info->pathlen, fullpath,
names, names+2, side2_null, 0,
filemask, dirmask, 1);
return mask;
}
/* Similar to above but swapping sides 1 and 2 */
if (side2_matches_mbase && filemask == 0x07) {
/* use side1 version as resolution */
setup_path_info(opt, &pi, dirname, info->pathlen, fullpath,
names, names+1, side1_null, 0,
filemask, dirmask, 1);
return mask;
}
/*
* Sometimes we can tell that a source path need not be included in
* rename detection -- namely, whenever either
* side1_matches_mbase && side2_null
* or
* side2_matches_mbase && side1_null
* However, we call collect_rename_info() even in those cases,
* because exact renames are cheap and would let us remove both a
* source and destination path. We'll cull the unneeded sources
* later.
*/
collect_rename_info(opt, names, dirname, fullpath,
filemask, dirmask, match_mask);
/*
* Record information about the path so we can resolve later in
* process_entries.
* None of the special cases above matched, so we have a
* provisional conflict. (Rename detection might allow us to
* unconflict some more cases, but that comes later so all we can
* do now is record the different non-null file hashes.)
*/
setup_path_info(opt, &pi, dirname, info->pathlen, fullpath,
names, NULL, 0, df_conflict, filemask, dirmask, 0);
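
The fast paths above amount to a small decision table. As a purely illustrative condensation (the enum and helper below are invented and are not part of merge-ort.c), the order of the checks is: both sides matching the base wins outright, and every later shortcut additionally requires all three paths to be files (filemask == 0x07).

/* Hypothetical condensation of the fast paths above; not real merge-ort code. */
enum fast_path { USE_MBASE, USE_SIDE1, USE_SIDE2, NO_FAST_PATH };

static enum fast_path pick_fast_path(unsigned filemask,
                                     int side1_matches_mbase,
                                     int side2_matches_mbase,
                                     int sides_match)
{
        if (side1_matches_mbase && side2_matches_mbase)
                return USE_MBASE;       /* nothing changed relative to the base */
        if (filemask != 0x07)
                return NO_FAST_PATH;    /* a path is missing or a tree is involved */
        if (sides_match)
                return USE_SIDE1;       /* identical to side2; either one works */
        if (side1_matches_mbase)
                return USE_SIDE2;       /* only side2 modified the file */
        if (side2_matches_mbase)
                return USE_SIDE1;       /* only side1 modified the file */
        return NO_FAST_PATH;            /* fall through to rename handling */
}
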
@ -1047,8 +1172,36 @@ static int collect_merge_info_callback(int n,
struct tree_desc t[3];
void *buf[3] = {NULL, NULL, NULL};
const char *original_dir_name;
int i, ret;
int i, ret, side;
/*
* Check whether we can avoid recursing due to one side
* matching the merge base. The side that does NOT match is
* the one that might have a rename destination we need.
*/
assert(!side1_matches_mbase || !side2_matches_mbase);
side = side1_matches_mbase ? MERGE_SIDE2 :
side2_matches_mbase ? MERGE_SIDE1 : MERGE_BASE;
if (filemask == 0 && (dirmask == 2 || dirmask == 4)) {
/*
* Also defer recursing into new directories; set up a
* few variables to let us do so.
*/
ci->match_mask = (7 - dirmask);
side = dirmask / 2;
}
if (renames->dir_rename_mask != 0x07 &&
side != MERGE_BASE &&
renames->deferred[side].trivial_merges_okay &&
!strset_contains(&renames->deferred[side].target_dirs,
pi.string)) {
strintmap_set(&renames->deferred[side].possible_trivial_merges,
pi.string, renames->dir_rename_mask);
renames->dir_rename_mask = prev_dir_rename_mask;
return mask;
}
/* We need to recurse */
ci->match_mask &= filemask;
newinfo = *info;
newinfo.prev = info;
@ -1102,6 +1255,192 @@ static int collect_merge_info_callback(int n,
return mask;
}
static void resolve_trivial_directory_merge(struct conflict_info *ci, int side)
{
VERIFY_CI(ci);
assert((side == 1 && ci->match_mask == 5) ||
(side == 2 && ci->match_mask == 3));
oidcpy(&ci->merged.result.oid, &ci->stages[side].oid);
ci->merged.result.mode = ci->stages[side].mode;
ci->merged.is_null = is_null_oid(&ci->stages[side].oid);
ci->match_mask = 0;
ci->merged.clean = 1; /* (ci->filemask == 0); */
}
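
The asserts above rely on merge-ort's mask convention (summarized here for reference, not introduced by this patch): filemask, dirmask and match_mask use bit 0 for the merge base, bit 1 for side 1 and bit 2 for side 2. Hence match_mask == 5 means the base and side 2 agree, so side 1 carries the only change; match_mask == 3 is the mirror case. A small illustrative sketch, not actual merge-ort code:

/* Illustrative restatement of the mask convention; not merge-ort code. */
#define MASK_MBASE 1    /* bit 0: merge base */
#define MASK_SIDE1 2    /* bit 1: side 1 */
#define MASK_SIDE2 4    /* bit 2: side 2 */

static int trivially_resolvable_side(unsigned match_mask)
{
        switch (match_mask) {
        case MASK_MBASE | MASK_SIDE2:   /* == 5: only side 1 changed */
                return 1;
        case MASK_MBASE | MASK_SIDE1:   /* == 3: only side 2 changed */
                return 2;
        default:
                return 0;               /* no trivial directory resolution */
        }
}
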
static int handle_deferred_entries(struct merge_options *opt,
struct traverse_info *info)
{
struct rename_info *renames = &opt->priv->renames;
struct hashmap_iter iter;
struct strmap_entry *entry;
int side, ret = 0;
int path_count_before, path_count_after = 0;
path_count_before = strmap_get_size(&opt->priv->paths);
for (side = MERGE_SIDE1; side <= MERGE_SIDE2; side++) {
unsigned optimization_okay = 1;
struct strintmap copy;
/* Loop over the set of paths we need to know rename info for */
strset_for_each_entry(&renames->relevant_sources[side],
&iter, entry) {
char *rename_target, *dir, *dir_marker;
struct strmap_entry *e;
/*
* If we don't know delete/rename info for this path,
* then we need to recurse into all trees to get all
* adds to make sure we have it.
*/
if (strset_contains(&renames->cached_irrelevant[side],
entry->key))
continue;
e = strmap_get_entry(&renames->cached_pairs[side],
entry->key);
if (!e) {
optimization_okay = 0;
break;
}
/* If this is a delete, we have enough info already */
rename_target = e->value;
if (!rename_target)
continue;
/* If we already walked the rename target, we're good */
if (strmap_contains(&opt->priv->paths, rename_target))
continue;
/*
* Otherwise, we need to get a list of directories that
* will need to be recursed into to get this
* rename_target.
*/
dir = xstrdup(rename_target);
while ((dir_marker = strrchr(dir, '/'))) {
*dir_marker = '\0';
if (strset_contains(&renames->deferred[side].target_dirs,
dir))
break;
strset_add(&renames->deferred[side].target_dirs,
dir);
}
free(dir);
}
renames->deferred[side].trivial_merges_okay = optimization_okay;
/*
* We need to recurse into any directories in
* possible_trivial_merges[side] found in target_dirs[side].
* But when we recurse, we may need to queue up some of the
* subdirectories for possible_trivial_merges[side]. Since
* we can't safely iterate through a hashmap while also adding
* entries, move the entries into 'copy', iterate over 'copy',
* and then we'll also iterate anything added into
* possible_trivial_merges[side] once this loop is done.
*/
copy = renames->deferred[side].possible_trivial_merges;
strintmap_init_with_options(&renames->deferred[side].possible_trivial_merges,
0,
NULL,
0);
strintmap_for_each_entry(&copy, &iter, entry) {
const char *path = entry->key;
unsigned dir_rename_mask = (intptr_t)entry->value;
struct conflict_info *ci;
unsigned dirmask;
struct tree_desc t[3];
void *buf[3] = {NULL,};
int i;
ci = strmap_get(&opt->priv->paths, path);
VERIFY_CI(ci);
dirmask = ci->dirmask;
if (optimization_okay &&
!strset_contains(&renames->deferred[side].target_dirs,
path)) {
resolve_trivial_directory_merge(ci, side);
continue;
}
info->name = path;
info->namelen = strlen(path);
info->pathlen = info->namelen + 1;
for (i = 0; i < 3; i++, dirmask >>= 1) {
if (i == 1 && ci->match_mask == 3)
t[1] = t[0];
else if (i == 2 && ci->match_mask == 5)
t[2] = t[0];
else if (i == 2 && ci->match_mask == 6)
t[2] = t[1];
else {
const struct object_id *oid = NULL;
if (dirmask & 1)
oid = &ci->stages[i].oid;
buf[i] = fill_tree_descriptor(opt->repo,
t+i, oid);
}
}
ci->match_mask &= ci->filemask;
opt->priv->current_dir_name = path;
renames->dir_rename_mask = dir_rename_mask;
if (renames->dir_rename_mask == 0 ||
renames->dir_rename_mask == 0x07)
ret = traverse_trees(NULL, 3, t, info);
else
ret = traverse_trees_wrapper(NULL, 3, t, info);
for (i = MERGE_BASE; i <= MERGE_SIDE2; i++)
free(buf[i]);
if (ret < 0)
return ret;
}
strintmap_clear(&copy);
strintmap_for_each_entry(&renames->deferred[side].possible_trivial_merges,
&iter, entry) {
const char *path = entry->key;
struct conflict_info *ci;
ci = strmap_get(&opt->priv->paths, path);
VERIFY_CI(ci);
assert(renames->deferred[side].trivial_merges_okay &&
!strset_contains(&renames->deferred[side].target_dirs,
path));
resolve_trivial_directory_merge(ci, side);
}
if (!optimization_okay || path_count_after)
path_count_after = strmap_get_size(&opt->priv->paths);
}
if (path_count_after) {
/*
* The choice of wanted_factor here does not affect
* correctness, only performance. When the
* path_count_after / path_count_before
* ratio is high, redoing after renames is a big
* performance boost. I suspect that redoing is a wash
* somewhere near a value of 2, and below that redoing will
* slow things down. I applied a fudge factor and picked
* 3; see the commit message that introduced this for the
* back-of-the-envelope calculations behind this ratio (a
* short illustrative sketch also follows this function).
*/
const int wanted_factor = 3;
/* We should only redo collect_merge_info one time */
assert(renames->redo_after_renames == 0);
if (path_count_after / path_count_before >= wanted_factor) {
renames->redo_after_renames = 1;
renames->cached_pairs_valid_side = -1;
}
} else if (renames->redo_after_renames == 2)
renames->redo_after_renames = 0;
return ret;
}
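
The sketch referred to in the wanted_factor comment above: a hypothetical, standalone restatement of the ratio check, where only the factor of 3 and the integer-division comparison come from the real code and the example numbers are invented.

/* Hypothetical restatement of the ratio check; the numbers below are invented. */
static int worth_redoing_after_renames(int path_count_before, int path_count_after)
{
        const int wanted_factor = 3;    /* same fudge factor as above */

        return path_count_before > 0 && /* guard added only for this sketch */
               path_count_after / path_count_before >= wanted_factor;
}

/*
 * Example: 1000 paths before rename detection and 4500 after gives a
 * ratio of 4, so the merge restarts with cached renames; 1800 after
 * gives a ratio of 1 (integer division), so it does not.
 */
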
static int collect_merge_info(struct merge_options *opt,
struct tree *merge_base,
struct tree *side1,
@ -1127,6 +1466,8 @@ static int collect_merge_info(struct merge_options *opt,
trace2_region_enter("merge", "traverse_trees", opt->repo);
ret = traverse_trees(NULL, 3, t, &info);
if (ret == 0)
ret = handle_deferred_entries(opt, &info);
trace2_region_leave("merge", "traverse_trees", opt->repo);
return ret;
@ -2539,8 +2880,8 @@ static int compare_pairs(const void *a_, const void *b_)
}
/* Call diffcore_rename() to update deleted/added pairs into rename pairs */
static void detect_regular_renames(struct merge_options *opt,
unsigned side_index)
static int detect_regular_renames(struct merge_options *opt,
unsigned side_index)
{
struct diff_options diff_opts;
struct rename_info *renames = &opt->priv->renames;
@ -2553,7 +2894,7 @@ static void detect_regular_renames(struct merge_options *opt,
* side had directory renames.
*/
resolve_diffpair_statuses(&renames->pairs[side_index]);
return;
return 0;
}
partial_clear_dir_rename_count(&renames->dir_rename_count[side_index]);
@ -2579,6 +2920,8 @@ static void detect_regular_renames(struct merge_options *opt,
trace2_region_leave("diff", "diffcore_rename", opt->repo);
resolve_diffpair_statuses(&diff_queued_diff);
if (diff_opts.needed_rename_limit > 0)
renames->redo_after_renames = 0;
if (diff_opts.needed_rename_limit > renames->needed_limit)
renames->needed_limit = diff_opts.needed_rename_limit;
@ -2588,6 +2931,8 @@ static void detect_regular_renames(struct merge_options *opt,
diff_queued_diff.nr = 0;
diff_queued_diff.queue = NULL;
diff_flush(&diff_opts);
return 1;
}
/*
@ -2677,14 +3022,32 @@ static int detect_and_process_renames(struct merge_options *opt,
struct diff_queue_struct combined;
struct rename_info *renames = &opt->priv->renames;
int need_dir_renames, s, clean = 1;
unsigned detection_run = 0;
memset(&combined, 0, sizeof(combined));
if (!possible_renames(renames))
goto cleanup;
trace2_region_enter("merge", "regular renames", opt->repo);
detect_regular_renames(opt, MERGE_SIDE1);
detect_regular_renames(opt, MERGE_SIDE2);
detection_run |= detect_regular_renames(opt, MERGE_SIDE1);
detection_run |= detect_regular_renames(opt, MERGE_SIDE2);
if (renames->redo_after_renames && detection_run) {
int i, side;
struct diff_filepair *p;
/* Cache the renames we found */
for (side = MERGE_SIDE1; side <= MERGE_SIDE2; side++) {
for (i = 0; i < renames->pairs[side].nr; ++i) {
p = renames->pairs[side].queue[i];
possibly_cache_new_pair(renames, p, side, NULL);
}
}
/* Restart the merge with the cached renames */
renames->redo_after_renames = 2;
trace2_region_leave("merge", "regular renames", opt->repo);
goto cleanup;
}
use_cached_pairs(opt, &renames->cached_pairs[1], &renames->pairs[1]);
use_cached_pairs(opt, &renames->cached_pairs[2], &renames->pairs[2]);
trace2_region_leave("merge", "regular renames", opt->repo);
@ -4019,6 +4382,13 @@ static void merge_start(struct merge_options *opt, struct merge_result *result)
strset_init_with_options(&renames->cached_target_names[i],
NULL, 0);
}
for (i = MERGE_SIDE1; i <= MERGE_SIDE2; i++) {
strintmap_init_with_options(&renames->deferred[i].possible_trivial_merges,
0, NULL, 0);
strset_init_with_options(&renames->deferred[i].target_dirs,
NULL, 1);
renames->deferred[i].trivial_merges_okay = 1; /* 1 == maybe */
}
/*
* Although we initialize opt->priv->paths with strdup_strings=0,
@ -4107,6 +4477,7 @@ static void merge_ort_nonrecursive_internal(struct merge_options *opt,
opt->subtree_shift);
}
redo:
trace2_region_enter("merge", "collect_merge_info", opt->repo);
if (collect_merge_info(opt, merge_base, side1, side2) != 0) {
/*
@ -4126,6 +4497,12 @@ static void merge_ort_nonrecursive_internal(struct merge_options *opt,
result->clean = detect_and_process_renames(opt, merge_base,
side1, side2);
trace2_region_leave("merge", "renames", opt->repo);
if (opt->priv->renames.redo_after_renames == 2) {
trace2_region_enter("merge", "reset_maps", opt->repo);
clear_or_reinit_internal_opts(opt->priv, 1);
trace2_region_leave("merge", "reset_maps", opt->repo);
goto redo;
}
trace2_region_enter("merge", "process_entries", opt->repo);
process_entries(opt, &working_tree_oid);


@ -61,11 +61,6 @@ static int path_hashmap_cmp(const void *cmp_data,
return strcmp(a->path, key ? key : b->path);
}
static unsigned int path_hash(const char *path)
{
return ignore_case ? strihash(path) : strhash(path);
}
/*
* For dir_rename_entry, directory names are stored as a full path from the
* toplevel of the repository and do not include a trailing '/'. Also:
@ -463,7 +458,7 @@ static int save_files_dirs(const struct object_id *oid,
strbuf_addstr(base, path);
FLEX_ALLOC_MEM(entry, path, base->buf, base->len);
hashmap_entry_init(&entry->e, path_hash(entry->path));
hashmap_entry_init(&entry->e, fspathhash(entry->path));
hashmap_add(&opt->priv->current_file_dir_set, &entry->e);
strbuf_setlen(base, baselen);
@ -737,14 +732,14 @@ static char *unique_path(struct merge_options *opt,
base_len = newpath.len;
while (hashmap_get_from_hash(&opt->priv->current_file_dir_set,
path_hash(newpath.buf), newpath.buf) ||
fspathhash(newpath.buf), newpath.buf) ||
(!opt->priv->call_depth && file_exists(newpath.buf))) {
strbuf_setlen(&newpath, base_len);
strbuf_addf(&newpath, "_%d", suffix++);
}
FLEX_ALLOC_MEM(entry, path, newpath.buf, newpath.len);
hashmap_entry_init(&entry->e, path_hash(entry->path));
hashmap_entry_init(&entry->e, fspathhash(entry->path));
hashmap_add(&opt->priv->current_file_dir_set, &entry->e);
return strbuf_detach(&newpath, NULL);
}


@ -2474,7 +2474,7 @@ struct oidtree *odb_loose_cache(struct object_directory *odb,
struct strbuf buf = STRBUF_INIT;
size_t word_bits = bitsizeof(odb->loose_objects_subdir_seen[0]);
size_t word_index = subdir_nr / word_bits;
size_t mask = 1 << (subdir_nr % word_bits);
size_t mask = 1u << (subdir_nr % word_bits);
uint32_t *bitmap;
if (subdir_nr < 0 ||
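
The one-character change above matters because the shift amount, subdir_nr % word_bits, can reach 31 when the bitmap words are 32 bits wide. With a plain int literal, 1 << 31 shifts into the sign bit of a 32-bit int (undefined behaviour), and the resulting negative value would sign-extend when stored into a 64-bit size_t, setting far more bits than intended. A minimal standalone illustration, not git code:

#include <stdio.h>

int main(void)
{
        size_t bad  = 1 << 31;  /* int arithmetic: undefined; in practice often
                                   sign-extends to 0xffffffff80000000 */
        size_t good = 1u << 31; /* unsigned arithmetic: exactly 0x80000000 */

        printf("%zx\n%zx\n", bad, good);
        return 0;
}
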


@ -34,7 +34,7 @@ struct object_directory {
};
KHASH_INIT(odb_path_map, const char * /* key: odb_path */,
struct object_directory *, 1, fspathhash, fspatheq);
struct object_directory *, 1, fspathhash, fspatheq)
void prepare_alt_odb(struct repository *r);
char *compute_alternate_path(const char *path, struct strbuf *err);
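
Dropping the trailing ';' silences a pedantic-compiler warning: KHASH_INIT expands to a series of type and function definitions that already end in '}', so an extra semicolon after the macro becomes an empty declaration at file scope, which ISO C disallows. That is my reading of the motivation; the toy macro below is invented to show the effect and is not the real khash macro.

/* Invented stand-in for a macro whose expansion already ends in '}'. */
#define DEFINE_ZERO_FN(name) static int name(void) { return 0; }

DEFINE_ZERO_FN(no_semicolon_needed)     /* clean */
DEFINE_ZERO_FN(extra_semicolon);        /* the ';' is an empty file-scope
                                           declaration; -Wpedantic warns */
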


@ -6,11 +6,6 @@
#include "alloc.h"
#include "hash.h"
struct oidtree_node {
/* n.k[] is used to store "struct object_id" */
struct cb_node n;
};
struct oidtree_iter_data {
oidtree_iter fn;
void *arg;
@ -35,13 +30,13 @@ void oidtree_clear(struct oidtree *ot)
void oidtree_insert(struct oidtree *ot, const struct object_id *oid)
{
struct oidtree_node *on;
struct cb_node *on;
if (!oid->algo)
BUG("oidtree_insert requires oid->algo");
on = mem_pool_alloc(&ot->mem_pool, sizeof(*on) + sizeof(*oid));
oidcpy_with_padding((struct object_id *)on->n.k, oid);
oidcpy_with_padding((struct object_id *)on->k, oid);
/*
* n.b. Current callers won't get us duplicates, here. If a
@ -49,7 +44,7 @@ void oidtree_insert(struct oidtree *ot, const struct object_id *oid)
* that won't be freed until oidtree_clear. Currently it's not
* worth maintaining a free list
*/
cb_insert(&ot->tree, &on->n, sizeof(*oid));
cb_insert(&ot->tree, on, sizeof(*oid));
}
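
The single mem_pool_alloc() of sizeof(*on) + sizeof(*oid) works because the critbit node keeps its key inline in the flexible k[] member at the end of the struct, so the padded object_id is copied straight into on->k. A generic sketch of that inline-key pattern; the type and helper are invented and are not the real cb_node:

#include <stdlib.h>
#include <string.h>

/* Invented node type with an inline, flexibly sized key. */
struct inline_key_node {
        struct inline_key_node *child[2];
        unsigned char k[];              /* key bytes live right after the node */
};

static struct inline_key_node *make_node(const void *key, size_t keylen)
{
        struct inline_key_node *n = calloc(1, sizeof(*n) + keylen);

        if (n)
                memcpy(n->k, key, keylen);
        return n;
}
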


@ -2506,6 +2506,7 @@ int repo_index_has_changes(struct repository *repo,
opt.flags.exit_with_status = 1;
if (!sb)
opt.flags.quick = 1;
diff_setup_done(&opt);
do_diff_cache(&cmp, &opt);
diffcore_std(&opt);
for (i = 0; sb && i < diff_queued_diff.nr; i++) {
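
The added diff_setup_done() call slots into the usual sequence for git's internal diff API. The sketch below shows that shape under the assumption of git's diff.h declarations; it is a trimmed illustration rather than the real repo_index_has_changes() body, and the note that diff_setup_done() also releases setup-time allocations is my recollection of the leak being fixed, not something stated in the hunk.

/* Trimmed sketch of the call order, assuming git's internal diff.h API. */
static int index_differs_from_tree(struct repository *repo,
                                   const struct object_id *tree_oid)
{
        struct diff_options opt;

        repo_diff_setup(repo, &opt);
        opt.flags.exit_with_status = 1;
        diff_setup_done(&opt);          /* finalize options (and, per this fix,
                                           release setup-time allocations) */
        do_diff_cache(tree_oid, &opt);
        diffcore_std(&opt);
        diff_flush(&opt);               /* flushes and frees the queued diff */
        return opt.flags.has_changes != 0;
}
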


@ -2226,8 +2226,12 @@ void ref_array_clear(struct ref_array *array)
FREE_AND_NULL(array->items);
array->nr = array->alloc = 0;
for (i = 0; i < used_atom_cnt; i++)
free((char *)used_atom[i].name);
for (i = 0; i < used_atom_cnt; i++) {
struct used_atom *atom = &used_atom[i];
if (atom->atom_type == ATOM_HEAD)
free(atom->u.head);
free((char *)atom->name);
}
FREE_AND_NULL(used_atom);
used_atom_cnt = 0;


@ -21,7 +21,7 @@ int reset_head(struct repository *r, struct object_id *oid, const char *action,
struct object_id head_oid;
struct tree_desc desc[2] = { { NULL }, { NULL } };
struct lock_file lock = LOCK_INIT;
struct unpack_trees_options unpack_tree_opts;
struct unpack_trees_options unpack_tree_opts = { 0 };
struct tree *tree;
const char *reflog_action;
struct strbuf msg = STRBUF_INIT;
@ -49,7 +49,6 @@ int reset_head(struct repository *r, struct object_id *oid, const char *action,
if (refs_only)
goto reset_head_refs;
memset(&unpack_tree_opts, 0, sizeof(unpack_tree_opts));
setup_unpack_trees_porcelain(&unpack_tree_opts, action);
unpack_tree_opts.head_idx = 1;
unpack_tree_opts.src_index = r->index;
@ -134,6 +133,7 @@ int reset_head(struct repository *r, struct object_id *oid, const char *action,
leave_reset_head:
strbuf_release(&msg);
rollback_lock_file(&lock);
clear_unpack_trees_porcelain(&unpack_tree_opts);
while (nr)
free((void *)desc[--nr].buffer);
return ret;
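
Taken together, these reset.c hunks zero-initialize unpack_tree_opts at its declaration (so the refs-only path that jumps past the old memset() still hands a valid structure to the cleanup code) and run clear_unpack_trees_porcelain() on the shared exit path to free the messages that setup_unpack_trees_porcelain() allocates. A condensed sketch of that pairing, assuming git's unpack-trees.h API and with error handling trimmed:

/* Condensed sketch of the setup/teardown pairing; not the full reset_head(). */
static int checkout_trees_sketch(struct repository *r,
                                 struct tree_desc *desc, int nr)
{
        struct unpack_trees_options opts = { 0 };       /* zero-init as in the hunk above */
        int ret;

        setup_unpack_trees_porcelain(&opts, "reset");   /* allocates message strings */
        opts.head_idx = 1;
        opts.src_index = r->index;
        opts.dst_index = r->index;
        opts.update = 1;

        ret = unpack_trees(nr, desc, &opts);

        clear_unpack_trees_porcelain(&opts);            /* must run on every exit path */
        return ret;
}
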


@ -170,6 +170,8 @@ int convert_to_sparse(struct index_state *istate)
if (index_has_unmerged_entries(istate))
return 0;
/* Clear and recompute the cache-tree */
cache_tree_free(&istate->cache_tree);
if (cache_tree_update(istate, 0)) {
warning(_("unable to update cache-tree, staying full"));
return -1;

t/helper/test-getcwd.c Normal file

@ -0,0 +1,26 @@
#include "test-tool.h"
#include "git-compat-util.h"
#include "parse-options.h"
static const char *getcwd_usage[] = {
"test-tool getcwd",
NULL
};
int cmd__getcwd(int argc, const char **argv)
{
struct option options[] = {
OPT_END()
};
char *cwd;
argc = parse_options(argc, argv, "test-tools", options, getcwd_usage, 0);
if (argc > 0)
usage_with_options(getcwd_usage, options);
cwd = xgetcwd();
puts(cwd);
free(cwd);
return 0;
}


@ -33,6 +33,7 @@ static struct test_cmd cmds[] = {
{ "fast-rebase", cmd__fast_rebase },
{ "genrandom", cmd__genrandom },
{ "genzeros", cmd__genzeros },
{ "getcwd", cmd__getcwd },
{ "hashmap", cmd__hashmap },
{ "hash-speed", cmd__hash_speed },
{ "index-version", cmd__index_version },


@ -23,6 +23,7 @@ int cmd__example_decorate(int argc, const char **argv);
int cmd__fast_rebase(int argc, const char **argv);
int cmd__genrandom(int argc, const char **argv);
int cmd__genzeros(int argc, const char **argv);
int cmd__getcwd(int argc, const char **argv);
int cmd__hashmap(int argc, const char **argv);
int cmd__hash_speed(int argc, const char **argv);
int cmd__index_version(int argc, const char **argv);


@ -6,7 +6,7 @@ test_description="test performance of Git operations using the index"
test_perf_default_repo
SPARSE_CONE=f2/f4/f1
SPARSE_CONE=f2/f4
test_expect_success 'setup repo and indexes' '
git reset --hard HEAD &&
@ -27,7 +27,7 @@ test_expect_success 'setup repo and indexes' '
OLD_COMMIT=$(git rev-parse HEAD) &&
OLD_TREE=$(git rev-parse HEAD^{tree}) &&
for i in $(test_seq 1 4)
for i in $(test_seq 1 3)
do
cat >in <<-EOF &&
100755 blob $BLOB a
@ -43,45 +43,57 @@ test_expect_success 'setup repo and indexes' '
done &&
git sparse-checkout init --cone &&
git branch -f wide $OLD_COMMIT &&
git -c core.sparseCheckoutCone=true clone --branch=wide --sparse . full-index-v3 &&
git sparse-checkout set $SPARSE_CONE &&
git checkout -b wide $OLD_COMMIT &&
for l2 in f1 f2 f3 f4
do
echo more bogus >>$SPARSE_CONE/$l2/a &&
git commit -a -m "edit $SPARSE_CONE/$l2/a" || return 1
done &&
git -c core.sparseCheckoutCone=true clone --branch=wide --sparse . full-v3 &&
(
cd full-index-v3 &&
cd full-v3 &&
git sparse-checkout init --cone &&
git sparse-checkout set $SPARSE_CONE &&
git config index.version 3 &&
git update-index --index-version=3
git update-index --index-version=3 &&
git checkout HEAD~4
) &&
git -c core.sparseCheckoutCone=true clone --branch=wide --sparse . full-index-v4 &&
git -c core.sparseCheckoutCone=true clone --branch=wide --sparse . full-v4 &&
(
cd full-index-v4 &&
cd full-v4 &&
git sparse-checkout init --cone &&
git sparse-checkout set $SPARSE_CONE &&
git config index.version 4 &&
git update-index --index-version=4
git update-index --index-version=4 &&
git checkout HEAD~4
) &&
git -c core.sparseCheckoutCone=true clone --branch=wide --sparse . sparse-index-v3 &&
git -c core.sparseCheckoutCone=true clone --branch=wide --sparse . sparse-v3 &&
(
cd sparse-index-v3 &&
cd sparse-v3 &&
git sparse-checkout init --cone --sparse-index &&
git sparse-checkout set $SPARSE_CONE &&
git config index.version 3 &&
git update-index --index-version=3
git update-index --index-version=3 &&
git checkout HEAD~4
) &&
git -c core.sparseCheckoutCone=true clone --branch=wide --sparse . sparse-index-v4 &&
git -c core.sparseCheckoutCone=true clone --branch=wide --sparse . sparse-v4 &&
(
cd sparse-index-v4 &&
cd sparse-v4 &&
git sparse-checkout init --cone --sparse-index &&
git sparse-checkout set $SPARSE_CONE &&
git config index.version 4 &&
git update-index --index-version=4
git update-index --index-version=4 &&
git checkout HEAD~4
)
'
test_perf_on_all () {
command="$@"
for repo in full-index-v3 full-index-v4 \
sparse-index-v3 sparse-index-v4
for repo in full-v3 full-v4 \
sparse-v3 sparse-v4
do
test_perf "$command ($repo)" "
(
@ -97,5 +109,6 @@ test_perf_on_all git status
test_perf_on_all git add -A
test_perf_on_all git add .
test_perf_on_all git commit -a -m A
test_perf_on_all git checkout -f -
test_done


@ -356,7 +356,10 @@ test_lazy_prereq GETCWD_IGNORES_PERMS '
chmod 100 $base ||
BUG "cannot prepare $base"
(cd $base/dir && /bin/pwd -P)
(
cd $base/dir &&
test-tool getcwd
)
status=$?
chmod 700 $base &&


@ -95,6 +95,25 @@ test_expect_success 'setup' '
git add . &&
git commit -m "rename deep/deeper1/... to folder1/..." &&
git checkout -b df-conflict-1 base &&
rm -rf folder1 &&
echo content >folder1 &&
git add . &&
git commit -m "dir to file" &&
git checkout -b df-conflict-2 base &&
rm -rf folder2 &&
echo content >folder2 &&
git add . &&
git commit -m "dir to file" &&
git checkout -b fd-conflict base &&
rm a &&
mkdir a &&
echo content >a/a &&
git add . &&
git commit -m "file to dir" &&
git checkout -b deepest base &&
echo "updated deepest" >deep/deeper1/deepest/a &&
git commit -a -m "update deepest" &&
@ -262,6 +281,34 @@ test_expect_success 'add, commit, checkout' '
test_all_match git checkout -
'
test_expect_success 'commit including unstaged changes' '
init_repos &&
write_script edit-file <<-\EOF &&
echo $1 >$2
EOF
run_on_all ../edit-file 1 a &&
run_on_all ../edit-file 1 deep/a &&
test_all_match git commit -m "-a" -a &&
test_all_match git status --porcelain=v2 &&
run_on_all ../edit-file 2 a &&
run_on_all ../edit-file 2 deep/a &&
test_all_match git commit -m "--include" --include deep/a &&
test_all_match git status --porcelain=v2 &&
test_all_match git commit -m "--include" --include a &&
test_all_match git status --porcelain=v2 &&
run_on_all ../edit-file 3 a &&
run_on_all ../edit-file 3 deep/a &&
test_all_match git commit -m "--amend" -a --amend &&
test_all_match git status --porcelain=v2
'
test_expect_success 'status/add: outside sparse cone' '
init_repos &&
@ -330,10 +377,16 @@ test_expect_success 'diff --staged' '
test_all_match git diff --staged
'
# NEEDSWORK: sparse-checkout behaves differently from full-checkout when
# running this test with 'df-conflict-2' after 'df-conflict-1'.
test_expect_success 'diff with renames and conflicts' '
init_repos &&
for branch in rename-out-to-out rename-out-to-in rename-in-to-out
for branch in rename-out-to-out \
rename-out-to-in \
rename-in-to-out \
df-conflict-1 \
fd-conflict
do
test_all_match git checkout rename-base &&
test_all_match git checkout $branch -- . &&
@ -346,7 +399,12 @@ test_expect_success 'diff with renames and conflicts' '
test_expect_success 'diff with directory/file conflicts' '
init_repos &&
for branch in rename-out-to-out rename-out-to-in rename-in-to-out
for branch in rename-out-to-out \
rename-out-to-in \
rename-in-to-out \
df-conflict-1 \
df-conflict-2 \
fd-conflict
do
git -C full-checkout reset --hard &&
test_sparse_match git reset --hard &&
@ -514,14 +572,33 @@ test_expect_success 'sparse-index is expanded and converted back' '
test_region index ensure_full_index trace2.txt
'
test_expect_success 'sparse-index is not expanded' '
init_repos &&
ensure_not_expanded () {
rm -f trace2.txt &&
echo >>sparse-index/untracked.txt &&
GIT_TRACE2_EVENT="$(pwd)/trace2.txt" GIT_TRACE2_EVENT_NESTING=10 \
git -C sparse-index status &&
git -C sparse-index "$@" &&
test_region ! index ensure_full_index trace2.txt
}
test_expect_success 'sparse-index is not expanded' '
init_repos &&
ensure_not_expanded status &&
ensure_not_expanded commit --allow-empty -m empty &&
echo >>sparse-index/a &&
ensure_not_expanded commit -a -m a &&
echo >>sparse-index/a &&
ensure_not_expanded commit --include a -m a &&
echo >>sparse-index/deep/deeper1/a &&
ensure_not_expanded commit --include deep/deeper1/a -m deeper &&
ensure_not_expanded checkout rename-out-to-out &&
ensure_not_expanded checkout - &&
ensure_not_expanded switch rename-out-to-out &&
ensure_not_expanded switch - &&
git -C sparse-index reset --hard &&
ensure_not_expanded checkout rename-out-to-out -- deep/deeper1 &&
git -C sparse-index reset --hard &&
ensure_not_expanded restore -s rename-out-to-out -- deep/deeper1
'
# NEEDSWORK: a sparse-checkout behaves differently from a full checkout
@ -559,4 +636,112 @@ test_expect_success 'add everything with deep new file' '
test_all_match git status --porcelain=v2
'
# NEEDSWORK: 'git checkout' behaves incorrectly in the case of
# directory/file conflicts, even without sparse-checkout. Use this
# test only as documentation of the incorrect behavior, not as a
# measure of how it _should_ behave.
test_expect_success 'checkout behaves oddly with df-conflict-1' '
init_repos &&
test_sparse_match git sparse-checkout disable &&
write_script edit-content <<-\EOF &&
echo content >>folder1/larger-content
git add folder1
EOF
run_on_all ../edit-content &&
test_all_match git status --porcelain=v2 &&
git -C sparse-checkout sparse-checkout init --cone &&
git -C sparse-index sparse-checkout init --cone --sparse-index &&
test_all_match git status --porcelain=v2 &&
# This checkout command should fail, because we have a staged
# change to folder1/larger-content, but the destination changes
# folder1 to a file.
git -C full-checkout checkout df-conflict-1 \
1>full-checkout-out \
2>full-checkout-err &&
git -C sparse-checkout checkout df-conflict-1 \
1>sparse-checkout-out \
2>sparse-checkout-err &&
git -C sparse-index checkout df-conflict-1 \
1>sparse-index-out \
2>sparse-index-err &&
# Instead, the checkout deletes the folder1 file and adds the
# folder1/larger-content file, leaving all other paths that were
# in folder1/ as deleted (without any warning).
cat >expect <<-EOF &&
D folder1
A folder1/larger-content
EOF
test_cmp expect full-checkout-out &&
test_cmp expect sparse-checkout-out &&
# The sparse-index reports no output
test_must_be_empty sparse-index-out &&
# stderr: Switched to branch df-conflict-1
test_cmp full-checkout-err sparse-checkout-err &&
test_cmp full-checkout-err sparse-index-err
'
# NEEDSWORK: 'git checkout' behaves incorrectly in the case of
# directory/file conflicts, even without sparse-checkout. Use this
# test only as documentation of the incorrect behavior, not as a
# measure of how it _should_ behave.
test_expect_success 'checkout behaves oddly with df-conflict-2' '
init_repos &&
test_sparse_match git sparse-checkout disable &&
write_script edit-content <<-\EOF &&
echo content >>folder2/larger-content
git add folder2
EOF
run_on_all ../edit-content &&
test_all_match git status --porcelain=v2 &&
git -C sparse-checkout sparse-checkout init --cone &&
git -C sparse-index sparse-checkout init --cone --sparse-index &&
test_all_match git status --porcelain=v2 &&
# This checkout command should fail, because we have a staged
# change to folder2/larger-content, but the destination changes
# folder2 to a file.
git -C full-checkout checkout df-conflict-2 \
1>full-checkout-out \
2>full-checkout-err &&
git -C sparse-checkout checkout df-conflict-2 \
1>sparse-checkout-out \
2>sparse-checkout-err &&
git -C sparse-index checkout df-conflict-2 \
1>sparse-index-out \
2>sparse-index-err &&
# The full checkout deviates from the df-conflict-1 case here!
# It drops the change to folder2/larger-content and leaves the
# folder2 path as-is on disk. The sparse-index behaves the same.
test_must_be_empty full-checkout-out &&
test_must_be_empty sparse-index-out &&
# In the sparse-checkout case, the checkout deletes the folder2
# file and adds the folder2/larger-content file, leaving all other
# paths that were in folder2/ as deleted (without any warning).
cat >expect <<-EOF &&
D folder2
A folder2/larger-content
EOF
test_cmp expect sparse-checkout-out &&
# Switched to branch df-conflict-2
test_cmp full-checkout-err sparse-checkout-err &&
test_cmp full-checkout-err sparse-index-err
'
test_done


@ -406,4 +406,14 @@ test_expect_success 'refuse to switch to branch checked out elsewhere' '
test_i18ngrep "already checked out" err
'
test_expect_success MINGW,SYMLINKS_WINDOWS 'rebase when .git/logs is a symlink' '
git checkout main &&
mv .git/logs actual_logs &&
cmd //c "mklink /D .git\logs ..\actual_logs" &&
git rebase -f HEAD^ &&
test -L .git/logs &&
rm .git/logs &&
mv actual_logs .git/logs
'
test_done


@ -455,8 +455,8 @@ diff-tree --stat --compact-summary initial mode
diff-tree -R --stat --compact-summary initial mode
EOF
test_expect_success 'log -m matches log -m -p' '
git log -m -p master >result &&
test_expect_success 'log -m matches pure log' '
git log master >result &&
process_diffs result >expected &&
git log -m >result &&
process_diffs result >actual &&


@ -4797,7 +4797,7 @@ test_setup_12f () {
)
}
test_expect_merge_algorithm failure failure '12f: Trivial directory resolve, caching, all kinds of fun' '
test_expect_merge_algorithm failure success '12f: Trivial directory resolve, caching, all kinds of fun' '
test_setup_12f &&
(
cd 12f &&


@ -51,7 +51,7 @@ test_expect_success 'submodule update aborts on missing gitmodules url' '
test_expect_success 'add aborts on repository with no commits' '
cat >expect <<-\EOF &&
'"'repo-no-commits'"' does not have a commit checked out
fatal: '"'repo-no-commits'"' does not have a commit checked out
EOF
git init repo-no-commits &&
test_must_fail git submodule add ../a ./repo-no-commits 2>actual &&
@ -196,6 +196,17 @@ test_expect_success 'submodule add to .gitignored path with --force' '
)
'
test_expect_success 'submodule add to path with tracked content fails' '
(
cd addtest &&
echo "fatal: '\''dir-tracked'\'' already exists in the index" >expect &&
mkdir dir-tracked &&
test_commit foo dir-tracked/bar &&
test_must_fail git submodule add "$submodurl" dir-tracked >actual 2>&1 &&
test_cmp expect actual
)
'
test_expect_success 'submodule add to reconfigure existing submodule with --force' '
(
cd addtest-ignore &&


@ -448,7 +448,7 @@ test_expect_success 'fsck detects command in .gitmodules' '
'
cat << EOF >expect
Execution of 'false $submodulesha1' failed in submodule path 'submodule'
fatal: Execution of 'false $submodulesha1' failed in submodule path 'submodule'
EOF
test_expect_success 'submodule update - command in .git/config catches failure' '
@ -465,7 +465,7 @@ test_expect_success 'submodule update - command in .git/config catches failure'
'
cat << EOF >expect
Execution of 'false $submodulesha1' failed in submodule path '../submodule'
fatal: Execution of 'false $submodulesha1' failed in submodule path '../submodule'
EOF
test_expect_success 'submodule update - command in .git/config catches failure -- subdirectory' '
@ -484,7 +484,7 @@ test_expect_success 'submodule update - command in .git/config catches failure -
test_expect_success 'submodule update - command run for initial population of submodule' '
cat >expect <<-EOF &&
Execution of '\''false $submodulesha1'\'' failed in submodule path '\''submodule'\''
fatal: Execution of '\''false $submodulesha1'\'' failed in submodule path '\''submodule'\''
EOF
rm -rf super/submodule &&
test_must_fail git -C super submodule update 2>actual &&
@ -493,8 +493,8 @@ test_expect_success 'submodule update - command run for initial population of su
'
cat << EOF >expect
Execution of 'false $submodulesha1' failed in submodule path '../super/submodule'
Failed to recurse into submodule path '../super'
fatal: Execution of 'false $submodulesha1' failed in submodule path '../super/submodule'
fatal: Failed to recurse into submodule path '../super'
EOF
test_expect_success 'recursive submodule update - command in .git/config catches failure -- subdirectory' '


@ -882,7 +882,7 @@ test_expect_success 'status shows detached HEAD properly after checking out non-
git clone upstream downstream &&
git -C downstream checkout @{u} &&
git -C downstream status >actual &&
test_i18ngrep "HEAD detached at [0-9a-f]\\+" actual
grep -E "HEAD detached at [0-9a-f]+" actual
'
test_expect_success 'setup status submodule summary' '


@ -122,6 +122,8 @@ test_expect_success 'setup' '
c0=$(git rev-parse HEAD) &&
cp file.1 file &&
git add file &&
cp file.1 other &&
git add other &&
test_tick &&
git commit -m "commit 1" &&
git tag c1 &&
@ -711,6 +713,15 @@ test_expect_success 'fast-forward merge with --autostash' '
test_cmp result.1-5 file
'
test_expect_success 'failed fast-forward merge with --autostash' '
git reset --hard c0 &&
git merge-file file file.orig file.5 &&
cp file.5 other &&
test_must_fail git merge --autostash c1 2>err &&
test_i18ngrep "Applied autostash." err &&
test_cmp file.5 file
'
test_expect_success 'octopus merge with --autostash' '
git reset --hard c1 &&
git merge-file file file.orig file.3 &&
@ -721,6 +732,14 @@ test_expect_success 'octopus merge with --autostash' '
test_cmp result.1-3-5-9 file
'
test_expect_success 'failed merge (exit 2) with --autostash' '
git reset --hard c1 &&
git merge-file file file.orig file.5 &&
test_must_fail git merge -s recursive --autostash c2 c3 2>err &&
test_i18ngrep "Applied autostash." err &&
test_cmp result.1-5 file
'
test_expect_success 'conflicted merge with --autostash, --abort restores stash' '
git reset --hard c3 &&
cp file.1 file &&


@ -409,6 +409,12 @@ then
verbose=t
fi
# Since bash 5.0, checkwinsize is enabled by default, which updates
# the COLUMNS variable every time a non-builtin command
# completes, even for non-interactive shells.
# Disable that since we are aiming for repeatability.
test -n "$BASH_VERSION" && shopt -u checkwinsize 2>/dev/null
# For repeatability, reset the environment to known value.
# TERM is sanitized below, after saving color control sequences.
LANG=C
@ -1545,6 +1551,12 @@ test_lazy_prereq SYMLINKS '
ln -s x y && test -h y
'
test_lazy_prereq SYMLINKS_WINDOWS '
# test whether symbolic links are enabled on Windows
test_have_prereq MINGW &&
cmd //c "mklink y x" &> /dev/null && test -h y
'
test_lazy_prereq FILEMODE '
test "$(git config --bool core.filemode)" = true
'


@ -2608,6 +2608,17 @@ int twoway_merge(const struct cache_entry * const *src,
same(current, oldtree) && !same(current, newtree)) {
/* 20 or 21 */
return merged_entry(newtree, current, o);
} else if (current && !oldtree && newtree &&
S_ISSPARSEDIR(current->ce_mode) != S_ISSPARSEDIR(newtree->ce_mode) &&
ce_stage(current) == 0) {
/*
* This case is a directory/file conflict across the sparse-index
* boundary. When we are changing from one path to another via
* 'git checkout', then we want to replace one entry with another
* via merged_entry(). If there are staged changes, then we should
* reject the merge instead.
*/
return merged_entry(newtree, current, o);
} else
return reject_merge(current, o);
}
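
Restating the new condition in words: current is the existing index entry, newtree comes from the tree being checked out, and S_ISSPARSEDIR() is true for a sparse-directory entry, i.e. an entire out-of-cone directory collapsed into one index entry whose mode is S_IFDIR. The branch fires when a stage-0 entry changes kind across that boundary, as in the t1092 scenarios above where a directory such as folder1/ becomes the file folder1, and lets merged_entry() perform the replacement instead of rejecting it as a conflict. A hypothetical wrapper naming the condition, assuming git's cache.h definitions:

/* Hypothetical wrapper naming the new condition; assumes git's cache.h. */
static int df_conflict_across_sparse_boundary(const struct cache_entry *current,
                                              const struct cache_entry *oldtree,
                                              const struct cache_entry *newtree)
{
        return current && !oldtree && newtree &&
               S_ISSPARSEDIR(current->ce_mode) != S_ISSPARSEDIR(newtree->ce_mode) &&
               ce_stage(current) == 0;  /* only unconflicted index entries */
}
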