/*
 * "Ostensibly Recursive's Twin" merge strategy, or "ort" for short. Meant
 * as a drop-in replacement for the "recursive" merge strategy, allowing one
 * to replace
 *
 *   git merge [-s recursive]
 *
 * with
 *
 *   git merge -s ort
 *
 * Note: git's parser allows the space between '-s' and its argument to be
 * missing. (Should I have backronymed "ham", "alsa", "kip", "nap", "alvo",
 * "cale", "peedy", or "ins" instead of "ort"?)
 */

#include "cache.h"
#include "merge-ort.h"

#include "alloc.h"
#include "attr.h"
#include "blob.h"
#include "cache-tree.h"
#include "commit.h"
#include "commit-reach.h"
#include "diff.h"
#include "diffcore.h"
#include "dir.h"
#include "environment.h"
#include "gettext.h"
#include "hex.h"
#include "entry.h"
#include "ll-merge.h"
#include "match-trees.h"
#include "mem-pool.h"
#include "object-name.h"
#include "object-store.h"
#include "oid-array.h"
#include "promisor-remote.h"
#include "revision.h"
#include "strmap.h"
#include "submodule-config.h"
#include "submodule.h"
#include "trace2.h"
#include "tree.h"
#include "unpack-trees.h"
#include "xdiff-interface.h"

/*
 * We have many arrays of size 3. Whenever we have such an array, the
 * indices refer to one of the sides of the three-way merge. This is so
 * pervasive that the constants 0, 1, and 2 are used in many places in the
 * code (especially in arithmetic operations to find the other side's index
 * or to compute a relevant mask), but sometimes these enum names are used
 * to aid code clarity.
 *
 * See also 'filemask' and 'dirmask' in struct conflict_info; the "ith side"
 * referred to there is one of these three sides.
 */
enum merge_side {
	MERGE_BASE = 0,
	MERGE_SIDE1 = 1,
	MERGE_SIDE2 = 2
};
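
/*
 * An illustrative sketch (not code used in this file): the arithmetic
 * mentioned above typically looks like the following, where 'side' is
 * MERGE_SIDE1 or MERGE_SIDE2 and the mask is a 3-bit filemask/dirmask:
 *
 *	unsigned other_side = 3 - side;    // maps side 1 <-> side 2
 *	unsigned side_bit = 1u << side;    // this side's bit in a mask
 */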

static unsigned RESULT_INITIALIZED = 0x1abe11ed; /* unlikely accidental value */
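
/*
 * A minimal sketch of the calling convention this sentinel guards
 * (assumed usage, not code from this file): callers should zero their
 * merge_result so an uninitialized result.priv is never dereferenced:
 *
 *	struct merge_result result = { 0 };
 *	merge_incore_recursive(&opt, merge_bases, side1, side2, &result);
 *
 * A previous merge-ort run stores RESULT_INITIALIZED so that a reusable
 * result can be told apart from uninitialized memory.
 */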

struct traversal_callback_data {
	unsigned long mask;
	unsigned long dirmask;
	struct name_entry names[3];
};

struct deferred_traversal_data {
	/*
	 * possible_trivial_merges: directories to be explored only when needed
	 *
	 * possible_trivial_merges is a map of directory names to
	 * dir_rename_mask. When we detect that a directory is unchanged on
	 * one side, we can sometimes resolve the directory without recursing
	 * into it. Renames are the only things that can prevent such an
	 * optimization. However, for rename sources:
	 *   - If no parent directory needed directory rename detection, then
	 *     no path under such a directory can be a relevant_source.
	 * and for rename destinations:
	 *   - If no cached rename has a target path under the directory AND
	 *   - If there are no unpaired relevant_sources elsewhere in the
	 *     repository
	 * then we don't need any path under this directory for a rename
	 * destination. The only way to know the last item above is to defer
	 * handling such directories until the end of collect_merge_info(),
	 * in handle_deferred_entries().
	 *
	 * For each we store dir_rename_mask, since that's the only bit of
	 * information we need, other than the path, to resume the recursive
	 * traversal.
	 */
	struct strintmap possible_trivial_merges;

	/*
	 * trivial_merges_okay: if trivial directory merges are okay
	 *
	 * See possible_trivial_merges above. The "no unpaired
	 * relevant_sources elsewhere in the repository" is a single boolean
	 * per merge side, which we store here. Note that while 0 means no,
	 * 1 only means "maybe" rather than "yes"; we optimistically set it
	 * to 1 initially and only clear when we determine it is unsafe to
	 * do trivial directory merges.
	 */
	unsigned trivial_merges_okay;

	/*
	 * target_dirs: ancestor directories of rename targets
	 *
	 * target_dirs contains all directory names that are an ancestor of
	 * any rename destination.
	 */
	struct strset target_dirs;
};
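
/*
 * A rough sketch of the intended use of this struct (hypothetical
 * 'dirname' variable, for illustration only): while collecting merge
 * info, a directory that is unchanged on one side can be recorded for
 * later instead of being recursed into,
 *
 *	strintmap_set(&renames->deferred[side].possible_trivial_merges,
 *		      dirname, renames->dir_rename_mask);
 *
 * and only at the end of collect_merge_info(), in
 * handle_deferred_entries(), do we decide whether those directories can
 * be resolved trivially or must still be traversed.
 */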

struct rename_info {
	/*
	 * All variables that are arrays of size 3 correspond to data tracked
	 * for the sides in enum merge_side. Index 0 is almost always unused
	 * because we often only need to track information for MERGE_SIDE1 and
	 * MERGE_SIDE2 (MERGE_BASE can't have rename information since renames
	 * are determined relative to what changed since the MERGE_BASE).
	 */

	/*
	 * pairs: pairing of filenames from diffcore_rename()
	 */
	struct diff_queue_struct pairs[3];

	/*
	 * dirs_removed: directories removed on a given side of history.
	 *
	 * The keys of dirs_removed[side] are the directories that were removed
	 * on the given side of history. The value of the strintmap for each
	 * directory is a value from enum dir_rename_relevance.
	 */
	struct strintmap dirs_removed[3];

	/*
	 * dir_rename_count: tracking where parts of a directory were renamed to
	 *
	 * When files in a directory are renamed, they may not all go to the
	 * same location. Each strmap here tracks:
	 *      old_dir => {new_dir => int}
	 * That is, dir_rename_count[side] is a strmap to a strintmap.
	 */
	struct strmap dir_rename_count[3];
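
	/*
	 * For illustration (hypothetical paths): if one side renamed
	 * drivers/a.c -> pilots/a.c, drivers/b.c -> pilots/b.c, and
	 * drivers/c.c -> gauges/c.c, then dir_rename_count for that side
	 * would contain
	 *      "drivers" => { "pilots" => 2, "gauges" => 1 }
	 * and "pilots", having the majority, would become the computed
	 * rename target in dir_renames below.
	 */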

	/*
	 * dir_renames: computed directory renames
	 *
	 * This is a map of old_dir => new_dir and is derived in part from
	 * dir_rename_count.
	 */
	struct strmap dir_renames[3];

	/*
	 * relevant_sources: deleted paths wanted in rename detection, and why
	 *
	 * relevant_sources is a set of deleted paths on each side of
	 * history for which we need rename detection. If a path is deleted
	 * on one side of history, we need to detect if it is part of a
	 * rename if either
	 *   * the file is modified/deleted on the other side of history
	 *   * we need to detect renames for an ancestor directory
	 * If neither of those are true, we can skip rename detection for
	 * that path. The reason is stored as a value from enum
	 * file_rename_relevance, as the reason can inform the algorithm in
	 * diffcore_rename_extended().
	 */
	struct strintmap relevant_sources[3];
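
	/*
	 * Sketch of how entries are meant to be added (hypothetical 'path'
	 * variable, for illustration only): when add_pair() decides a
	 * deleted path matters for three-way content merging, something
	 * like the following records it together with the reason:
	 *
	 *	strintmap_set(&renames->relevant_sources[side],
	 *		      path, RELEVANT_CONTENT);
	 */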

	struct deferred_traversal_data deferred[3];

	/*
	 * dir_rename_mask:
	 *   0: optimization removing unmodified potential rename source okay
	 *   2 or 4: optimization okay, but must check for files added to dir
	 *   7: optimization forbidden; need rename source in case of dir rename
	 */
	unsigned dir_rename_mask:3;
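
	/*
	 * How these values arise, in brief: when we traverse a directory
	 * that exists in the merge base and on exactly one side (dirmask ==
	 * 3 or 5), we record dir_rename_mask = dirmask - 1 (i.e. 2 or 4).
	 * If we then see a file added on the side where the directory still
	 * exists (filemask == dir_rename_mask), directory rename detection
	 * is needed and dir_rename_mask is escalated to 0x07 for that
	 * directory and everything below it.
	 */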

	/*
	 * callback_data_*: supporting data structures for alternate traversal
	 *
	 * We sometimes need to be able to traverse through all the files
	 * in a given tree before all immediate subdirectories within that
	 * tree. Since traverse_trees() doesn't do that naturally, we have
	 * a traverse_trees_wrapper() that stores any immediate
	 * subdirectories while traversing files, then traverses the
	 * immediate subdirectories later. These callback_data* variables
	 * store the information for the subdirectories so that we can do
	 * that traversal order.
	 */
	struct traversal_callback_data *callback_data;
	int callback_data_nr, callback_data_alloc;
	char *callback_data_traverse_path;
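
	/*
	 * For example (hypothetical tree contents): when traversing a tree
	 * containing {Makefile, src/, t/}, traverse_trees_wrapper() handles
	 * Makefile immediately, stashes the src/ and t/ entries (their
	 * mask, dirmask, and name_entry values) in callback_data, and only
	 * afterwards recurses into src/ and t/.
	 */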

	/*
	 * merge_trees: trees passed to the merge algorithm for the merge
	 *
	 * merge_trees records the trees passed to the merge algorithm. But,
	 * this data also is stored in merge_result->priv. If a sequence of
	 * merges are being done (such as when cherry-picking or rebasing),
	 * the next merge can look at this and re-use information from
	 * previous merges under certain circumstances.
	 *
	 * See also all the cached_* variables.
	 */
	struct tree *merge_trees[3];

	/*
	 * cached_pairs_valid_side: which side's cached info can be reused
	 *
	 * See the description for merge_trees. For repeated merges, at most
	 * only one side's cached information can be used. Valid values:
	 *   MERGE_SIDE2: cached data from side2 can be reused
	 *   MERGE_SIDE1: cached data from side1 can be reused
	 *   0: no cached data can be reused
	 *   -1: See redo_after_renames; both sides can be reused.
|
merge-ort: add code to check for whether cached renames can be reused
We need to know when renames detected in a previous merge operation can
be reused in a later merge operation. Consider the following setup
(from the git-rebase manpage):
          A---B---C topic
         /
    D---E---F---G master
After rebasing, this will appear as:
                  A'--B'--C' topic
                 /
    D---E---F---G master
Further, let's say that 'oldfile' was renamed to 'newfile' between E
and G. The rebase or cherry-pick of A onto G will involve a three-way
merge between E (as the merge base) and G and A. After detecting the
rename between E:oldfile and G:newfile, there will be a three-way
content merge of the following:
E:oldfile
G:newfile
A:oldfile
and produce a new result:
A':newfile
Now, when we want to pick B onto A', we will need to do a three-way
merge between A (as the merge-base) and A' and B. This will involve
a three-way content merge of
A:oldfile
A':newfile
B:oldfile
but only if we can detect that A:oldfile is similar enough to A':newfile
to be used together in a three-way content merge, i.e. only if we can
detect that A:oldfile and A':newfile are a rename. But we already know
that A:oldfile and A':newfile are similar enough to be used in a
three-way content merge, because that is precisely where A':newfile came
from in the previous merge.
Note that A & A' both appear in both merges. That gives us the
condition under which we can reuse renames.
There are a couple important points about this optimization:
- If the rebase or cherry-pick halts for user conflicts, these caches
are NOT saved anywhere. Thus, resuming a halted rebase or
cherry-pick will result in no reused renames for the next commit.
This is intentional, as user resolution can change files
significantly and in ways that violate the similarity assumptions
here.
- Technically, in a *very* narrow case this might give slightly
different results for rename detection. Using the example above,
if:
* E:oldfile had 20 lines
* G:newfile added 10 new lines at the beginning of the file
* A:oldfile deleted all but the first three lines of the file
then
=> A':newfile would have 13 lines, 3 of which match those
in A:oldfile.
Consider the two cases:
* Without this optimization:
- the next step of the rebase operation (moving B to B')
would not detect the rename between A:oldfile and A':newfile
- we'd thus get a modify/delete conflict with the rebase
operation halting for the user to resolve, and have both
A':newfile and B:oldfile sitting in the working tree.
* With this optimization:
- the rename between A:oldfile and A':newfile would be detected
via the cache of renames
- a three-way merge between A:oldfile, A':newfile, and B:oldfile
would commence and be written to A':newfile
Now, is the difference in behavior a bug...or a bugfix? I can't
tell. Given that A:oldfile and A':newfile are not very similar,
when we three-way merge with B:oldfile it seems likely we'll hit a
conflict for the user to resolve. And it shouldn't be too hard for
users to see why we did that three-way merge; oldfile and newfile
*were* renames somewhere in the sequence. So, most of these corner
cases will still behave similarly -- namely, a conflict given to the
user to resolve. Also, consider the interesting case when commit B
is a clean revert of commit A. Without this optimization, a rebase
could not both apply a weird patch like A and then immediately
revert it; users would be forced to resolve merge conflicts. With
this optimization, it would successfully apply the clean revert.
So, there is certainly at least one case that behaves better. Even
if it's considered a "difference in behavior", I think both behaviors
are reasonable, and the time savings provided by this optimization
justify using the slightly altered rename heuristics.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-20 06:09:36 +00:00
|
|
|
*/
|
|
|
|
int cached_pairs_valid_side;
|
|
|
|
|
2021-05-20 06:09:34 +00:00
|
|
|
/*
|
|
|
|
* cached_pairs: Caching of renames and deletions.
|
|
|
|
*
|
|
|
|
* These are mappings recording renames and deletions of individual
|
|
|
|
* files (not directories). They are thus a map from an old
|
|
|
|
* filename to either NULL (for deletions) or a new filename (for
|
|
|
|
* renames).
|
|
|
|
*/
|
|
|
|
struct strmap cached_pairs[3];
|
|
|
|
|
|
|
|
/*
|
|
|
|
* cached_target_names: just the destinations from cached_pairs
|
|
|
|
*
|
|
|
|
* We sometimes want a fast lookup to determine if a given filename
|
|
|
|
* is one of the destinations in cached_pairs. cached_target_names
|
|
|
|
* is thus duplicative information, but it provides a fast lookup.
|
|
|
|
*/
|
|
|
|
struct strset cached_target_names[3];
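/*
 * An illustrative sketch, not actual merge-ort code: how a rename or a
 * deletion detected on one side might be recorded into the two caches
 * above via the in-tree strmap/strset API.  The helper name and its
 * arguments are hypothetical:
 *
 *	static void cache_rename_or_delete(struct rename_info *ri,
 *					   unsigned side,
 *					   const char *old_name,
 *					   const char *new_name)
 *	{
 *		// new_name == NULL records a deletion
 *		strmap_put(&ri->cached_pairs[side], old_name,
 *			   new_name ? xstrdup(new_name) : NULL);
 *		if (new_name)
 *			strset_add(&ri->cached_target_names[side], new_name);
 *	}
 */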
|
|
|
|
|
|
|
|
/*
|
|
|
|
* cached_irrelevant: Caching of rename_sources that aren't relevant.
|
|
|
|
*
|
|
|
|
* If we try to detect a rename for a source path and succeed, it's
|
|
|
|
* part of a rename. If we try to detect a rename for a source path
|
|
|
|
* and fail, then it's a delete. If we do not try to detect a rename
|
|
|
|
* for a path, then we don't know if it's a rename or a delete. If
|
|
|
|
* merge-ort doesn't think the path is relevant, then we just won't
|
|
|
|
* cache anything for that path. But there's a slight problem in
|
|
|
|
* that merge-ort can think a path is RELEVANT_LOCATION, but due to
|
|
|
|
* commit 9bd342137e ("diffcore-rename: determine which
|
|
|
|
* relevant_sources are no longer relevant", 2021-03-13),
|
|
|
|
* diffcore-rename can downgrade the path to RELEVANT_NO_MORE. To
|
|
|
|
* avoid excessive calls to diffcore_rename_extended() we still need
|
|
|
|
* to cache such paths, though we cannot record them as either
|
|
|
|
* renames or deletes. So we cache them here as a "turned out to be
|
|
|
|
* irrelevant *for this commit*" as they are often also irrelevant
|
|
|
|
* for subsequent commits, though we will have to do some extra
|
|
|
|
* checking to see whether such paths become relevant for rename
|
|
|
|
* detection when cherry-picking/rebasing subsequent commits.
|
|
|
|
*/
|
|
|
|
struct strset cached_irrelevant[3];
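/*
 * For illustration (hypothetical usage, not actual merge-ort code): such
 * a downgraded source would be remembered with
 *
 *	strset_add(&renames->cached_irrelevant[side], path);
 *
 * and a later merge can consult strset_contains() on this set before
 * deciding whether rename detection for that path can be skipped.
 */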
|
|
|
|
|
2021-07-16 05:22:37 +00:00
|
|
|
/*
|
|
|
|
* redo_after_renames: optimization flag for "restarting" the merge
|
|
|
|
*
|
|
|
|
* Sometimes it pays to detect renames, cache them, and then
|
|
|
|
* restart the merge operation from the beginning. The reason for
|
|
|
|
* this is that when we know where all the renames are, we know
|
|
|
|
* whether a certain directory has any paths under it affected --
|
|
|
|
* and if a directory is not affected then it permits us to do
|
|
|
|
* trivial tree merging in more cases. Doing trivial tree merging
|
|
|
|
* prevents the need to run process_entry() on every path
|
|
|
|
* underneath trees that can be trivially merged, and
|
|
|
|
* process_entry() is more expensive than collect_merge_info() --
|
|
|
|
* plus, the second collect_merge_info() will be much faster since
|
|
|
|
* it doesn't have to recurse into the relevant trees.
|
|
|
|
*
|
|
|
|
* Values for this flag:
|
|
|
|
* 0 = don't bother, not worth it (or conditions not yet checked)
|
|
|
|
* 1 = conditions for optimization met, optimization worthwhile
|
|
|
|
* 2 = we already did it (don't restart merge yet again)
|
|
|
|
*/
|
|
|
|
unsigned redo_after_renames;
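/*
 * An illustrative sketch of how this flag could gate the restart; this
 * is not the actual driver code, and the signatures are simplified:
 *
 *	collect_merge_info(opt, merge_base, side1, side2);
 *	clean = detect_and_process_renames(opt, merge_base, side1, side2);
 *	if (opt->priv->renames.redo_after_renames == 1) {
 *		// renames are now cached; reset per-path state and redo
 *		opt->priv->renames.redo_after_renames = 2;
 *		clear_or_reinit_internal_opts(opt->priv, 1);
 *		collect_merge_info(opt, merge_base, side1, side2);
 *		clean = detect_and_process_renames(opt, merge_base,
 *						   side1, side2);
 *	}
 *	process_entries(opt, &working_tree_oid);
 */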
|
|
|
|
|
2020-12-14 16:21:30 +00:00
|
|
|
/*
|
|
|
|
* needed_limit: value needed for inexact rename detection to run
|
|
|
|
*
|
|
|
|
* If the current rename limit wasn't high enough for inexact
|
|
|
|
* rename detection to run, this records the limit needed. Otherwise,
|
|
|
|
* this value remains 0.
|
|
|
|
*/
|
|
|
|
int needed_limit;
|
|
|
|
};
|
|
|
|
|
2020-12-13 08:04:08 +00:00
|
|
|
struct merge_options_internal {
|
|
|
|
/*
|
|
|
|
* paths: primary data structure in all of merge ort.
|
|
|
|
*
|
|
|
|
* The keys of paths:
|
|
|
|
* * are full relative paths from the toplevel of the repository
|
|
|
|
* (e.g. "drivers/firmware/raspberrypi.c").
|
|
|
|
* * store all relevant paths in the repo, both directories and
|
|
|
|
* files (e.g. drivers, drivers/firmware would also be included)
|
|
|
|
* * these keys serve to intern all the path strings, which allows
|
|
|
|
* us to do pointer comparison on directory names instead of
|
|
|
|
* strcmp; we just have to be careful to use the interned strings.
|
|
|
|
*
|
|
|
|
* The values of paths:
|
|
|
|
* * either a pointer to a merged_info, or a conflict_info struct
|
|
|
|
* * merged_info contains all relevant information for a
|
|
|
|
* non-conflicted entry.
|
|
|
|
* * conflict_info contains a merged_info, plus any additional
|
|
|
|
* information about a conflict such as the higher order stages
|
|
|
|
* involved and the names of the paths those came from (handy
|
|
|
|
* once renames get involved).
|
|
|
|
* * a path may start "conflicted" (i.e. point to a conflict_info)
|
|
|
|
* and then a later step (e.g. three-way content merge) determines
|
|
|
|
* it can be cleanly merged, at which point it'll be marked clean
|
|
|
|
* and the algorithm will ignore any data outside the contained
|
|
|
|
* merged_info for that entry
|
|
|
|
* * If an entry remains conflicted, the merged_info portion of a
|
|
|
|
* conflict_info will later be filled with whatever version of
|
|
|
|
* the file should be placed in the working directory (e.g. an
|
|
|
|
* as-merged-as-possible variation that contains conflict markers).
|
|
|
|
*/
|
|
|
|
struct strmap paths;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* conflicted: a subset of keys->values from "paths"
|
|
|
|
*
|
|
|
|
* conflicted is basically an optimization between process_entries()
|
|
|
|
* and record_conflicted_index_entries(); the latter could loop over
|
|
|
|
* ALL the entries in paths AGAIN and look for the ones that are
|
|
|
|
* still conflicted, but since process_entries() has to loop over
|
|
|
|
* all of them, it saves the ones it couldn't resolve in this strmap
|
|
|
|
* so that record_conflicted_index_entries() can iterate just the
|
|
|
|
* relevant entries.
|
|
|
|
*/
|
|
|
|
struct strmap conflicted;
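/*
 * An illustrative sketch of the hand-off this enables (not the actual
 * code; the locals are hypothetical):
 *
 *	// in process_entries(), for each path left unresolved:
 *	strmap_put(&opt->priv->conflicted, path, ci);
 *
 *	// in record_conflicted_index_entries():
 *	strmap_for_each_entry(&opt->priv->conflicted, &iter, e) {
 *		const char *path = e->key;
 *		struct conflict_info *ci = e->value;
 *		// ...add higher-stage index entries for "path"...
 *	}
 */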
|
|
|
|
|
2020-12-03 15:59:43 +00:00
|
|
|
/*
|
2021-07-30 11:47:39 +00:00
|
|
|
* pool: memory pool for fast allocation/deallocation
|
2020-12-03 15:59:43 +00:00
|
|
|
*
|
2021-07-30 11:47:39 +00:00
|
|
|
* We allocate room for lots of filenames and auxiliary data
|
|
|
|
* structures in merge_options_internal, and it tends to all be
|
|
|
|
* freed together too. Using a memory pool for these provides a
|
|
|
|
* nice speedup.
|
2020-12-03 15:59:43 +00:00
|
|
|
*/
|
2021-07-31 17:27:38 +00:00
|
|
|
struct mem_pool pool;
|
2020-12-03 15:59:43 +00:00
|
|
|
|
merge-ort: add modify/delete handling and delayed output processing
The focus here is on adding a path_msg() which will queue up
warning/conflict/notice messages about the merge for later processing,
storing these in a pathname -> strbuf map. It might seem like a big
change, but it really just is:
* declaration of necessary map with some comments
* initialization and recording of data
* a bunch of code to iterate over the map at print/free time
* at least one caller in order to avoid an error about having an
unused function (which we provide in the form of implementing
modify/delete conflict handling).
At this stage, it is probably not clear why I am opting for delayed
output processing. There are multiple reasons:
1. Merges are supposed to abort if they would overwrite dirty changes
in the working tree. We cannot correctly determine whether changes
would be overwritten until both rename detection has occurred and
full processing of entries with the renames has finalized.
Warning/conflict/notice messages come up at intermediate codepaths
along the way, so unless we want spurious conflict/warning messages
being printed when the merge will be aborted anyway, we need to
save these messages and only print them when relevant.
2. There can be multiple messages for a single path, and we want all
messages for a given path to appear together instead of having them
grouped by conflict/warning type. This was a problem already with
merge-recursive.c but became even more important due to the
splitting apart of conflict types as discussed in the commit
message for 1f3c9ba707 ("t6425: be more flexible with rename/delete
conflict messages", 2020-08-10)
3. Some callers might want to avoid showing the output in certain
cases, such as if the end result is a clean merge. Rebases have
typically done this.
4. Some callers might not want the output to go to stdout or even
stderr, but might want to do something else with it entirely.
For example, a --remerge-diff option to `git show` or `git log
-p` that remerges on the fly and diffs merge commits against the
remerged version would benefit from stdout/stderr not being
written to in the standard form.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-03 15:59:46 +00:00
|
|
|
/*
|
2022-06-18 00:20:54 +00:00
|
|
|
* conflicts: logical conflicts and messages stored by _primary_ path
|
2020-12-03 15:59:46 +00:00
|
|
|
*
|
|
|
|
* This is a map of pathnames (a subset of the keys in "paths" above)
|
2022-06-18 00:20:54 +00:00
|
|
|
* to struct string_list, with each item's `util` containing a
|
|
|
|
* `struct logical_conflict_info`. Note, though, that for each path,
|
|
|
|
* it only stores the logical conflicts for which that path is the
|
|
|
|
* primary path; the path might be part of additional conflicts.
|
2020-12-03 15:59:46 +00:00
|
|
|
*/
|
2022-06-18 00:20:54 +00:00
|
|
|
struct strmap conflicts;
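/*
 * An illustrative sketch of how path_msg() might append to this map
 * (the real helper also handles creating the string_list on first use;
 * "primary_path", "type" and "msg" are hypothetical locals):
 *
 *	struct string_list *list =
 *		strmap_get(&opt->priv->conflicts, primary_path);
 *	struct logical_conflict_info *info;
 *
 *	CALLOC_ARRAY(info, 1);
 *	info->type = type;
 *	strvec_init(&info->paths);
 *	string_list_append(list, msg)->util = info;
 */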
|
2020-12-03 15:59:46 +00:00
|
|
|
|
2020-12-13 08:04:08 +00:00
|
|
|
/*
|
2020-12-14 16:21:30 +00:00
|
|
|
* renames: various data relating to rename detection
|
|
|
|
*/
|
|
|
|
struct rename_info renames;
|
|
|
|
|
2021-03-20 00:03:45 +00:00
|
|
|
/*
|
|
|
|
* attr_index: hacky minimal index used for renormalization
|
|
|
|
*
|
|
|
|
* renormalization code _requires_ an index, though it only needs to
|
|
|
|
* find a .gitattributes file within the index. So, when
|
|
|
|
* renormalization is important, we create a special index with just
|
|
|
|
* that one file.
|
|
|
|
*/
|
|
|
|
struct index_state attr_index;
|
|
|
|
|
2020-12-13 08:04:08 +00:00
|
|
|
/*
|
2021-01-19 19:53:50 +00:00
|
|
|
* current_dir_name, toplevel_dir: temporary vars
|
2020-12-13 08:04:08 +00:00
|
|
|
*
|
2021-01-19 19:53:50 +00:00
|
|
|
* These are used in collect_merge_info_callback(), and will set the
|
|
|
|
* various merged_info.directory_name for the various paths we get;
|
|
|
|
* see documentation for that variable and the requirements placed on
|
|
|
|
* that field.
|
2020-12-13 08:04:08 +00:00
|
|
|
*/
|
|
|
|
const char *current_dir_name;
|
2021-01-19 19:53:50 +00:00
|
|
|
const char *toplevel_dir;
|
2020-12-13 08:04:08 +00:00
|
|
|
|
|
|
|
/* call_depth: recursion level counter for merging merge bases */
|
|
|
|
int call_depth;
|
2022-08-04 19:51:05 +00:00
|
|
|
|
|
|
|
/* field that holds submodule conflict information */
|
|
|
|
struct string_list conflicted_submodules;
|
|
|
|
};
|
|
|
|
|
|
|
|
struct conflicted_submodule_item {
|
|
|
|
char *abbrev;
|
|
|
|
int flag;
|
2020-12-13 08:04:08 +00:00
|
|
|
};
|
|
|
|
|
2022-10-18 01:05:32 +00:00
|
|
|
static void conflicted_submodule_item_free(void *util, const char *str UNUSED)
|
2022-08-04 19:51:05 +00:00
|
|
|
{
|
|
|
|
struct conflicted_submodule_item *item = util;
|
|
|
|
|
|
|
|
free(item->abbrev);
|
|
|
|
free(item);
|
|
|
|
}
|
|
|
|
|
2020-12-13 08:04:08 +00:00
|
|
|
struct version_info {
|
|
|
|
struct object_id oid;
|
|
|
|
unsigned short mode;
|
|
|
|
};
|
|
|
|
|
|
|
|
struct merged_info {
|
|
|
|
/* if is_null, ignore result. otherwise result has oid & mode */
|
|
|
|
struct version_info result;
|
|
|
|
unsigned is_null:1;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* clean: whether the path in question is cleanly merged.
|
|
|
|
*
|
|
|
|
* see conflict_info.merged for more details.
|
|
|
|
*/
|
|
|
|
unsigned clean:1;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* basename_offset: offset of basename of path.
|
|
|
|
*
|
|
|
|
* perf optimization to avoid recomputing offset of final '/'
|
|
|
|
* character in pathname (0 if no '/' in pathname).
|
|
|
|
*/
|
|
|
|
size_t basename_offset;
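/*
 * For illustration, a hypothetical helper matching the description
 * above (not actual merge-ort code):
 *
 *	static size_t final_slash_offset(const char *path)
 *	{
 *		const char *slash = strrchr(path, '/');
 *		return slash ? (size_t)(slash - path) : 0;
 *	}
 *
 * e.g. "drivers/firmware/raspberrypi.c" yields 16, while a toplevel
 * path such as "Makefile" yields 0.
 */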
|
|
|
|
|
|
|
|
/*
|
|
|
|
* directory_name: containing directory name.
|
|
|
|
*
|
|
|
|
* Note that we assume directory_name is constructed such that
|
|
|
|
* strcmp(dir1_name, dir2_name) == 0 iff dir1_name == dir2_name,
|
|
|
|
* i.e. string equality is equivalent to pointer equality. For this
|
|
|
|
* to hold, we have to be careful setting directory_name.
|
|
|
|
*/
|
|
|
|
const char *directory_name;
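/*
 * For illustration: since directory_name always points at an interned
 * string (a key of the "paths" strmap), checking whether two entries
 * share a containing directory is just pointer comparison:
 *
 *	if (a->directory_name == b->directory_name)
 *		; // same directory, no strcmp() needed
 */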
|
|
|
|
};
|
|
|
|
|
|
|
|
struct conflict_info {
|
|
|
|
/*
|
|
|
|
* merged: the version of the path that will be written to working tree
|
|
|
|
*
|
|
|
|
* WARNING: It is critical to check merged.clean and ensure it is 0
|
|
|
|
* before reading any conflict_info fields outside of merged.
|
|
|
|
* Allocated merged_info structs will always have clean set to 1.
|
|
|
|
* Allocated conflict_info structs will have merged.clean set to 0
|
|
|
|
* initially. The merged.clean field is how we know if it is safe
|
|
|
|
* to access other parts of conflict_info besides merged; if a
|
|
|
|
* conflict_info's merged.clean is changed to 1, the rest of the
|
|
|
|
* algorithm is not allowed to look at anything outside of the
|
|
|
|
* merged member anymore.
|
|
|
|
*/
|
|
|
|
struct merged_info merged;
|
|
|
|
|
|
|
|
/* oids & modes from each of the three trees for this path */
|
|
|
|
struct version_info stages[3];
|
|
|
|
|
|
|
|
/* pathnames for each stage; may differ due to rename detection */
|
|
|
|
const char *pathnames[3];
|
|
|
|
|
|
|
|
/* Whether this path is/was involved in a directory/file conflict */
|
|
|
|
unsigned df_conflict:1;
|
|
|
|
|
2020-12-03 15:59:42 +00:00
|
|
|
/*
|
|
|
|
* Whether this path is/was involved in a non-content conflict other
|
|
|
|
* than a directory/file conflict (e.g. rename/rename, rename/delete,
|
|
|
|
* file location based on possible directory rename).
|
|
|
|
*/
|
|
|
|
unsigned path_conflict:1;
|
|
|
|
|
2020-12-13 08:04:08 +00:00
|
|
|
/*
|
|
|
|
* For filemask and dirmask, the ith bit corresponds to whether the
|
|
|
|
* ith entry is a file (filemask) or a directory (dirmask). Thus,
|
|
|
|
* filemask & dirmask is always zero, and filemask | dirmask is at
|
|
|
|
* most 7 but can be less when a path does not appear as either a
|
|
|
|
* file or a directory on at least one side of history.
|
|
|
|
*
|
|
|
|
* Note that these masks are related to enum merge_side, as the ith
|
|
|
|
* entry corresponds to side i.
|
|
|
|
*
|
|
|
|
* These values come from a traverse_trees() call; more info may be
|
|
|
|
* found looking at tree-walk.h's struct traverse_info,
|
|
|
|
* particularly the documentation above the "fn" member (note that
|
|
|
|
* filemask = mask & ~dirmask from that documentation).
|
|
|
|
*/
|
|
|
|
unsigned filemask:3;
|
|
|
|
unsigned dirmask:3;
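/*
 * Worked example (assuming MERGE_BASE is bit 0, MERGE_SIDE1 bit 1, and
 * MERGE_SIDE2 bit 2): a path that is a file in the merge base and on
 * side1 but a directory on side2 arrives from traverse_trees() with
 *
 *	dirmask  = 0x4              // only the side2 bit set
 *	filemask = mask & ~dirmask  // == 0x3, base and side1 bits set
 *
 * and "present as a file on all three sides" is simply filemask == 7.
 */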
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Optimization to track which stages match, to avoid the need to
|
|
|
|
* recompute it in multiple steps. Either 0 or at least 2 bits are
|
|
|
|
* set; if at least 2 bits are set, their corresponding stages match.
|
|
|
|
*/
|
|
|
|
unsigned match_mask:3;
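/*
 * Worked example: if the merge base and side1 have the same oid and
 * mode for this path while side2 differs, match_mask would be 0x3
 * (bits 0 and 1 set); match_mask == 0 means no two stages match.
 */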
|
|
|
|
};
|
|
|
|
|
2022-06-18 00:20:56 +00:00
|
|
|
enum conflict_and_info_types {
|
|
|
|
/* "Simple" conflicts and informational messages */
|
|
|
|
INFO_AUTO_MERGING = 0,
|
|
|
|
CONFLICT_CONTENTS, /* text file that failed to merge */
|
|
|
|
CONFLICT_BINARY,
|
|
|
|
CONFLICT_FILE_DIRECTORY,
|
|
|
|
CONFLICT_DISTINCT_MODES,
|
|
|
|
CONFLICT_MODIFY_DELETE,
|
|
|
|
|
|
|
|
/* Regular rename */
|
|
|
|
CONFLICT_RENAME_RENAME, /* same file renamed differently */
|
|
|
|
CONFLICT_RENAME_COLLIDES, /* rename/add or two files renamed to 1 */
|
|
|
|
CONFLICT_RENAME_DELETE,
|
|
|
|
|
|
|
|
/* Basic directory rename */
|
|
|
|
CONFLICT_DIR_RENAME_SUGGESTED,
|
|
|
|
INFO_DIR_RENAME_APPLIED,
|
|
|
|
|
|
|
|
/* Special directory rename cases */
|
|
|
|
INFO_DIR_RENAME_SKIPPED_DUE_TO_RERENAME,
|
|
|
|
CONFLICT_DIR_RENAME_FILE_IN_WAY,
|
|
|
|
CONFLICT_DIR_RENAME_COLLISION,
|
|
|
|
CONFLICT_DIR_RENAME_SPLIT,
|
|
|
|
|
|
|
|
/* Basic submodule */
|
|
|
|
INFO_SUBMODULE_FAST_FORWARDING,
|
|
|
|
CONFLICT_SUBMODULE_FAILED_TO_MERGE,
|
|
|
|
|
|
|
|
/* Special submodule cases broken out from FAILED_TO_MERGE */
|
|
|
|
CONFLICT_SUBMODULE_FAILED_TO_MERGE_BUT_POSSIBLE_RESOLUTION,
|
|
|
|
CONFLICT_SUBMODULE_NOT_INITIALIZED,
|
|
|
|
CONFLICT_SUBMODULE_HISTORY_NOT_AVAILABLE,
|
|
|
|
CONFLICT_SUBMODULE_MAY_HAVE_REWINDS,
|
2022-08-04 19:51:05 +00:00
|
|
|
CONFLICT_SUBMODULE_NULL_MERGE_BASE,
|
2022-06-18 00:20:56 +00:00
|
|
|
|
|
|
|
/* Keep this entry _last_ in the list */
|
|
|
|
NB_CONFLICT_TYPES,
|
|
|
|
};
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Short description of conflict type, relied upon by external tools.
|
|
|
|
*
|
|
|
|
* We can add more entries, but DO NOT change any of these strings. Also,
|
|
|
|
* Order MUST match conflict_and_info_types.
|
|
|
|
*/
|
|
|
|
static const char *type_short_descriptions[] = {
|
|
|
|
/*** "Simple" conflicts and informational messages ***/
|
|
|
|
[INFO_AUTO_MERGING] = "Auto-merging",
|
|
|
|
[CONFLICT_CONTENTS] = "CONFLICT (contents)",
|
|
|
|
[CONFLICT_BINARY] = "CONFLICT (binary)",
|
|
|
|
[CONFLICT_FILE_DIRECTORY] = "CONFLICT (file/directory)",
|
|
|
|
[CONFLICT_DISTINCT_MODES] = "CONFLICT (distinct modes)",
|
|
|
|
[CONFLICT_MODIFY_DELETE] = "CONFLICT (modify/delete)",
|
|
|
|
|
|
|
|
/*** Regular rename ***/
|
|
|
|
[CONFLICT_RENAME_RENAME] = "CONFLICT (rename/rename)",
|
|
|
|
[CONFLICT_RENAME_COLLIDES] = "CONFLICT (rename involved in collision)",
|
|
|
|
[CONFLICT_RENAME_DELETE] = "CONFLICT (rename/delete)",
|
|
|
|
|
|
|
|
/*** Basic directory rename ***/
|
|
|
|
[CONFLICT_DIR_RENAME_SUGGESTED] =
|
|
|
|
"CONFLICT (directory rename suggested)",
|
|
|
|
[INFO_DIR_RENAME_APPLIED] = "Path updated due to directory rename",
|
|
|
|
|
|
|
|
/*** Special directory rename cases ***/
|
|
|
|
[INFO_DIR_RENAME_SKIPPED_DUE_TO_RERENAME] =
|
|
|
|
"Directory rename skipped since directory was renamed on both sides",
|
|
|
|
[CONFLICT_DIR_RENAME_FILE_IN_WAY] =
|
|
|
|
"CONFLICT (file in way of directory rename)",
|
|
|
|
[CONFLICT_DIR_RENAME_COLLISION] = "CONFLICT(directory rename collision)",
|
|
|
|
[CONFLICT_DIR_RENAME_SPLIT] = "CONFLICT(directory rename unclear split)",
|
|
|
|
|
|
|
|
/*** Basic submodule ***/
|
|
|
|
[INFO_SUBMODULE_FAST_FORWARDING] = "Fast forwarding submodule",
|
|
|
|
[CONFLICT_SUBMODULE_FAILED_TO_MERGE] = "CONFLICT (submodule)",
|
|
|
|
|
|
|
|
/*** Special submodule cases broken out from FAILED_TO_MERGE ***/
|
|
|
|
[CONFLICT_SUBMODULE_FAILED_TO_MERGE_BUT_POSSIBLE_RESOLUTION] =
|
|
|
|
"CONFLICT (submodule with possible resolution)",
|
|
|
|
[CONFLICT_SUBMODULE_NOT_INITIALIZED] =
|
|
|
|
"CONFLICT (submodule not initialized)",
|
|
|
|
[CONFLICT_SUBMODULE_HISTORY_NOT_AVAILABLE] =
|
|
|
|
"CONFLICT (submodule history not available)",
|
|
|
|
[CONFLICT_SUBMODULE_MAY_HAVE_REWINDS] =
|
|
|
|
"CONFLICT (submodule may have rewinds)",
|
2022-08-04 19:51:05 +00:00
|
|
|
[CONFLICT_SUBMODULE_NULL_MERGE_BASE] =
|
|
|
|
"CONFLICT (submodule lacks merge base)"
|
2022-06-18 00:20:56 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
struct logical_conflict_info {
|
|
|
|
enum conflict_and_info_types type;
|
|
|
|
struct strvec paths;
|
|
|
|
};
|
|
|
|
|
2020-12-03 15:59:44 +00:00
|
|
|
/*** Function Grouping: various utility functions ***/
|
|
|
|
|
2020-12-13 08:04:16 +00:00
|
|
|
/*
|
|
|
|
* For the next three macros, see warning for conflict_info.merged.
|
|
|
|
*
|
|
|
|
* In each of the below, mi is a struct merged_info*, and ci was defined
|
|
|
|
* as a struct conflict_info* (but we need to verify ci isn't actually
|
|
|
|
* pointed at a struct merged_info*).
|
|
|
|
*
|
|
|
|
* INITIALIZE_CI: Assign ci to mi but only if it's safe; set to NULL otherwise.
|
|
|
|
* VERIFY_CI: Ensure that something we assigned to a conflict_info* is one.
|
|
|
|
* ASSIGN_AND_VERIFY_CI: Similar to VERIFY_CI but do assignment first.
|
|
|
|
*/
|
|
|
|
#define INITIALIZE_CI(ci, mi) do { \
|
|
|
|
(ci) = (!(mi) || (mi)->clean) ? NULL : (struct conflict_info *)(mi); \
|
|
|
|
} while (0)
|
|
|
|
#define VERIFY_CI(ci) assert(ci && !ci->merged.clean);
|
|
|
|
#define ASSIGN_AND_VERIFY_CI(ci, mi) do { \
|
|
|
|
(ci) = (struct conflict_info *)(mi); \
|
|
|
|
assert((ci) && !(mi)->clean); \
|
|
|
|
} while (0)
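/*
 * An illustrative usage sketch for the three macros above (not an
 * actual caller; "opt" and "path" stand in for whatever the caller has
 * at hand):
 *
 *	struct merged_info *mi = strmap_get(&opt->priv->paths, path);
 *	struct conflict_info *ci;
 *
 *	INITIALIZE_CI(ci, mi);	// NULL if mi is missing or already clean
 *	if (ci) {
 *		VERIFY_CI(ci);	// assert it really is conflicted
 *		// safe to use ci->stages[], ci->pathnames[], ci->filemask...
 *	}
 */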
|
|
|
|
|
2020-12-13 08:04:27 +00:00
|
|
|
static void free_strmap_strings(struct strmap *map)
|
|
|
|
{
|
|
|
|
struct hashmap_iter iter;
|
|
|
|
struct strmap_entry *entry;
|
|
|
|
|
|
|
|
strmap_for_each_entry(map, &iter, entry) {
|
|
|
|
free((char*)entry->key);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-12-16 22:28:01 +00:00
|
|
|
static void clear_or_reinit_internal_opts(struct merge_options_internal *opti,
|
|
|
|
int reinitialize)
|
2020-12-03 15:59:41 +00:00
|
|
|
{
|
2021-01-07 21:35:50 +00:00
|
|
|
struct rename_info *renames = &opti->renames;
|
|
|
|
int i;
|
2021-07-30 11:47:36 +00:00
|
|
|
void (*strmap_clear_func)(struct strmap *, int) =
|
2020-12-16 22:28:01 +00:00
|
|
|
reinitialize ? strmap_partial_clear : strmap_clear;
|
2021-07-30 11:47:36 +00:00
|
|
|
void (*strintmap_clear_func)(struct strintmap *) =
|
2021-03-13 22:22:02 +00:00
|
|
|
reinitialize ? strintmap_partial_clear : strintmap_clear;
|
2021-07-30 11:47:36 +00:00
|
|
|
void (*strset_clear_func)(struct strset *) =
|
2021-05-20 06:09:34 +00:00
|
|
|
reinitialize ? strset_partial_clear : strset_clear;
|
2020-12-03 15:59:41 +00:00
|
|
|
|
2021-07-31 17:27:38 +00:00
|
|
|
strmap_clear_func(&opti->paths, 0);
|
2020-12-03 15:59:41 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* All keys and values in opti->conflicted are a subset of those in
|
|
|
|
* opti->paths. We don't want to deallocate anything twice, so we
|
|
|
|
* don't free the keys and we pass 0 for free_values.
|
|
|
|
*/
|
2021-07-30 11:47:36 +00:00
|
|
|
strmap_clear_func(&opti->conflicted, 0);
|
2020-12-03 15:59:46 +00:00
|
|
|
|
merge-ort: have ll_merge() use a special attr_index for renormalization
ll_merge() needs an index when renormalization is requested. Create one
specifically for just this purpose with just the one needed entry. This
fixes t6418.4 and t6418.5 under GIT_TEST_MERGE_ALGORITHM=ort.
NOTE 1: Even if the user has a working copy or a real index (which is
not a given as merge-ort can be used in bare repositories), we
explicitly ignore any .gitattributes file from either of these
locations. merge-ort can be used to merge two branches that are
unrelated to HEAD, so .gitattributes from the working copy and current
index should not be considered relevant.
NOTE 2: Since we are in the middle of merging, there is a risk that
.gitattributes itself is conflicted...leaving us with an ill-defined
situation about how to perform the rest of the merge. It could be that
the .gitattributes file does not even exist on one of the sides of the
merge, or that it has been modified on both sides. If it's been
modified on both sides, it's possible that it could itself be merged
cleanly, though it's also possible that it only merges cleanly if you
use the right version of the .gitattributes file to drive the merge. It
gets kind of complicated. The only test we ever had that attempted to
test behavior in this area was seemingly unaware of the undefined
behavior, but knew the test wouldn't work for lack of attribute handling
support, marked it as test_expect_failure from the beginning, but
managed to fail for several reasons unrelated to attribute handling.
See commit 6f6e7cfb52 ("t6038: remove problematic test", 2020-08-03) for
details. So there are probably various ways to improve what
initialize_attr_index() picks in the case of a conflicted .gitattributes
but for now I just implemented something simple -- look for whatever
.gitattributes file we can find in any of the higher order stages and
use it.
Signed-off-by: Elijah Newren <newren@gmail.com>
Reviewed-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-20 00:03:46 +00:00
|
|
|
if (opti->attr_index.cache_nr) /* true iff opt->renormalize */
|
2021-03-20 00:03:45 +00:00
|
|
|
discard_index(&opti->attr_index);
|
|
|
|
|
2021-01-07 21:35:50 +00:00
|
|
|
/* Free memory used by various renames maps */
|
|
|
|
for (i = MERGE_SIDE1; i <= MERGE_SIDE2; ++i) {
|
2021-07-30 11:47:36 +00:00
|
|
|
strintmap_clear_func(&renames->dirs_removed[i]);
|
|
|
|
strmap_clear_func(&renames->dir_renames[i], 0);
|
|
|
|
strintmap_clear_func(&renames->relevant_sources[i]);
|
2021-05-20 06:09:38 +00:00
|
|
|
if (!reinitialize)
|
|
|
|
assert(renames->cached_pairs_valid_side == 0);
|
2021-07-16 05:22:37 +00:00
|
|
|
if (i != renames->cached_pairs_valid_side &&
|
|
|
|
-1 != renames->cached_pairs_valid_side) {
|
2021-07-30 11:47:36 +00:00
|
|
|
strset_clear_func(&renames->cached_target_names[i]);
|
|
|
|
strmap_clear_func(&renames->cached_pairs[i], 1);
|
|
|
|
strset_clear_func(&renames->cached_irrelevant[i]);
|
2021-05-20 06:09:38 +00:00
|
|
|
partial_clear_dir_rename_count(&renames->dir_rename_count[i]);
|
|
|
|
if (!reinitialize)
|
|
|
|
strmap_clear(&renames->dir_rename_count[i], 1);
|
|
|
|
}
|
2021-01-07 21:35:50 +00:00
|
|
|
}
|
merge-ort: add data structures for allowable trivial directory resolves
As noted a few commits ago, we can resolve individual files early if all
three sides of the merge have a file at the path and two of the three
sides match. We would really like to do the same thing with
directories, because being able to do a trivial directory resolve means
we don't have to recurse into the directory, potentially saving us a
huge amount of time in both collect_merge_info() and process_entries().
Unfortunately, resolving directories early would mean missing any
renames whose source or destination is underneath that directory.
If we somehow knew there weren't any renames under the directory in
question, then we could resolve it early. Sadly, it is impossible to
determine whether there are renames under the directory in question
without recursing into it, and this has traditionally kept us from ever
implementing such an optimization.
In commit f89b4f2bee ("merge-ort: skip rename detection entirely if
possible", 2021-03-11), we added an additional reason that rename
detection could be skipped entirely -- namely, if no *relevant* sources
were present. Without completing collect_merge_info_callback(), we do
not yet know if there are no relevant sources. However, we do know that
if the current directory on one side matches the merge base, then every
source file within that directory will not be RELEVANT_CONTENT, and a
few simple checks can often let us rule out RELEVANT_LOCATION as well.
This suggests we can just defer recursing into such directories until
the end of collect_merge_info.
Since the deferred directories are known not to add any relevant sources
(due to the above properties), if there are no relevant sources after
we've traversed all paths other than the deferred ones, then we know
there are no relevant sources at all.
detection is unnecessary, and that means we can resolve the deferred
directories without recursing into them.
Note that the logic for skipping rename detection was also modified
further in commit 76e253793c ("merge-ort, diffcore-rename: employ cached
renames when possible", 2021-01-30); in particular rename detection can
be skipped if we already have cached renames for each relevant source.
We can take advantage of this information as well with our deferral of
recursing into directories where one side matches the merge base.
Add some data structures that we will use to do these deferrals, with
some lengthy comments explaining their purpose.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-16 05:22:33 +00:00
|
|
|
for (i = MERGE_SIDE1; i <= MERGE_SIDE2; ++i) {
|
2021-07-30 11:47:36 +00:00
|
|
|
strintmap_clear_func(&renames->deferred[i].possible_trivial_merges);
|
|
|
|
strset_clear_func(&renames->deferred[i].target_dirs);
|
2021-07-16 05:22:33 +00:00
|
|
|
renames->deferred[i].trivial_merges_okay = 1; /* 1 == maybe */
|
|
|
|
}
|
2021-05-20 06:09:36 +00:00
|
|
|
renames->cached_pairs_valid_side = 0;
|
|
|
|
renames->dir_rename_mask = 0;
|
2021-01-07 21:35:50 +00:00
|
|
|
|
merge-ort: add modify/delete handling and delayed output processing
The focus here is on adding a path_msg() which will queue up
warning/conflict/notice messages about the merge for later processing,
storing these in a pathname -> strbuf map. It might seem like a big
change, but it really just is:
* declaration of necessary map with some comments
* initialization and recording of data
* a bunch of code to iterate over the map at print/free time
* at least one caller in order to avoid an error about having an
unused function (which we provide in the form of implementing
modify/delete conflict handling).
At this stage, it is probably not clear why I am opting for delayed
output processing. There are multiple reasons:
1. Merges are supposed to abort if they would overwrite dirty changes
in the working tree. We cannot correctly determine whether changes
would be overwritten until both rename detection has occurred and
full processing of entries with the renames has finalized.
Warning/conflict/notice messages come up at intermediate codepaths
along the way, so unless we want spurious conflict/warning messages
being printed when the merge will be aborted anyway, we need to
save these messages and only print them when relevant.
2. There can be multiple messages for a single path, and we want all
messages for a given path to appear together instead of having them
grouped by conflict/warning type. This was a problem already with
merge-recursive.c but became even more important due to the
splitting apart of conflict types as discussed in the commit
message for 1f3c9ba707 ("t6425: be more flexible with rename/delete
conflict messages", 2020-08-10)
3. Some callers might want to avoid showing the output in certain
cases, such as if the end result is a clean merge. Rebases have
typically done this.
4. Some callers might not want the output to go to stdout or even
stderr, but might want to do something else with it entirely.
For example, a --remerge-diff option to `git show` or `git log
-p` that remerges on the fly and diffs merge commits against the
remerged version would benefit from stdout/stderr not being
written to in the standard form.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-03 15:59:46 +00:00
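To make the delayed-output idea concrete, a caller reporting a modify/delete conflict queues its message instead of printing it directly; the call is roughly of the following form (a sketch based on the path_msg() signature shown further down in this file; the exact enum value, variable names, and wording are illustrative):

path_msg(opt, CONFLICT_MODIFY_DELETE, 0,
	 path, NULL, NULL, NULL,
	 _("CONFLICT (modify/delete): %s deleted in %s and modified in %s.  "
	   "Version %s of %s left in tree."),
	 path, delete_branch, modify_branch, modify_branch, path);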
|
|
|
if (!reinitialize) {
|
|
|
|
struct hashmap_iter iter;
|
|
|
|
struct strmap_entry *e;
|
|
|
|
|
|
|
|
/* Release and free each strbuf found in output */
|
2022-06-18 00:20:54 +00:00
|
|
|
strmap_for_each_entry(&opti->conflicts, &iter, e) {
|
|
|
|
struct string_list *list = e->value;
|
2022-06-18 00:20:56 +00:00
|
|
|
for (int i = 0; i < list->nr; i++) {
|
|
|
|
struct logical_conflict_info *info =
|
|
|
|
list->items[i].util;
|
|
|
|
strvec_clear(&info->paths);
|
|
|
|
}
|
2020-12-03 15:59:46 +00:00
|
|
|
/*
|
2022-06-18 00:20:54 +00:00
|
|
|
* While strictly speaking we don't need to
|
|
|
|
* free(conflicts) here because we could pass
|
|
|
|
* free_values=1 when calling strmap_clear() on
|
|
|
|
* opti->conflicts, that would require strmap_clear
|
|
|
|
* to do another strmap_for_each_entry() loop, so we
|
|
|
|
* just free it while we're iterating anyway.
|
2020-12-03 15:59:46 +00:00
|
|
|
*/
|
2022-06-18 00:20:54 +00:00
|
|
|
string_list_clear(list, 1);
|
|
|
|
free(list);
|
2020-12-03 15:59:46 +00:00
|
|
|
}
|
2022-06-18 00:20:54 +00:00
|
|
|
strmap_clear(&opti->conflicts, 0);
|
2020-12-03 15:59:46 +00:00
|
|
|
}
|
2021-03-11 00:38:26 +00:00
|
|
|
|
2021-07-31 17:27:38 +00:00
|
|
|
mem_pool_discard(&opti->pool, 0);
|
2021-07-30 11:47:39 +00:00
|
|
|
|
2022-08-04 19:51:05 +00:00
|
|
|
string_list_clear_func(&opti->conflicted_submodules,
|
|
|
|
conflicted_submodule_item_free);
|
|
|
|
|
2021-03-11 00:38:26 +00:00
|
|
|
/* Clean out callback_data as well. */
|
|
|
|
FREE_AND_NULL(renames->callback_data);
|
|
|
|
renames->callback_data_nr = renames->callback_data_alloc = 0;
|
2020-12-03 15:59:41 +00:00
|
|
|
}
|
|
|
|
|
2021-07-13 08:05:18 +00:00
|
|
|
__attribute__((format (printf, 2, 3)))
|
2020-12-13 08:04:12 +00:00
|
|
|
static int err(struct merge_options *opt, const char *err, ...)
|
|
|
|
{
|
|
|
|
va_list params;
|
|
|
|
struct strbuf sb = STRBUF_INIT;
|
|
|
|
|
|
|
|
strbuf_addstr(&sb, "error: ");
|
|
|
|
va_start(params, err);
|
|
|
|
strbuf_vaddf(&sb, err, params);
|
|
|
|
va_end(params);
|
|
|
|
|
|
|
|
error("%s", sb.buf);
|
|
|
|
strbuf_release(&sb);
|
|
|
|
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2021-01-01 02:34:45 +00:00
|
|
|
static void format_commit(struct strbuf *sb,
|
|
|
|
int indent,
|
2021-10-08 21:08:17 +00:00
|
|
|
struct repository *repo,
|
2021-01-01 02:34:45 +00:00
|
|
|
struct commit *commit)
|
|
|
|
{
|
2021-01-01 02:34:46 +00:00
|
|
|
struct merge_remote_desc *desc;
|
|
|
|
struct pretty_print_context ctx = {0};
|
|
|
|
ctx.abbrev = DEFAULT_ABBREV;
|
|
|
|
|
|
|
|
strbuf_addchars(sb, ' ', indent);
|
|
|
|
desc = merge_remote_util(commit);
|
|
|
|
if (desc) {
|
|
|
|
strbuf_addf(sb, "virtual %s\n", desc->name);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2021-10-08 21:08:17 +00:00
|
|
|
repo_format_commit_message(repo, commit, "%h %s", sb, &ctx);
|
2021-01-01 02:34:46 +00:00
|
|
|
strbuf_addch(sb, '\n');
|
2021-01-01 02:34:45 +00:00
|
|
|
}
|
|
|
|
|
2022-06-18 00:20:56 +00:00
|
|
|
__attribute__((format (printf, 8, 9)))
|
2020-12-03 15:59:46 +00:00
|
|
|
static void path_msg(struct merge_options *opt,
|
2022-06-18 00:20:56 +00:00
|
|
|
enum conflict_and_info_types type,
|
2020-12-03 15:59:46 +00:00
|
|
|
int omittable_hint, /* skippable under --remerge-diff */
|
2022-06-18 00:20:56 +00:00
|
|
|
const char *primary_path,
|
|
|
|
const char *other_path_1, /* may be NULL */
|
|
|
|
const char *other_path_2, /* may be NULL */
|
|
|
|
struct string_list *other_paths, /* may be NULL */
|
2020-12-03 15:59:46 +00:00
|
|
|
const char *fmt, ...)
|
|
|
|
{
|
|
|
|
va_list ap;
|
2022-06-18 00:20:54 +00:00
|
|
|
struct string_list *path_conflicts;
|
2022-06-18 00:20:56 +00:00
|
|
|
struct logical_conflict_info *info;
|
2022-06-18 00:20:54 +00:00
|
|
|
struct strbuf buf = STRBUF_INIT;
|
|
|
|
struct strbuf *dest;
|
2022-02-02 02:37:33 +00:00
|
|
|
struct strbuf tmp = STRBUF_INIT;
|
|
|
|
|
2022-06-18 00:20:56 +00:00
|
|
|
/* Sanity checks */
|
|
|
|
assert(omittable_hint ==
|
|
|
|
!starts_with(type_short_descriptions[type], "CONFLICT") ||
|
2022-08-19 04:45:55 +00:00
|
|
|
type == CONFLICT_DIR_RENAME_SUGGESTED);
|
2022-02-02 02:37:33 +00:00
|
|
|
if (opt->record_conflict_msgs_as_headers && omittable_hint)
|
2022-02-02 02:37:36 +00:00
|
|
|
return; /* Do not record mere hints in headers */
|
2022-03-02 04:19:21 +00:00
|
|
|
if (opt->priv->call_depth && opt->verbosity < 5)
|
|
|
|
return; /* Ignore messages from inner merges */
|
|
|
|
|
2022-06-18 00:20:54 +00:00
|
|
|
/* Ensure path_conflicts (ptr to array of logical_conflict) allocated */
|
2022-06-18 00:20:56 +00:00
|
|
|
path_conflicts = strmap_get(&opt->priv->conflicts, primary_path);
|
2022-06-18 00:20:54 +00:00
|
|
|
if (!path_conflicts) {
|
|
|
|
path_conflicts = xmalloc(sizeof(*path_conflicts));
|
|
|
|
string_list_init_dup(path_conflicts);
|
2022-06-18 00:20:56 +00:00
|
|
|
strmap_put(&opt->priv->conflicts, primary_path, path_conflicts);
|
2020-12-03 15:59:46 +00:00
|
|
|
}
|
|
|
|
|
2022-06-18 00:20:56 +00:00
|
|
|
/* Add a logical_conflict at the end to store info from this call */
|
|
|
|
info = xcalloc(1, sizeof(*info));
|
|
|
|
info->type = type;
|
|
|
|
strvec_init(&info->paths);
|
|
|
|
|
|
|
|
/* Handle the list of paths */
|
|
|
|
strvec_push(&info->paths, primary_path);
|
|
|
|
if (other_path_1)
|
|
|
|
strvec_push(&info->paths, other_path_1);
|
|
|
|
if (other_path_2)
|
|
|
|
strvec_push(&info->paths, other_path_2);
|
|
|
|
if (other_paths)
|
|
|
|
for (int i = 0; i < other_paths->nr; i++)
|
|
|
|
strvec_push(&info->paths, other_paths->items[i].string);
|
|
|
|
|
2022-06-18 00:20:54 +00:00
|
|
|
/* Handle message and its format, in normal case */
|
|
|
|
dest = (opt->record_conflict_msgs_as_headers ? &tmp : &buf);
|
2022-02-02 02:37:33 +00:00
|
|
|
|
2020-12-03 15:59:46 +00:00
|
|
|
va_start(ap, fmt);
|
merge-ort: make informational messages from recursive merges clearer
This is another simple change with a long explanation...
merge-recursive and merge-ort are both based on the same recursive idea:
if there is more than one merge base, merge the merge bases (which may
require first merging the merge bases of the merge bases, etc.). The
depth of the inner merge is recorded via a variable called "call_depth",
which we'll bring up again later. Naturally, the inner merges
themselves can have conflicts and various messages generated about those
files.
merge-recursive immediately prints to stdout as it goes, at the risk of
printing multiple conflict notices for the same path separated far apart
from each other with many intervening conflict notices for other paths
between them. And this is true even if there are no inner merges
involved. An example of this was given in [1] and apparently caused
some confusion:
CONFLICT (rename/add): Rename A->B in HEAD. B added in otherbranch
...dozens of conflicts for OTHER paths...
CONFLICT (content): Merge conflicts in B
In contrast, merge-ort collects messages and stores them by path so that
it can print them grouped by path. Thus, the same case handled by
merge-ort would have output of the form:
CONFLICT (rename/add): Rename A->B in HEAD. B added in otherbranch
CONFLICT (content): Merge conflicts in B
...dozens of conflicts for OTHER paths...
This is generally helpful, but does make a separate bug more
problematic. In particular, while merge-recursive might report the
following for a recursive merge:
Auto-merging dir.c
Auto-merging midx.c
CONFLICT (content): Merge conflict in midx.c
Auto-merging diff.c
Auto-merging dir.c
CONFLICT (content): Merge conflict in dir.c
merge-ort would instead report:
Auto-merging diff.c
Auto-merging dir.c
Auto-merging dir.c
CONFLICT (content): Merge conflict in dir.c
Auto-merging midx.c
CONFLICT (content): Merge conflict in midx.c
The fact that messages for the same file are together is probably
helpful in general, but with the indentation missing for the inner
merge it unfortunately serves to confuse. This probably would lead
users to wonder:
* Why is Git reporting that "dir.c" is being merged twice?
* If midx.c has conflicts, why do I not see any when I open up the
file and why are no conflicts shown in the index?
Fix this output confusion by changing the output to clearly
differentiate the messages for outer merges from the ones for inner
merges, changing the above output from merge-ort to:
Auto-merging diff.c
From inner merge: Auto-merging dir.c
Auto-merging dir.c
CONFLICT (content): Merge conflict in dir.c
From inner merge: Auto-merging midx.c
From inner merge: CONFLICT (content): Merge conflict in midx.c
(Note: the number of spaces after the 'From inner merge:' is
2*call_depth).
One other thing to note here, that I didn't notice until typing up this
commit message, is that merge-recursive does not print any messages from
the inner merges by default; the extra verbosity has to be requested.
merge-ort currently has no verbosity controls and always prints these.
We may also want to change that, but for now, just make the output
clearer with these extra markings and indentation.
[1] https://lore.kernel.org/git/CAGyf7-He4in8JWUh9dpAwvoPkQz9hr8nCBpxOxhZEd8+jtqTpg@mail.gmail.com/
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-02-17 06:38:42 +00:00
|
|
|
if (opt->priv->call_depth) {
|
|
|
|
strbuf_addchars(dest, ' ', 2);
|
|
|
|
strbuf_addstr(dest, "From inner merge:");
|
|
|
|
strbuf_addchars(dest, ' ', opt->priv->call_depth * 2);
|
|
|
|
}
|
2022-02-02 02:37:33 +00:00
|
|
|
strbuf_vaddf(dest, fmt, ap);
|
2020-12-03 15:59:46 +00:00
|
|
|
va_end(ap);
|
|
|
|
|
2022-06-18 00:20:54 +00:00
|
|
|
/* Handle specialized formatting of message under --remerge-diff */
|
2022-02-02 02:37:33 +00:00
|
|
|
if (opt->record_conflict_msgs_as_headers) {
|
|
|
|
int i_sb = 0, i_tmp = 0;
|
|
|
|
|
|
|
|
/* Start with the specified prefix */
|
|
|
|
if (opt->msg_header_prefix)
|
2022-06-18 00:20:54 +00:00
|
|
|
strbuf_addf(&buf, "%s ", opt->msg_header_prefix);
|
2022-02-02 02:37:33 +00:00
|
|
|
|
|
|
|
/* Copy tmp to sb, adding spaces after newlines */
|
2022-06-18 00:20:54 +00:00
|
|
|
strbuf_grow(&buf, buf.len + 2*tmp.len); /* more than sufficient */
|
2022-02-02 02:37:33 +00:00
|
|
|
for (; i_tmp < tmp.len; i_tmp++, i_sb++) {
|
|
|
|
/* Copy next character from tmp to sb */
|
2022-06-18 00:20:54 +00:00
|
|
|
buf.buf[buf.len + i_sb] = tmp.buf[i_tmp];
|
2022-02-02 02:37:33 +00:00
|
|
|
|
|
|
|
/* If we copied a newline, add a space */
|
|
|
|
if (tmp.buf[i_tmp] == '\n')
|
2022-06-18 00:20:54 +00:00
|
|
|
buf.buf[++i_sb] = ' ';
|
2022-02-02 02:37:33 +00:00
|
|
|
}
|
|
|
|
/* Update length and ensure it's NUL-terminated */
|
2022-06-18 00:20:54 +00:00
|
|
|
buf.len += i_sb;
|
|
|
|
buf.buf[buf.len] = '\0';
|
2022-02-02 02:37:33 +00:00
|
|
|
|
|
|
|
strbuf_release(&tmp);
|
|
|
|
}
|
2022-06-18 00:20:56 +00:00
|
|
|
string_list_append_nodup(path_conflicts, strbuf_detach(&buf, NULL))
|
|
|
|
->util = info;
|
2020-12-03 15:59:46 +00:00
|
|
|
}
|
|
|
|
|
2021-07-30 11:47:41 +00:00
|
|
|
static struct diff_filespec *pool_alloc_filespec(struct mem_pool *pool,
|
|
|
|
const char *path)
|
|
|
|
{
|
2021-07-31 17:27:38 +00:00
|
|
|
/* Similar to alloc_filespec(), but allocate from pool and reuse path */
|
2021-07-30 11:47:41 +00:00
|
|
|
struct diff_filespec *spec;
|
|
|
|
|
2021-07-30 11:47:43 +00:00
|
|
|
spec = mem_pool_calloc(pool, 1, sizeof(*spec));
|
|
|
|
spec->path = (char*)path; /* spec won't modify it */
|
2021-07-30 11:47:41 +00:00
|
|
|
|
|
|
|
spec->count = 1;
|
|
|
|
spec->is_binary = -1;
|
|
|
|
return spec;
|
|
|
|
}
|
|
|
|
|
|
|
|
static struct diff_filepair *pool_diff_queue(struct mem_pool *pool,
|
|
|
|
struct diff_queue_struct *queue,
|
|
|
|
struct diff_filespec *one,
|
|
|
|
struct diff_filespec *two)
|
|
|
|
{
|
2021-07-31 17:27:38 +00:00
|
|
|
/* Same code as diff_queue(), except allocate from pool */
|
2021-07-30 11:47:41 +00:00
|
|
|
struct diff_filepair *dp;
|
|
|
|
|
|
|
|
dp = mem_pool_calloc(pool, 1, sizeof(*dp));
|
|
|
|
dp->one = one;
|
|
|
|
dp->two = two;
|
|
|
|
if (queue)
|
|
|
|
diff_q(queue, dp);
|
|
|
|
return dp;
|
|
|
|
}
|
|
|
|
|
2021-01-01 02:34:41 +00:00
|
|
|
/* add a string to a strbuf, but converting "/" to "_" */
|
|
|
|
static void add_flattened_path(struct strbuf *out, const char *s)
|
|
|
|
{
|
|
|
|
size_t i = out->len;
|
|
|
|
strbuf_addstr(out, s);
|
|
|
|
for (; i < out->len; i++)
|
|
|
|
if (out->buf[i] == '/')
|
|
|
|
out->buf[i] = '_';
|
|
|
|
}
|
|
|
|
|
2022-02-20 01:29:51 +00:00
|
|
|
static char *unique_path(struct merge_options *opt,
|
2021-01-01 02:34:40 +00:00
|
|
|
const char *path,
|
|
|
|
const char *branch)
|
|
|
|
{
|
2022-02-20 01:29:51 +00:00
|
|
|
char *ret = NULL;
|
2021-01-01 02:34:41 +00:00
|
|
|
struct strbuf newpath = STRBUF_INIT;
|
|
|
|
int suffix = 0;
|
|
|
|
size_t base_len;
|
2022-02-20 01:29:51 +00:00
|
|
|
struct strmap *existing_paths = &opt->priv->paths;
|
2021-01-01 02:34:41 +00:00
|
|
|
|
|
|
|
strbuf_addf(&newpath, "%s~", path);
|
|
|
|
add_flattened_path(&newpath, branch);
|
|
|
|
|
|
|
|
base_len = newpath.len;
|
|
|
|
while (strmap_contains(existing_paths, newpath.buf)) {
|
|
|
|
strbuf_setlen(&newpath, base_len);
|
|
|
|
strbuf_addf(&newpath, "_%d", suffix++);
|
|
|
|
}
|
|
|
|
|
2022-02-20 01:29:51 +00:00
|
|
|
/* Track the new path in our memory pool */
|
|
|
|
ret = mem_pool_alloc(&opt->priv->pool, newpath.len + 1);
|
|
|
|
memcpy(ret, newpath.buf, newpath.len + 1);
|
|
|
|
strbuf_release(&newpath);
|
|
|
|
return ret;
|
2021-01-01 02:34:40 +00:00
|
|
|
}
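/*
 * Illustrative example of the two helpers above (example values, not
 * taken from the source): unique_path(opt, "sub/file.c", "topic/x")
 * yields "sub/file.c~topic_x", with the '/' in the branch name
 * flattened to '_'; if that name is already present in
 * opt->priv->paths, the loop tries "sub/file.c~topic_x_0",
 * "sub/file.c~topic_x_1", and so on until a free name is found.
 */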
|
|
|
|
|
2020-12-03 15:59:44 +00:00
|
|
|
/*** Function Grouping: functions related to collect_merge_info() ***/
|
|
|
|
|
2021-03-11 00:38:27 +00:00
|
|
|
static int traverse_trees_wrapper_callback(int n,
|
|
|
|
unsigned long mask,
|
|
|
|
unsigned long dirmask,
|
|
|
|
struct name_entry *names,
|
|
|
|
struct traverse_info *info)
|
|
|
|
{
|
|
|
|
struct merge_options *opt = info->data;
|
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
merge-ort: precompute whether directory rename detection is needed
The point of directory rename detection is that if one side of history
renames a directory, and the other side adds new files under the old
directory, then the merge can move those new files into the new
directory. This leads to the following important observation:
* If the other side does not add any new files under the old
directory, we do not need to detect any renames for that directory.
Similarly, directory rename detection had an important requirement:
* If a directory still exists on one side of history, it has not been
renamed on that side of history. (See section 4 of t6423 or
Documentation/technical/directory-rename-detection.txt for more
details).
Using these two bits of information, we note that directory rename
detection is only needed in cases where (1) directories exist in the
merge base and on one side of history (i.e. dirmask == 3 or dirmask ==
5), and (2) where there is some new file added to that directory on the
side where it still exists (thus where the file has filemask == 2 or
filemask == 4, respectively). This has to be done in two steps, because
we have the dirmask when we are first considering the directory, and
won't get the filemasks for the files within it until we recurse into
that directory. So, we save
dir_rename_mask = dirmask - 1
when we hit a directory that is missing on one side, and then later look
for cases of
filemask == dir_rename_mask
One final note is that as soon as we hit a directory that needs
directory rename detection, we will need to detect renames in all
subdirectories of that directory as well due to the "majority rules"
decision when files are renamed into different directory hierarchies.
We arbitrarily use the special value of 0x07 to record when we've hit
such a directory.
The combination of all the above means that we introduce a variable
named dir_rename_mask (couldn't think of a better name) which has one
of the following values as we traverse into a directory:
* 0x00: directory rename detection not needed
* 0x02 or 0x04: directory rename detection only needed if files added
* 0x07: directory rename detection definitely needed
We then pass this value through to add_pairs() so that it can mark
location_relevant as true only when dir_rename_mask is 0x07.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-11 00:38:28 +00:00
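The transitions listed above can be summarized in a small standalone sketch (a hypothetical helper for illustration only; the real logic is spread across the traversal callbacks in this file and handles additional details):

/*
 * Illustrative sketch of how dir_rename_mask evolves during traversal.
 * Mask bits: 0x1 = merge base, 0x2 = side 1, 0x4 = side 2.
 */
static unsigned update_dir_rename_mask(unsigned dir_rename_mask,
				       unsigned dirmask,
				       unsigned filemask)
{
	if (dir_rename_mask == 0x07)
		return 0x07;            /* detection definitely needed */
	if (dirmask == 3 || dirmask == 5)
		return dirmask - 1;     /* 0x02 or 0x04: needed if files added */
	if (filemask && filemask == dir_rename_mask)
		return 0x07;            /* new file on the still-existing side */
	return dir_rename_mask;
}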
|
|
|
unsigned filemask = mask & ~dirmask;
|
2021-03-11 00:38:27 +00:00
|
|
|
|
|
|
|
assert(n==3);
|
|
|
|
|
|
|
|
if (!renames->callback_data_traverse_path)
|
|
|
|
renames->callback_data_traverse_path = xstrdup(info->traverse_path);
|
|
|
|
|
2021-03-11 00:38:28 +00:00
|
|
|
if (filemask && filemask == renames->dir_rename_mask)
|
|
|
|
renames->dir_rename_mask = 0x07;
|
|
|
|
|
2021-03-11 00:38:27 +00:00
|
|
|
ALLOC_GROW(renames->callback_data, renames->callback_data_nr + 1,
|
|
|
|
renames->callback_data_alloc);
|
|
|
|
renames->callback_data[renames->callback_data_nr].mask = mask;
|
|
|
|
renames->callback_data[renames->callback_data_nr].dirmask = dirmask;
|
|
|
|
COPY_ARRAY(renames->callback_data[renames->callback_data_nr].names,
|
|
|
|
names, 3);
|
|
|
|
renames->callback_data_nr++;
|
|
|
|
|
|
|
|
return mask;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Much like traverse_trees(), BUT:
|
|
|
|
* - read all the tree entries FIRST, saving them
|
|
|
|
* - note that the above step provides an opportunity to compute necessary
|
|
|
|
* additional details before the "real" traversal
|
|
|
|
* - loop through the saved entries and call the original callback on them
|
|
|
|
*/
|
|
|
|
static int traverse_trees_wrapper(struct index_state *istate,
|
|
|
|
int n,
|
|
|
|
struct tree_desc *t,
|
|
|
|
struct traverse_info *info)
|
|
|
|
{
|
|
|
|
int ret, i, old_offset;
|
|
|
|
traverse_callback_t old_fn;
|
|
|
|
char *old_callback_data_traverse_path;
|
|
|
|
struct merge_options *opt = info->data;
|
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
|
|
|
|
2021-03-11 00:38:28 +00:00
|
|
|
assert(renames->dir_rename_mask == 2 || renames->dir_rename_mask == 4);
|
|
|
|
|
2021-03-11 00:38:27 +00:00
|
|
|
old_callback_data_traverse_path = renames->callback_data_traverse_path;
|
|
|
|
old_fn = info->fn;
|
|
|
|
old_offset = renames->callback_data_nr;
|
|
|
|
|
|
|
|
renames->callback_data_traverse_path = NULL;
|
|
|
|
info->fn = traverse_trees_wrapper_callback;
|
|
|
|
ret = traverse_trees(istate, n, t, info);
|
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
|
|
|
|
info->traverse_path = renames->callback_data_traverse_path;
|
|
|
|
info->fn = old_fn;
|
|
|
|
for (i = old_offset; i < renames->callback_data_nr; ++i) {
|
|
|
|
info->fn(n,
|
|
|
|
renames->callback_data[i].mask,
|
|
|
|
renames->callback_data[i].dirmask,
|
|
|
|
renames->callback_data[i].names,
|
|
|
|
info);
|
|
|
|
}
|
|
|
|
|
|
|
|
renames->callback_data_nr = old_offset;
|
|
|
|
free(renames->callback_data_traverse_path);
|
|
|
|
renames->callback_data_traverse_path = old_callback_data_traverse_path;
|
|
|
|
info->traverse_path = NULL;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2020-12-13 08:04:16 +00:00
|
|
|
static void setup_path_info(struct merge_options *opt,
|
|
|
|
struct string_list_item *result,
|
|
|
|
const char *current_dir_name,
|
|
|
|
int current_dir_name_len,
|
|
|
|
char *fullpath, /* we'll take over ownership */
|
|
|
|
struct name_entry *names,
|
|
|
|
struct name_entry *merged_version,
|
|
|
|
unsigned is_null, /* boolean */
|
|
|
|
unsigned df_conflict, /* boolean */
|
|
|
|
unsigned filemask,
|
|
|
|
unsigned dirmask,
|
|
|
|
int resolved /* boolean */)
|
|
|
|
{
|
|
|
|
/* result->util is void*, so mi is a convenience typed variable */
|
|
|
|
struct merged_info *mi;
|
|
|
|
|
|
|
|
assert(!is_null || resolved);
|
|
|
|
assert(!df_conflict || !resolved); /* df_conflict implies !resolved */
|
|
|
|
assert(resolved == (merged_version != NULL));
|
|
|
|
|
2021-07-31 17:27:38 +00:00
|
|
|
mi = mem_pool_calloc(&opt->priv->pool, 1,
|
|
|
|
resolved ? sizeof(struct merged_info) :
|
|
|
|
sizeof(struct conflict_info));
|
2020-12-13 08:04:16 +00:00
|
|
|
mi->directory_name = current_dir_name;
|
|
|
|
mi->basename_offset = current_dir_name_len;
|
|
|
|
mi->clean = !!resolved;
|
|
|
|
if (resolved) {
|
|
|
|
mi->result.mode = merged_version->mode;
|
|
|
|
oidcpy(&mi->result.oid, &merged_version->oid);
|
|
|
|
mi->is_null = !!is_null;
|
|
|
|
} else {
|
|
|
|
int i;
|
|
|
|
struct conflict_info *ci;
|
|
|
|
|
|
|
|
ASSIGN_AND_VERIFY_CI(ci, mi);
|
|
|
|
for (i = MERGE_BASE; i <= MERGE_SIDE2; i++) {
|
|
|
|
ci->pathnames[i] = fullpath;
|
|
|
|
ci->stages[i].mode = names[i].mode;
|
|
|
|
oidcpy(&ci->stages[i].oid, &names[i].oid);
|
|
|
|
}
|
|
|
|
ci->filemask = filemask;
|
|
|
|
ci->dirmask = dirmask;
|
|
|
|
ci->df_conflict = !!df_conflict;
|
|
|
|
if (dirmask)
|
|
|
|
/*
|
|
|
|
* Assume is_null for now, but if we have entries
|
|
|
|
* under the directory then when it is complete in
|
|
|
|
* write_completed_directory() it'll update this.
|
|
|
|
* Also, for D/F conflicts, we have to handle the
|
|
|
|
* directory first, then clear this bit and process
|
|
|
|
* the file to see how it is handled -- that occurs
|
|
|
|
* near the top of process_entry().
|
|
|
|
*/
|
|
|
|
mi->is_null = 1;
|
|
|
|
}
|
|
|
|
strmap_put(&opt->priv->paths, fullpath, mi);
|
|
|
|
result->string = fullpath;
|
|
|
|
result->util = mi;
|
|
|
|
}
|
|
|
|
|
2021-02-14 07:51:51 +00:00
|
|
|
static void add_pair(struct merge_options *opt,
|
|
|
|
struct name_entry *names,
|
|
|
|
const char *pathname,
|
|
|
|
unsigned side,
|
merge-ort: precompute subset of sources for which we need rename detection
Rename detection works by trying to pair all file deletions (or
"sources") with all file additions (or "destinations"), checking
similarity, and then marking the sufficiently similar ones as renames.
This can be expensive if there are many sources and destinations on a
given side of history as it results in an N x M comparison matrix.
However, there are many cases where we can compute in advance that
detecting renames for some of the sources provides no useful information
and thus that we can exclude those sources from the matrix.
To see why, first note that the merge machinery uses detected renames in
two ways:
* directory rename detection: when one side of history renames a
directory, and the other side of history adds new files to that
directory, we want to be able to warn the user about the need to
choose whether those new files stay in the old directory or move
to the new one.
* three-way content merging: in order to do three-way content merging
of files, we need three different file versions. If one side of
history renamed a file, then some of the content for the file is
found under a different path than in the merge base or on the
other side of history.
Add a simple testcase showing the two kinds of reasons renames are
relevant; it's a testcase that will only pass if we detect both kinds of
needed renames.
Other than the testcase added above, this commit concentrates just on
the three-way content merging; it will punt and mark all sources as
needed for directory rename detection, and leave it to future commits to
narrow that down more.
The point of three-way content merging is to reconcile changes made on
*both* sides of history. What if the file wasn't modified on both
sides? There are two possibilities:
* If it wasn't modified on the renamed side:
-> then we get to do exact rename detection, which is cheap.
* If it wasn't modified on the unrenamed side:
-> then detection of a rename for that source file is irrelevant
That latter claim might be surprising at first, so let's walk through a
case to show why rename detection for that source file is irrelevant.
Let's use two filenames, old.c & new.c, with the following abbreviated
object ids (and where the value '000000' is used to denote that the file
is missing in that commit):
old.c new.c
MERGE_BASE: 01d01d 000000
MERGE_SIDE1: 01d01d 000000
MERGE_SIDE2: 000000 5e1ec7
If the rename *isn't* detected:
then old.c looks like it was unmodified on one side and deleted on
the other and should thus be removed. new.c looks like a new file we
should keep as-is.
If the rename *is* detected:
then a three-way content merge is done. Since the version of the
file in MERGE_BASE and MERGE_SIDE1 are identical, the three-way merge
will produce exactly the version of the file whose abbreviated
object id is 5e1ec7. It will record that file at the path new.c,
while removing old.c from the directory.
Note that these two results are identical -- a single file named 'new.c'
with object id 5e1ec7. In other words, it doesn't matter if the rename
is detected in the case where the file is unmodified on the unrenamed
side.
Use this information to compute whether we need rename detection for
each source created in add_pair().
It's probably worth noting that there used to be a few other edge or
corner cases besides three-way content merges and directory rename
detection where lack of rename detection could have affected the result,
but those cases actually highlighted where conflict resolution methods
were not consistent with each other. Fixing those inconsistencies were
thus critically important to enabling this optimization. That work
involved the following:
* bringing consistency to add/add, rename/add, and rename/rename
conflict types, as done back in the topic merged at commit
ac193e0e0a ("Merge branch 'en/merge-path-collision'", 2019-01-04),
and further extended in commits 2a7c16c980 ("t6422, t6426: be more
flexible for add/add conflicts involving renames", 2020-08-10) and
e8eb99d4a6 ("t642[23]: be more flexible for add/add conflicts
involving pair renames", 2020-08-10)
* making rename/delete more consistent with modify/delete
as done in commits 1f3c9ba707 ("t6425: be more flexible with
rename/delete conflict messages", 2020-08-10) and 727c75b23f
("t6404, t6423: expect improved rename/delete handling in ort
backend", 2020-10-26)
Since the set of relevant_sources we compute has not yet been narrowed
down for directory rename detection, we do not pass it to
diffcore_rename_extended() yet. That will be done after subsequent
commits narrow down the list of relevant_sources needed for directory
rename detection reasons.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-11 00:38:25 +00:00
|
|
|
unsigned is_add /* if false, is_delete */,
|
merge-ort: precompute whether directory rename detection is needed
The point of directory rename detection is that if one side of history
renames a directory, and the other side adds new files under the old
directory, then the merge can move those new files into the new
directory. This leads to the following important observation:
* If the other side does not add any new files under the old
directory, we do not need to detect any renames for that directory.
Similarly, directory rename detection had an important requirement:
* If a directory still exists on one side of history, it has not been
renamed on that side of history. (See section 4 of t6423 or
Documentation/technical/directory-rename-detection.txt for more
details).
Using these two bits of information, we note that directory rename
detection is only needed in cases where (1) directories exist in the
merge base and on one side of history (i.e. dirmask == 3 or dirmask ==
5), and (2) where there is some new file added to that directory on the
side where it still exists (thus where the file has filemask == 2 or
filemask == 4, respectively). This has to be done in two steps, because
we have the dirmask when we are first considering the directory, and
won't get the filemasks for the files within it until we recurse into
that directory. So, we save
dir_rename_mask = dirmask - 1
when we hit a directory that is missing on one side, and then later look
for cases of
filemask == dir_rename_mask
One final note is that as soon as we hit a directory that needs
directory rename detection, we will need to detect renames in all
subdirectories of that directory as well due to the "majority rules"
decision when files are renamed into different directory hierarchies.
We arbitrarily use the special value of 0x07 to record when we've hit
such a directory.
The combination of all the above means that we introduce a variable
named dir_rename_mask (couldn't think of a better name) which has one
of the following values as we traverse into a directory:
* 0x00: directory rename detection not needed
* 0x02 or 0x04: directory rename detection only needed if files added
* 0x07: directory rename detection definitely needed
We then pass this value through to add_pairs() so that it can mark
location_relevant as true only when dir_rename_mask is 0x07.
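As a small standalone illustration (not merge-ort.c code; the two helper
functions are invented for the sketch), here is how dir_rename_mask would
progress through those states for a directory that survives only on side1
(dirmask == 3) and then gains a new file on that side (filemask == 2):

	#include <stdio.h>

	static unsigned update_for_dir(unsigned dir_rename_mask, unsigned dirmask)
	{
		/* directory present in the base and on exactly one side */
		if (dir_rename_mask != 0x07 && (dirmask == 3 || dirmask == 5))
			return dirmask & ~1;	/* record mask of surviving side */
		return dir_rename_mask;
	}

	static unsigned update_for_file(unsigned dir_rename_mask, unsigned filemask)
	{
		/* file added on the side where the directory still exists */
		if (filemask == dir_rename_mask)
			return 0x07;		/* detection definitely needed */
		return dir_rename_mask;
	}

	int main(void)
	{
		unsigned mask = 0x00;			/* not needed yet */
		mask = update_for_dir(mask, 3);		/* -> 0x02 */
		mask = update_for_file(mask, 2);	/* -> 0x07 */
		printf("final dir_rename_mask = 0x%02x\n", mask);
		return 0;
	}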
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-11 00:38:28 +00:00
|
|
|
unsigned match_mask,
|
|
|
|
unsigned dir_rename_mask)
|
2021-02-14 07:51:51 +00:00
|
|
|
{
|
|
|
|
struct diff_filespec *one, *two;
|
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
|
|
|
int names_idx = is_add ? side : 0;
|
|
|
|
|
merge-ort, diffcore-rename: employ cached renames when possible
When there are many renames between the old base of a series of commits
and the new base, the way sequencer.c, merge-recursive.c, and
diffcore-rename.c have traditionally split the work resulted in
redetecting the same renames with each and every commit being
transplanted. To address this, the last several commits have been
creating a cache of rename detection results, determining when it was
safe to use such a cache in subsequent merge operations, adding helper
functions, and so on. See the previous half dozen commit messages for
additional discussion of this optimization, particularly the message a
few commits ago entitled "add code to check for whether cached renames
can be reused". This commit finally ties all of that work together,
modifying the merge algorithm to make use of these cached renames.
For the testcases mentioned in commit 557ac0350d ("merge-ort: begin
performance work; instrument with trace2_region_* calls", 2020-10-28),
this change improves the performance as follows:
                        Before                After
   no-renames:      5.665 s ± 0.129 s     5.622 s ± 0.059 s
   mega-renames:   11.435 s ± 0.158 s    10.127 s ± 0.073 s
   just-one-mega:   494.2 ms ± 6.1 ms     500.3 ms ± 3.8 ms
That's a fairly small improvement, but mostly because the previous
optimizations were so effective for these particular testcases; this
optimization only kicks in when the others don't. If we undid the
basename-guided rename detection and skip-irrelevant-renames
optimizations, then we'd see that this series by itself improved
performance as follows:
                   Before Basename Series   After Just This Series
   no-renames:       13.815 s ± 0.062 s        5.697 s ± 0.080 s
   mega-renames:   1799.937 s ± 0.493 s      205.709 s ± 0.457 s
Since this optimization kicks in to help accelerate cases where the
previous optimizations do not apply, this last comparison shows that
this cached-renames optimization has the potential to help significantly
in cases that don't meet the requirements for the other optimizations to
be effective.
The changes made in this optimization also lay some important groundwork
for a future optimization around having collect_merge_info() avoid
recursing into subtrees in more cases.
However, for this optimization to be effective, merge_switch_to_result()
should only be called when the rebase or cherry-pick operation has
either completed or hit a case where the user needs to resolve a
conflict or edit the result. If it is called after every commit, as
sequencer.c does, then the working tree and index are needlessly updated
with every commit and the cached metadata is tossed, defeating this
optimization. Refactoring sequencer.c to only call
merge_switch_to_result() at the end of the operation is a bigger
undertaking, and the practical benefits of this optimization will not be
realized until that work is performed. Since `test-tool fast-rebase`
only updates at the end of the operation, it was used to obtain the
timings above.
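As a rough sketch of that intended caller pattern (the overall shape of what
`test-tool fast-rebase` does, not a verbatim excerpt; the two tree-lookup
helpers are hypothetical and error handling is omitted), the per-commit merges
stay in memory and merge_switch_to_result() runs once at the end so the cached
renames in opt->priv survive from one pick to the next:

	/*
	 * Hypothetical driver.  commit_parent_tree() and commit_tree() stand
	 * in for the real tree-lookup plumbing; they are not git functions.
	 */
	static void replay_in_memory(struct merge_options *opt,
				     struct tree *head_tree,
				     struct tree *onto_tree,
				     struct commit **picks, int nr_picks)
	{
		struct merge_result result = { 0 };
		struct tree *current = onto_tree;
		int i;

		for (i = 0; i < nr_picks; i++) {
			struct tree *base = commit_parent_tree(picks[i]);
			struct tree *side2 = commit_tree(picks[i]);

			merge_incore_nonrecursive(opt, base, current, side2, &result);
			if (!result.clean)
				break;		/* stop; user resolves conflicts */
			current = result.tree;	/* carry merged tree forward */
		}

		/*
		 * A single working tree / index update for the whole series;
		 * calling this after every pick would discard the rename cache.
		 */
		merge_switch_to_result(opt, head_tree, &result, 1, !result.clean);
	}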
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-20 06:09:41 +00:00
|
|
|
if (is_add) {
|
2021-06-08 16:11:42 +00:00
|
|
|
assert(match_mask == 0 || match_mask == 6);
|
2021-05-20 06:09:41 +00:00
|
|
|
if (strset_contains(&renames->cached_target_names[side],
|
|
|
|
pathname))
|
|
|
|
return;
|
|
|
|
} else {
|
2021-03-11 00:38:25 +00:00
|
|
|
unsigned content_relevant = (match_mask == 0);
|
2021-03-11 00:38:28 +00:00
|
|
|
unsigned location_relevant = (dir_rename_mask == 0x07);
|
2021-03-11 00:38:25 +00:00
|
|
|
|
2021-06-08 16:11:42 +00:00
|
|
|
assert(match_mask == 0 || match_mask == 3 || match_mask == 5);
|
|
|
|
|
2021-05-20 06:09:41 +00:00
|
|
|
/*
|
|
|
|
* If pathname is found in cached_irrelevant[side] due to
|
|
|
|
* previous pick but for this commit content is relevant,
|
|
|
|
* then we need to remove it from cached_irrelevant.
|
|
|
|
*/
|
|
|
|
if (content_relevant)
|
|
|
|
/* strset_remove is no-op if strset doesn't have key */
|
|
|
|
strset_remove(&renames->cached_irrelevant[side],
|
|
|
|
pathname);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We do not need to re-detect renames for paths that we already
|
|
|
|
* know the pairing, i.e. for cached_pairs (or
|
|
|
|
* cached_irrelevant). However, handle_deferred_entries() needs
|
|
|
|
* to loop over the union of keys from relevant_sources[side] and
|
|
|
|
* cached_pairs[side], so for simplicity we set relevant_sources
|
|
|
|
* for all the cached_pairs too and then strip them back out in
|
|
|
|
* prune_cached_from_relevant() at the beginning of
|
|
|
|
* detect_regular_renames().
|
|
|
|
*/
|
2021-03-13 22:22:07 +00:00
|
|
|
if (content_relevant || location_relevant) {
|
|
|
|
/* content_relevant trumps location_relevant */
|
|
|
|
strintmap_set(&renames->relevant_sources[side], pathname,
|
|
|
|
content_relevant ? RELEVANT_CONTENT : RELEVANT_LOCATION);
|
|
|
|
}
|
2021-05-20 06:09:41 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Avoid creating pair if we've already cached rename results.
|
|
|
|
* Note that we do this after setting relevant_sources[side]
|
|
|
|
* as noted in the comment above.
|
|
|
|
*/
|
|
|
|
if (strmap_contains(&renames->cached_pairs[side], pathname) ||
|
|
|
|
strset_contains(&renames->cached_irrelevant[side], pathname))
|
|
|
|
return;
|
2021-03-11 00:38:25 +00:00
|
|
|
}
|
|
|
|
|
2021-07-31 17:27:38 +00:00
|
|
|
one = pool_alloc_filespec(&opt->priv->pool, pathname);
|
|
|
|
two = pool_alloc_filespec(&opt->priv->pool, pathname);
|
2021-02-14 07:51:51 +00:00
|
|
|
fill_filespec(is_add ? two : one,
|
|
|
|
&names[names_idx].oid, 1, names[names_idx].mode);
|
2021-07-31 17:27:38 +00:00
|
|
|
pool_diff_queue(&opt->priv->pool, &renames->pairs[side], one, two);
|
2021-02-14 07:51:51 +00:00
|
|
|
}
|
|
|
|
|
2021-01-07 21:35:51 +00:00
|
|
|
static void collect_rename_info(struct merge_options *opt,
|
|
|
|
struct name_entry *names,
|
|
|
|
const char *dirname,
|
|
|
|
const char *fullname,
|
|
|
|
unsigned filemask,
|
|
|
|
unsigned dirmask,
|
|
|
|
unsigned match_mask)
|
|
|
|
{
|
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
2021-02-14 07:51:51 +00:00
|
|
|
unsigned side;
|
2021-01-07 21:35:51 +00:00
|
|
|
|
2021-03-11 00:38:28 +00:00
|
|
|
/*
|
|
|
|
* Update dir_rename_mask (determines ignore-rename-source validity)
|
|
|
|
*
|
|
|
|
* dir_rename_mask helps us keep track of when directory rename
|
|
|
|
* detection may be relevant.  Basically, whenever a directory is
|
|
|
|
* removed on one side of history, and a file is added to that
|
|
|
|
* directory on the other side of history, directory rename
|
|
|
|
* detection is relevant (meaning we have to detect renames for all
|
|
|
|
* files within that directory to deduce where the directory
|
|
|
|
* moved). Also, whenever a directory needs directory rename
|
|
|
|
* detection, due to the "majority rules" choice for where to move
|
|
|
|
* it (see t6423 testcase 1f), we also need to detect renames for
|
|
|
|
* all files within subdirectories of that directory as well.
|
|
|
|
*
|
|
|
|
* Here we haven't looked at files within the directory yet; we are
|
|
|
|
* just looking at the directory itself. So, if we aren't yet in
|
|
|
|
* a case where a parent directory needed directory rename detection
|
|
|
|
* (i.e. dir_rename_mask != 0x07), and if the directory was removed
|
|
|
|
* on one side of history, record the mask of the other side of
|
|
|
|
* history in dir_rename_mask.
|
|
|
|
*/
|
|
|
|
if (renames->dir_rename_mask != 0x07 &&
|
|
|
|
(dirmask == 3 || dirmask == 5)) {
|
|
|
|
/* simple sanity check */
|
|
|
|
assert(renames->dir_rename_mask == 0 ||
|
|
|
|
renames->dir_rename_mask == (dirmask & ~1));
|
|
|
|
/* update dir_rename_mask; have it record mask of new side */
|
|
|
|
renames->dir_rename_mask = (dirmask & ~1);
|
|
|
|
}
|
|
|
|
|
2021-01-07 21:35:51 +00:00
|
|
|
/* Update dirs_removed, as needed */
|
|
|
|
if (dirmask == 1 || dirmask == 3 || dirmask == 5) {
|
|
|
|
/* absent_mask = 0x07 - dirmask; sides = absent_mask/2 */
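		/*
		 * (Worked example, for illustration only: dirmask == 1 gives
		 * sides == 3, i.e. the directory is gone on both sides;
		 * dirmask == 3 gives sides == 2, gone only on side2; and
		 * dirmask == 5 gives sides == 1, gone only on side1.)
		 */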
|
|
|
|
unsigned sides = (0x07 - dirmask)/2;
|
2021-03-13 22:22:03 +00:00
|
|
|
unsigned relevance = (renames->dir_rename_mask == 0x07) ?
|
|
|
|
RELEVANT_FOR_ANCESTOR : NOT_RELEVANT;
|
|
|
|
/*
|
|
|
|
* Record relevance of this directory. However, note that
|
|
|
|
* when collect_merge_info_callback() recurses into this
|
|
|
|
* directory and calls collect_rename_info() on paths
|
|
|
|
* within that directory, if we find a path that was added
|
|
|
|
* to this directory on the other side of history, we will
|
|
|
|
* upgrade this value to RELEVANT_FOR_SELF; see below.
|
|
|
|
*/
|
2021-01-07 21:35:51 +00:00
|
|
|
if (sides & 1)
|
2021-03-13 22:22:03 +00:00
|
|
|
strintmap_set(&renames->dirs_removed[1], fullname,
|
|
|
|
relevance);
|
2021-01-07 21:35:51 +00:00
|
|
|
if (sides & 2)
|
2021-03-13 22:22:03 +00:00
|
|
|
strintmap_set(&renames->dirs_removed[2], fullname,
|
|
|
|
relevance);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Here's the block that potentially upgrades to RELEVANT_FOR_SELF.
|
|
|
|
* When we run across a file added to a directory,
|
|
|
|
* find the directory of the file and upgrade its relevance.
|
|
|
|
*/
|
|
|
|
if (renames->dir_rename_mask == 0x07 &&
|
|
|
|
(filemask == 2 || filemask == 4)) {
|
|
|
|
/*
|
|
|
|
* Need directory rename for parent directory on other side
|
|
|
|
* of history from added file. Thus
|
|
|
|
* side = (~filemask & 0x06) >> 1
|
|
|
|
* or
|
|
|
|
* side = 3 - (filemask/2).
|
|
|
|
*/
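		/*
		 * (Worked example, for illustration only: filemask == 2 means
		 * the file was added on side1, so side = 3 - 1 = 2 and the
		 * rename must have happened on side2; filemask == 4 gives
		 * side = 3 - 2 = 1.)
		 */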
|
|
|
|
unsigned side = 3 - (filemask >> 1);
|
|
|
|
strintmap_set(&renames->dirs_removed[side], dirname,
|
|
|
|
RELEVANT_FOR_SELF);
|
2021-01-07 21:35:51 +00:00
|
|
|
}
|
2021-02-14 07:51:51 +00:00
|
|
|
|
|
|
|
if (filemask == 0 || filemask == 7)
|
|
|
|
return;
|
|
|
|
|
|
|
|
for (side = MERGE_SIDE1; side <= MERGE_SIDE2; ++side) {
|
|
|
|
unsigned side_mask = (1 << side);
|
|
|
|
|
|
|
|
/* Check for deletion on side */
|
|
|
|
if ((filemask & 1) && !(filemask & side_mask))
|
2021-03-11 00:38:25 +00:00
|
|
|
add_pair(opt, names, fullname, side, 0 /* delete */,
|
2021-03-11 00:38:28 +00:00
|
|
|
match_mask & filemask,
|
|
|
|
renames->dir_rename_mask);
|
2021-02-14 07:51:51 +00:00
|
|
|
|
|
|
|
/* Check for addition on side */
|
|
|
|
if (!(filemask & 1) && (filemask & side_mask))
|
2021-03-11 00:38:25 +00:00
|
|
|
add_pair(opt, names, fullname, side, 1 /* add */,
|
2021-03-11 00:38:28 +00:00
|
|
|
match_mask & filemask,
|
|
|
|
renames->dir_rename_mask);
|
2021-02-14 07:51:51 +00:00
|
|
|
}
|
2021-01-07 21:35:51 +00:00
|
|
|
}
|
|
|
|
|
2020-12-13 08:04:13 +00:00
|
|
|
static int collect_merge_info_callback(int n,
|
|
|
|
unsigned long mask,
|
|
|
|
unsigned long dirmask,
|
|
|
|
struct name_entry *names,
|
|
|
|
struct traverse_info *info)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* n is 3. Always.
|
|
|
|
* common ancestor (mbase) has mask 1, and stored in index 0 of names
|
|
|
|
* head of side 1 (side1) has mask 2, and stored in index 1 of names
|
|
|
|
* head of side 2 (side2) has mask 4, and stored in index 2 of names
|
|
|
|
*/
|
|
|
|
struct merge_options *opt = info->data;
|
|
|
|
struct merge_options_internal *opti = opt->priv;
|
2021-03-11 00:38:28 +00:00
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
2020-12-13 08:04:16 +00:00
|
|
|
struct string_list_item pi; /* Path Info */
|
|
|
|
struct conflict_info *ci; /* typed alias to pi.util (which is void*) */
|
2020-12-13 08:04:13 +00:00
|
|
|
struct name_entry *p;
|
|
|
|
size_t len;
|
|
|
|
char *fullpath;
|
2020-12-13 08:04:16 +00:00
|
|
|
const char *dirname = opti->current_dir_name;
|
2021-03-11 00:38:28 +00:00
|
|
|
unsigned prev_dir_rename_mask = renames->dir_rename_mask;
|
2020-12-13 08:04:13 +00:00
|
|
|
unsigned filemask = mask & ~dirmask;
|
2020-12-13 08:04:15 +00:00
|
|
|
unsigned match_mask = 0; /* will be updated below */
|
2020-12-13 08:04:13 +00:00
|
|
|
unsigned mbase_null = !(mask & 1);
|
|
|
|
unsigned side1_null = !(mask & 2);
|
|
|
|
unsigned side2_null = !(mask & 4);
|
2020-12-13 08:04:14 +00:00
|
|
|
unsigned side1_matches_mbase = (!side1_null && !mbase_null &&
|
|
|
|
names[0].mode == names[1].mode &&
|
|
|
|
oideq(&names[0].oid, &names[1].oid));
|
|
|
|
unsigned side2_matches_mbase = (!side2_null && !mbase_null &&
|
|
|
|
names[0].mode == names[2].mode &&
|
|
|
|
oideq(&names[0].oid, &names[2].oid));
|
|
|
|
unsigned sides_match = (!side1_null && !side2_null &&
|
|
|
|
names[1].mode == names[2].mode &&
|
|
|
|
oideq(&names[1].oid, &names[2].oid));
|
2020-12-13 08:04:13 +00:00
|
|
|
|
2020-12-13 08:04:15 +00:00
|
|
|
/*
|
|
|
|
* Note: When a path is a file on one side of history and a directory
|
|
|
|
* in another, we have a directory/file conflict. In such cases, if
|
|
|
|
* the conflict doesn't resolve from renames and deletions, then we
|
|
|
|
* always leave directories where they are and move files out of the
|
|
|
|
* way. Thus, while struct conflict_info has a df_conflict field to
|
|
|
|
* track such conflicts, we ignore that field for any directories at
|
|
|
|
* a path and only pay attention to it for files at the given path.
|
|
|
|
* The fact that we leave directories where they are also means that
|
|
|
|
* we do not need to worry about getting additional df_conflict
|
|
|
|
* information propagated from parent directories down to children
|
|
|
|
* (unlike, say, traverse_trees_recursive() in unpack-trees.c, which
|
|
|
|
* sets a newinfo.df_conflicts field specifically to propagate it).
|
|
|
|
*/
|
|
|
|
unsigned df_conflict = (filemask != 0) && (dirmask != 0);
|
|
|
|
|
2020-12-13 08:04:13 +00:00
|
|
|
/* n = 3 is a fundamental assumption. */
|
|
|
|
if (n != 3)
|
|
|
|
BUG("Called collect_merge_info_callback wrong");
|
|
|
|
|
|
|
|
/*
|
|
|
|
* A bunch of sanity checks verifying that traverse_trees() calls
|
|
|
|
* us the way I expect. Could just remove these at some point,
|
|
|
|
* though maybe they are helpful to future code readers.
|
|
|
|
*/
|
|
|
|
assert(mbase_null == is_null_oid(&names[0].oid));
|
|
|
|
assert(side1_null == is_null_oid(&names[1].oid));
|
|
|
|
assert(side2_null == is_null_oid(&names[2].oid));
|
|
|
|
assert(!mbase_null || !side1_null || !side2_null);
|
|
|
|
assert(mask > 0 && mask < 8);
|
|
|
|
|
2020-12-13 08:04:15 +00:00
|
|
|
/* Determine match_mask */
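/* (Bits mirror the names[] indices above: 1 = mbase, 2 = side1, 4 = side2;
 *  e.g. match_mask == 3 means side1 is identical to the merge base.) */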
|
|
|
|
if (side1_matches_mbase)
|
|
|
|
match_mask = (side2_matches_mbase ? 7 : 3);
|
|
|
|
else if (side2_matches_mbase)
|
|
|
|
match_mask = 5;
|
|
|
|
else if (sides_match)
|
|
|
|
match_mask = 6;
|
|
|
|
|
2020-12-13 08:04:13 +00:00
|
|
|
/*
|
|
|
|
* Get the name of the relevant filepath, which we'll pass to
|
|
|
|
* setup_path_info() for tracking.
|
|
|
|
*/
|
|
|
|
p = names;
|
|
|
|
while (!p->mode)
|
|
|
|
p++;
|
|
|
|
len = traverse_path_len(info, p->pathlen);
|
|
|
|
|
|
|
|
/* +1 in both of the following lines to include the NUL byte */
|
2021-07-31 17:27:38 +00:00
|
|
|
fullpath = mem_pool_alloc(&opt->priv->pool, len + 1);
|
2020-12-13 08:04:13 +00:00
|
|
|
make_traverse_path(fullpath, len + 1, info, p->path, p->pathlen);
|
|
|
|
|
2020-12-13 08:04:17 +00:00
|
|
|
/*
|
|
|
|
* If mbase, side1, and side2 all match, we can resolve early. Even
|
|
|
|
* if these are trees, there will be no renames or anything
|
|
|
|
* underneath.
|
|
|
|
*/
|
|
|
|
if (side1_matches_mbase && side2_matches_mbase) {
|
|
|
|
/* mbase, side1, & side2 all match; use mbase as resolution */
|
|
|
|
setup_path_info(opt, &pi, dirname, info->pathlen, fullpath,
|
2021-07-16 05:22:32 +00:00
|
|
|
names, names+0, mbase_null, 0 /* df_conflict */,
|
|
|
|
filemask, dirmask, 1 /* resolved */);
|
2020-12-13 08:04:17 +00:00
|
|
|
return mask;
|
|
|
|
}
|
|
|
|
|
2021-07-16 05:22:31 +00:00
|
|
|
/*
|
|
|
|
* If the sides match, and all three paths are present and are
|
|
|
|
* files, then we can take either as the resolution. We can't do
|
|
|
|
* this with trees, because there may be rename sources from the
|
|
|
|
* merge_base.
|
|
|
|
*/
|
|
|
|
if (sides_match && filemask == 0x07) {
|
|
|
|
/* use side1 (== side2) version as resolution */
|
|
|
|
setup_path_info(opt, &pi, dirname, info->pathlen, fullpath,
|
|
|
|
names, names+1, side1_null, 0,
|
|
|
|
filemask, dirmask, 1);
|
|
|
|
return mask;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If side1 matches mbase and all three paths are present and are
|
|
|
|
* files, then we can use side2 as the resolution. We cannot
|
|
|
|
* necessarily do this for trees, because there may be rename
|
|
|
|
* destinations within side2.
|
|
|
|
*/
|
|
|
|
if (side1_matches_mbase && filemask == 0x07) {
|
|
|
|
/* use side2 version as resolution */
|
|
|
|
setup_path_info(opt, &pi, dirname, info->pathlen, fullpath,
|
|
|
|
names, names+2, side2_null, 0,
|
|
|
|
filemask, dirmask, 1);
|
|
|
|
return mask;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Similar to above but swapping sides 1 and 2 */
|
|
|
|
if (side2_matches_mbase && filemask == 0x07) {
|
|
|
|
/* use side1 version as resolution */
|
|
|
|
setup_path_info(opt, &pi, dirname, info->pathlen, fullpath,
|
|
|
|
names, names+1, side1_null, 0,
|
2020-12-13 08:04:17 +00:00
|
|
|
filemask, dirmask, 1);
|
|
|
|
return mask;
|
|
|
|
}
|
|
|
|
|
2021-01-07 21:35:51 +00:00
|
|
|
/*
|
2021-07-16 05:22:32 +00:00
|
|
|
* Sometimes we can tell that a source path need not be included in
|
|
|
|
* rename detection -- namely, whenever either
|
|
|
|
* side1_matches_mbase && side2_null
|
|
|
|
* or
|
|
|
|
* side2_matches_mbase && side1_null
|
|
|
|
* However, we call collect_rename_info() even in those cases,
|
|
|
|
* because exact renames are cheap and would let us remove both a
|
|
|
|
* source and destination path. We'll cull the unneeded sources
|
|
|
|
* later.
|
2021-01-07 21:35:51 +00:00
|
|
|
*/
|
|
|
|
collect_rename_info(opt, names, dirname, fullpath,
|
|
|
|
filemask, dirmask, match_mask);
|
|
|
|
|
2020-12-13 08:04:13 +00:00
|
|
|
/*
|
2021-07-16 05:22:32 +00:00
|
|
|
* None of the special cases above matched, so we have a
|
|
|
|
* provisional conflict. (Rename detection might allow us to
|
|
|
|
* unconflict some more cases, but that comes later so all we can
|
|
|
|
* do now is record the different non-null file hashes.)
|
2020-12-13 08:04:13 +00:00
|
|
|
*/
|
2020-12-13 08:04:16 +00:00
|
|
|
setup_path_info(opt, &pi, dirname, info->pathlen, fullpath,
|
|
|
|
names, NULL, 0, df_conflict, filemask, dirmask, 0);
|
|
|
|
|
|
|
|
ci = pi.util;
|
|
|
|
VERIFY_CI(ci);
|
2020-12-13 08:04:15 +00:00
|
|
|
ci->match_mask = match_mask;
|
2020-12-13 08:04:13 +00:00
|
|
|
|
|
|
|
/* If dirmask, recurse into subdirectories */
|
|
|
|
if (dirmask) {
|
|
|
|
struct traverse_info newinfo;
|
|
|
|
struct tree_desc t[3];
|
|
|
|
void *buf[3] = {NULL, NULL, NULL};
|
|
|
|
const char *original_dir_name;
|
2021-07-16 05:22:35 +00:00
|
|
|
int i, ret, side;
|
2020-12-13 08:04:13 +00:00
|
|
|
|
2021-07-16 05:22:35 +00:00
|
|
|
/*
|
|
|
|
* Check for whether we can avoid recursing due to one side
|
|
|
|
* matching the merge base. The side that does NOT match is
|
|
|
|
* the one that might have a rename destination we need.
|
|
|
|
*/
|
|
|
|
assert(!side1_matches_mbase || !side2_matches_mbase);
|
|
|
|
side = side1_matches_mbase ? MERGE_SIDE2 :
|
|
|
|
side2_matches_mbase ? MERGE_SIDE1 : MERGE_BASE;
|
|
|
|
if (filemask == 0 && (dirmask == 2 || dirmask == 4)) {
|
|
|
|
/*
|
|
|
|
* Also defer recursing into new directories; set up a
|
|
|
|
* few variables to let us do so.
|
|
|
|
*/
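/* (dirmask == 2 or 4 means the new directory exists on only side1 or
 * side2; side = dirmask / 2 picks that side, while match_mask = 7 - dirmask
 * records that the other two trees trivially agree by both lacking it.) */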
|
|
|
|
ci->match_mask = (7 - dirmask);
|
|
|
|
side = dirmask / 2;
|
|
|
|
}
|
|
|
|
if (renames->dir_rename_mask != 0x07 &&
|
|
|
|
side != MERGE_BASE &&
|
|
|
|
renames->deferred[side].trivial_merges_okay &&
|
|
|
|
!strset_contains(&renames->deferred[side].target_dirs,
|
|
|
|
pi.string)) {
|
|
|
|
strintmap_set(&renames->deferred[side].possible_trivial_merges,
|
|
|
|
pi.string, renames->dir_rename_mask);
|
|
|
|
renames->dir_rename_mask = prev_dir_rename_mask;
|
|
|
|
return mask;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* We need to recurse */
|
2020-12-13 08:04:13 +00:00
|
|
|
ci->match_mask &= filemask;
|
|
|
|
newinfo = *info;
|
|
|
|
newinfo.prev = info;
|
|
|
|
newinfo.name = p->path;
|
|
|
|
newinfo.namelen = p->pathlen;
|
|
|
|
newinfo.pathlen = st_add3(newinfo.pathlen, p->pathlen, 1);
|
2020-12-13 08:04:15 +00:00
|
|
|
/*
|
|
|
|
* If this directory we are about to recurse into cared about
|
|
|
|
* its parent directory (the current directory) having a D/F
|
|
|
|
* conflict, then we'd propagate the masks in this way:
|
|
|
|
* newinfo.df_conflicts |= (mask & ~dirmask);
|
|
|
|
* But we don't worry about propagating D/F conflicts. (See
|
|
|
|
* comment near setting of local df_conflict variable near
|
|
|
|
* the beginning of this function).
|
|
|
|
*/
|
2020-12-13 08:04:13 +00:00
|
|
|
|
|
|
|
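/* Fill the three tree descriptors, aliasing an already-filled one
 * whenever two of the trees are known to be identical so the same
 * tree object is not read twice. */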
for (i = MERGE_BASE; i <= MERGE_SIDE2; i++) {
|
2020-12-13 08:04:14 +00:00
|
|
|
if (i == 1 && side1_matches_mbase)
|
|
|
|
t[1] = t[0];
|
|
|
|
else if (i == 2 && side2_matches_mbase)
|
|
|
|
t[2] = t[0];
|
|
|
|
else if (i == 2 && sides_match)
|
|
|
|
t[2] = t[1];
|
|
|
|
else {
|
|
|
|
const struct object_id *oid = NULL;
|
|
|
|
if (dirmask & 1)
|
|
|
|
oid = &names[i].oid;
|
|
|
|
buf[i] = fill_tree_descriptor(opt->repo,
|
|
|
|
t + i, oid);
|
|
|
|
}
|
2020-12-13 08:04:13 +00:00
|
|
|
dirmask >>= 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
original_dir_name = opti->current_dir_name;
|
2020-12-13 08:04:16 +00:00
|
|
|
opti->current_dir_name = pi.string;
|
2021-03-11 00:38:28 +00:00
|
|
|
if (renames->dir_rename_mask == 0 ||
|
|
|
|
renames->dir_rename_mask == 0x07)
|
|
|
|
ret = traverse_trees(NULL, 3, t, &newinfo);
|
|
|
|
else
|
|
|
|
ret = traverse_trees_wrapper(NULL, 3, t, &newinfo);
|
2020-12-13 08:04:13 +00:00
|
|
|
opti->current_dir_name = original_dir_name;
|
2021-03-11 00:38:28 +00:00
|
|
|
renames->dir_rename_mask = prev_dir_rename_mask;
|
2020-12-13 08:04:13 +00:00
|
|
|
|
|
|
|
for (i = MERGE_BASE; i <= MERGE_SIDE2; i++)
|
|
|
|
free(buf[i]);
|
|
|
|
|
|
|
|
if (ret < 0)
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
return mask;
|
|
|
|
}
|
|
|
|
|
merge-ort: avoid recursing into directories when we don't need to
This combines the work of the last several patches, and implements the
conditions when we don't need to recurse into directories. It's perhaps
easiest to see the logic by separating the fact that a directory might
have both rename sources and rename destinations:
* rename sources: only files present in the merge base can serve as
rename sources, and only when one side deletes that file. When the
tree on one side matches the merge base, that means every file
within the subtree matches the merge base. This means that the
skip-irrelevant-rename-detection optimization from before kicks in
and we don't need any of these files as rename sources.
* rename destinations: the tree that does not match the merge base
might have newly added and hence unmatched destination files.
This is what usually prevents us from doing trivial directory
resolutions in the merge machinery. However, the fact that we have
deferred recursing into this directory until the end means we know
whether there are any unmatched relevant potential rename sources
elsewhere in this merge. If there are no unmatched such relevant
sources anywhere, then there is no need to look for unmatched
potential rename destinations to match them with.
This informs our algorithm:
* Search through relevant_sources; if we have entries, they better all
be reflected in cached_pairs or cached_irrelevant, otherwise they
represent an unmatched potential rename source (causing the
optimization to be disallowed).
* For any relevant_source represented in cached_pairs, we do need to
make sure to get the destination for each source, meaning we need
to recurse into any ancestor directories of those destinations.
* Once we've recursed into all the rename destinations for any
relevant_sources in cached_pairs, we can then do the trivial
directory resolution for the remaining directories.
For the testcases mentioned in commit 557ac0350d ("merge-ort: begin
performance work; instrument with trace2_region_* calls", 2020-10-28),
this change improves the performance as follows:
Before After
no-renames: 5.235 s ± 0.042 s 205.1 ms ± 3.8 ms
mega-renames: 9.419 s ± 0.107 s 1.564 s ± 0.010 s
just-one-mega: 480.1 ms ± 3.9 ms 479.5 ms ± 3.9 ms
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-16 05:22:36 +00:00
|
|
|
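/*
 * Use the side that differs from the merge base as the whole-directory
 * result: match_mask == 5 means mbase and side2 agree, so side1 (i.e.
 * stages[1]) carries the changes; match_mask == 3 means mbase and side1
 * agree, so use side2.
 */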
static void resolve_trivial_directory_merge(struct conflict_info *ci, int side)
|
|
|
|
{
|
|
|
|
VERIFY_CI(ci);
|
|
|
|
assert((side == 1 && ci->match_mask == 5) ||
|
|
|
|
(side == 2 && ci->match_mask == 3));
|
|
|
|
oidcpy(&ci->merged.result.oid, &ci->stages[side].oid);
|
|
|
|
ci->merged.result.mode = ci->stages[side].mode;
|
|
|
|
ci->merged.is_null = is_null_oid(&ci->stages[side].oid);
|
|
|
|
ci->match_mask = 0;
|
|
|
|
ci->merged.clean = 1; /* (ci->filemask == 0); */
|
|
|
|
}
|
|
|
|
|
2021-07-16 05:22:34 +00:00
|
|
|
static int handle_deferred_entries(struct merge_options *opt,
|
|
|
|
struct traverse_info *info)
|
|
|
|
{
|
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
|
|
|
struct hashmap_iter iter;
|
|
|
|
struct strmap_entry *entry;
|
|
|
|
int side, ret = 0;
|
merge-ort: restart merge with cached renames to reduce process entry cost
The merge algorithm mostly consists of the following three functions:
collect_merge_info()
detect_and_process_renames()
process_entries()
Prior to the trivial directory resolution optimization of the last half
dozen commits, process_entries() was consistently the slowest, followed
by collect_merge_info(), then detect_and_process_renames(). When the
trivial directory resolution applies, it often dramatically decreases
the amount of time spent in the two slower functions.
Looking at the performance results in the previous commit, the trivial
directory resolution optimization helps amazingly well when there are no
relevant renames. It also helps really well when reapplying a long
series of linear commits (such as in a rebase or cherry-pick), since the
relevant renames may well be cached from the first reapplied commit.
But when there are any relevant renames that are not cached (represented
by the just-one-mega testcase), then the optimization does not help at
all.
Often, I noticed that when the optimization does not apply, it is
because there are a handful of relevant sources -- maybe even only one.
It felt frustrating to need to recurse into potentially hundreds or even
thousands of directories just for a single rename, but it was needed for
correctness.
However, staring at this list of functions and noticing that
process_entries() is the most expensive and knowing I could avoid it if
I had cached renames suggested a simple idea: change
collect_merge_info()
detect_and_process_renames()
process_entries()
into
collect_merge_info()
detect_and_process_renames()
<cache all the renames, and restart>
collect_merge_info()
detect_and_process_renames()
process_entries()
This may seem odd and look like more work. However, note that although
we run collect_merge_info() twice, the second time we get to employ
trivial directory resolves, which makes it much faster, so the increased
time in collect_merge_info() is small. While we run
detect_and_process_renames() again, all renames are cached so it's
nearly a no-op (we don't call into diffcore_rename_extended() but we do
have a little bit of data structure checking and fixing up). And the
big payoff comes from the fact that process_entries() will be much
faster due to having far fewer entries to process.
This restarting only makes sense if we can save recursing into enough
directories to make it worth our while. Introduce a simple heuristic to
guide this. Note that this heuristic uses a "wanted_factor" that I have
virtually no actual real world data for, just some back-of-the-envelope
quasi-scientific calculations that I included in some comments and then
plucked a simple round number out of thin air. It could be that
tweaking this number to make it either higher or lower improves the
optimization. (There's slightly more here; when I first introduced this
optimization, I used a factor of 10, because I was completely confident
it was big enough to not cause slowdowns in special cases. I was
certain it was higher than needed. Several months later, I added the
rough calculations which make me think the optimal number is close to 2;
but instead of pushing to the limit, I just bumped it to 3 to reduce the
risk that there are special cases where this optimization can result in
slowing down the code a little. If the ratio of path counts is below 3,
we probably will only see minor performance improvements at best
anyway.)
Also, note that while the diffstat looks kind of long (nearly 100
lines), more than half of it is in two comments explaining how things
work.
For the testcases mentioned in commit 557ac0350d ("merge-ort: begin
performance work; instrument with trace2_region_* calls", 2020-10-28),
this change improves the performance as follows:
Before After
no-renames: 205.1 ms ± 3.8 ms 204.2 ms ± 3.0 ms
mega-renames: 1.564 s ± 0.010 s 1.076 s ± 0.015 s
just-one-mega: 479.5 ms ± 3.9 ms 364.1 ms ± 7.0 ms
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-16 05:22:37 +00:00
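/*
 * path_count_before: number of paths collected before handling the
 * deferred directories; path_count_after: the count once we have
 * recursed into whatever directories the cached renames still require.
 * Their ratio drives the redo_after_renames heuristic at the end of
 * this function.
 */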
|
|
|
int path_count_before, path_count_after = 0;
|
2021-07-16 05:22:34 +00:00
|
|
|
|
2021-07-16 05:22:37 +00:00
|
|
|
path_count_before = strmap_get_size(&opt->priv->paths);
|
2021-07-16 05:22:34 +00:00
|
|
|
for (side = MERGE_SIDE1; side <= MERGE_SIDE2; side++) {
|
2021-07-16 05:22:36 +00:00
|
|
|
unsigned optimization_okay = 1;
|
|
|
|
struct strintmap copy;
|
|
|
|
|
|
|
|
/* Loop over the set of paths we need to know rename info for */
|
|
|
|
strset_for_each_entry(&renames->relevant_sources[side],
|
|
|
|
&iter, entry) {
|
|
|
|
char *rename_target, *dir, *dir_marker;
|
|
|
|
struct strmap_entry *e;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If we don't know delete/rename info for this path,
|
|
|
|
* then we need to recurse into all trees to get all
|
|
|
|
* adds to make sure we have it.
|
|
|
|
*/
|
|
|
|
if (strset_contains(&renames->cached_irrelevant[side],
|
|
|
|
entry->key))
|
|
|
|
continue;
|
|
|
|
e = strmap_get_entry(&renames->cached_pairs[side],
|
|
|
|
entry->key);
|
|
|
|
if (!e) {
|
|
|
|
optimization_okay = 0;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* If this is a delete, we have enough info already */
|
|
|
|
rename_target = e->value;
|
|
|
|
if (!rename_target)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* If we already walked the rename target, we're good */
|
|
|
|
if (strmap_contains(&opt->priv->paths, rename_target))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Otherwise, we need to get a list of directories that
|
|
|
|
* will need to be recursed into to get this
|
|
|
|
* rename_target.
|
|
|
|
*/
|
|
|
|
dir = xstrdup(rename_target);
|
|
|
|
while ((dir_marker = strrchr(dir, '/'))) {
|
|
|
|
*dir_marker = '\0';
|
|
|
|
if (strset_contains(&renames->deferred[side].target_dirs,
|
|
|
|
dir))
|
|
|
|
break;
|
|
|
|
strset_add(&renames->deferred[side].target_dirs,
|
|
|
|
dir);
|
|
|
|
}
|
|
|
|
free(dir);
|
|
|
|
}
|
|
|
|
renames->deferred[side].trivial_merges_okay = optimization_okay;
|
|
|
|
/*
|
|
|
|
* We need to recurse into any directories in
|
|
|
|
* possible_trivial_merges[side] found in target_dirs[side].
|
|
|
|
* But when we recurse, we may need to queue up some of the
|
|
|
|
* subdirectories for possible_trivial_merges[side]. Since
|
|
|
|
* we can't safely iterate through a hashmap while also adding
|
|
|
|
* entries, move the entries into 'copy', iterate over 'copy',
|
|
|
|
* and then we'll also iterate anything added into
|
|
|
|
* possible_trivial_merges[side] once this loop is done.
|
|
|
|
*/
|
|
|
|
copy = renames->deferred[side].possible_trivial_merges;
|
|
|
|
strintmap_init_with_options(&renames->deferred[side].possible_trivial_merges,
|
|
|
|
0,
|
2021-07-31 17:27:38 +00:00
|
|
|
&opt->priv->pool,
|
2021-07-16 05:22:36 +00:00
|
|
|
0);
|
|
|
|
strintmap_for_each_entry(©, &iter, entry) {
|
2021-07-16 05:22:34 +00:00
|
|
|
const char *path = entry->key;
|
|
|
|
unsigned dir_rename_mask = (intptr_t)entry->value;
|
|
|
|
struct conflict_info *ci;
|
|
|
|
unsigned dirmask;
|
|
|
|
struct tree_desc t[3];
|
|
|
|
void *buf[3] = {NULL,};
|
|
|
|
int i;
|
|
|
|
|
|
|
|
ci = strmap_get(&opt->priv->paths, path);
|
|
|
|
VERIFY_CI(ci);
|
|
|
|
dirmask = ci->dirmask;
|
|
|
|
|
2021-07-16 05:22:36 +00:00
|
|
|
if (optimization_okay &&
|
|
|
|
!strset_contains(&renames->deferred[side].target_dirs,
|
|
|
|
path)) {
|
|
|
|
resolve_trivial_directory_merge(ci, side);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2021-07-16 05:22:34 +00:00
|
|
|
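/* Re-point the traverse_info we were handed at this deferred directory
 * before walking its three trees. */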
info->name = path;
|
|
|
|
info->namelen = strlen(path);
|
|
|
|
info->pathlen = info->namelen + 1;
|
|
|
|
|
|
|
|
for (i = 0; i < 3; i++, dirmask >>= 1) {
|
|
|
|
if (i == 1 && ci->match_mask == 3)
|
|
|
|
t[1] = t[0];
|
|
|
|
else if (i == 2 && ci->match_mask == 5)
|
|
|
|
t[2] = t[0];
|
|
|
|
else if (i == 2 && ci->match_mask == 6)
|
|
|
|
t[2] = t[1];
|
|
|
|
else {
|
|
|
|
const struct object_id *oid = NULL;
|
|
|
|
if (dirmask & 1)
|
|
|
|
oid = &ci->stages[i].oid;
|
|
|
|
buf[i] = fill_tree_descriptor(opt->repo,
|
|
|
|
t+i, oid);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
ci->match_mask &= ci->filemask;
|
|
|
|
opt->priv->current_dir_name = path;
|
|
|
|
renames->dir_rename_mask = dir_rename_mask;
|
|
|
|
if (renames->dir_rename_mask == 0 ||
|
|
|
|
renames->dir_rename_mask == 0x07)
|
|
|
|
ret = traverse_trees(NULL, 3, t, info);
|
|
|
|
else
|
|
|
|
ret = traverse_trees_wrapper(NULL, 3, t, info);
|
|
|
|
|
|
|
|
for (i = MERGE_BASE; i <= MERGE_SIDE2; i++)
|
|
|
|
free(buf[i]);
|
|
|
|
|
|
|
|
if (ret < 0)
|
|
|
|
return ret;
|
|
|
|
}
|
2021-07-16 05:22:36 +00:00
|
|
|
strintmap_clear(©);
|
|
|
|
strintmap_for_each_entry(&renames->deferred[side].possible_trivial_merges,
|
|
|
|
&iter, entry) {
|
|
|
|
const char *path = entry->key;
|
|
|
|
struct conflict_info *ci;
|
|
|
|
|
|
|
|
ci = strmap_get(&opt->priv->paths, path);
|
|
|
|
VERIFY_CI(ci);
|
|
|
|
|
|
|
|
assert(renames->deferred[side].trivial_merges_okay &&
|
|
|
|
!strset_contains(&renames->deferred[side].target_dirs,
|
|
|
|
path));
|
|
|
|
resolve_trivial_directory_merge(ci, side);
|
|
|
|
}
|
2021-07-16 05:22:37 +00:00
|
|
|
if (!optimization_okay || path_count_after)
|
|
|
|
path_count_after = strmap_get_size(&opt->priv->paths);
|
2021-07-16 05:22:34 +00:00
|
|
|
}
|
2021-07-16 05:22:37 +00:00
|
|
|
if (path_count_after) {
|
|
|
|
/*
|
|
|
|
* The choice of wanted_factor here does not affect
|
|
|
|
* correctness, only performance. When the
|
|
|
|
* path_count_after / path_count_before
|
|
|
|
* ratio is high, redoing after renames is a big
|
|
|
|
* performance boost. I suspect that redoing is a wash
|
|
|
|
* somewhere near a value of 2, and below that redoing will
|
|
|
|
* slow things down. I applied a fudge factor and picked
|
|
|
|
* 3; see the commit message when this was introduced for
|
|
|
|
* back of the envelope calculations for this ratio.
|
|
|
|
*/
|
|
|
|
const int wanted_factor = 3;
|
|
|
|
|
|
|
|
/* We should only redo collect_merge_info one time */
|
|
|
|
assert(renames->redo_after_renames == 0);
|
|
|
|
|
|
|
|
if (path_count_after / path_count_before >= wanted_factor) {
|
|
|
|
renames->redo_after_renames = 1;
|
|
|
|
renames->cached_pairs_valid_side = -1;
|
|
|
|
}
|
|
|
|
} else if (renames->redo_after_renames == 2)
|
|
|
|
renames->redo_after_renames = 0;
|
2021-07-16 05:22:34 +00:00
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
2020-12-13 08:04:09 +00:00
|
|
|
static int collect_merge_info(struct merge_options *opt,
|
|
|
|
struct tree *merge_base,
|
|
|
|
struct tree *side1,
|
|
|
|
struct tree *side2)
|
|
|
|
{
|
2020-12-13 08:04:13 +00:00
|
|
|
int ret;
|
|
|
|
struct tree_desc t[3];
|
|
|
|
struct traverse_info info;
|
|
|
|
|
2021-01-19 19:53:50 +00:00
|
|
|
opt->priv->toplevel_dir = "";
|
|
|
|
opt->priv->current_dir_name = opt->priv->toplevel_dir;
|
|
|
|
setup_traverse_info(&info, opt->priv->toplevel_dir);
|
2020-12-13 08:04:13 +00:00
|
|
|
info.fn = collect_merge_info_callback;
|
|
|
|
info.data = opt;
|
|
|
|
info.show_all_errors = 1;
|
|
|
|
|
|
|
|
parse_tree(merge_base);
|
|
|
|
parse_tree(side1);
|
|
|
|
parse_tree(side2);
|
|
|
|
init_tree_desc(t + 0, merge_base->buffer, merge_base->size);
|
|
|
|
init_tree_desc(t + 1, side1->buffer, side1->size);
|
|
|
|
init_tree_desc(t + 2, side2->buffer, side2->size);
|
|
|
|
|
merge-ort: begin performance work; instrument with trace2_region_* calls
Add some timing instrumentation for both merge-ort and diffcore-rename;
I used these to measure and optimize performance in both, and several
future patch series will build on these to reduce the timings of some
select testcases.
=== Setup ===
The primary testcase I used involved rebasing a random topic in the
linux kernel (consisting of 35 patches) against an older version. I
added two variants, one where I rename a toplevel directory, and another
where I only rebase one patch instead of the whole topic. The setup is
as follows:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ git branch hwmon-updates fd8bdb23b91876ac1e624337bb88dc1dcc21d67e
$ git branch hwmon-just-one fd8bdb23b91876ac1e624337bb88dc1dcc21d67e~34
$ git branch base 4703d9119972bf586d2cca76ec6438f819ffa30e
$ git switch -c 5.4-renames v5.4
$ git mv drivers pilots # Introduce over 26,000 renames
$ git commit -m "Rename drivers/ to pilots/"
$ git config merge.renameLimit 30000
$ git config merge.directoryRenames true
=== Testcases ===
Now with REBASE standing for either "git rebase [--merge]" (using
merge-recursive) or "test-tool fast-rebase" (using merge-ort), the
testcases are:
Testcase #1: no-renames
$ git checkout v5.4^0
$ REBASE --onto HEAD base hwmon-updates
Note: technically the name is misleading; there are some renames, but
very few. Rename detection only takes about half the overall time.
Testcase #2: mega-renames
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-updates
Testcase #3: just-one-mega
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-just-one
=== Timing results ===
Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
10 runs for the other two cases):
merge-recursive merge-ort
no-renames: 18.912 s ± 0.174 s 14.263 s ± 0.053 s
mega-renames: 5964.031 s ± 10.459 s 5504.231 s ± 5.150 s
just-one-mega: 149.583 s ± 0.751 s 158.534 s ± 0.498 s
A single re-run of each with some breakdowns:
--- no-renames ---
merge-recursive merge-ort
overall runtime: 19.302 s 14.257 s
inexact rename detection: 7.603 s 7.906 s
everything else: 11.699 s 6.351 s
--- mega-renames ---
merge-recursive merge-ort
overall runtime: 5950.195 s 5499.672 s
inexact rename detection: 5746.309 s 5487.120 s
everything else: 203.886 s 17.552 s
--- just-one-mega ---
merge-recursive merge-ort
overall runtime: 151.001 s 158.582 s
inexact rename detection: 143.448 s 157.835 s
everything else: 7.553 s 0.747 s
=== Timing observations ===
0) Maximum speedup
The "everything else" row represents the maximum speedup we could
achieve if we were to somehow infinitely parallelize inexact rename
detection, but leave everything else alone. The fact that this is so
much smaller than the real runtime (even in the case with virtually no
renames) makes it clear just how overwhelmingly large the time spent on
rename detection can be.
1) no-renames
1a) merge-ort is faster than merge-recursive, which is nice. However,
this still should not be considered good enough. Although the "merge"
backend to rebase (merge-recursive) is sometimes faster than the "apply"
backend, this is one of those cases where it is not. In fact, even
merge-ort is slower. The "apply" backend can complete this testcase in
6.940 s ± 0.485 s
which is about 2x faster than merge-ort and 3x faster than
merge-recursive. One goal of the merge-ort performance work will be to
make it faster than git-am on this (and similar) testcases.
2) mega-renames
2a) Obviously rename detection is a huge cost; it's where most of the time
is spent. We need to cut that down. If we could somehow infinitely
parallelize it and drive its time to 0, the merge-recursive time would
drop to about 204s, and the merge-ort time would drop to about 17s. I
think this particular stat shows I've subtly baked a couple performance
improvements into merge-ort and into fast-rebase already.
3) just-one-mega
3a) not much to say here; it just gives some flavor for how rebasing
only one patch compares to rebasing 35.
=== Goals ===
This patch is obviously just the beginning. Here are some of my goals
that this measurement will help us achieve:
* Drive the cost of rename detection down considerably for merges
* After the above has been achieved, see if there are other slowness
factors (which would have previously been overshadowed by rename
detection costs) which we can then focus on and also optimize.
* Ensure our rebase testcase that requires little rename detection
is noticeably faster with merge-ort than with apply-based rebase.
Signed-off-by: Elijah Newren <newren@gmail.com>
Acked-by: Taylor Blau <ttaylorr@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "traverse_trees", opt->repo);
|
2020-12-13 08:04:13 +00:00
|
|
|
ret = traverse_trees(NULL, 3, t, &info);
|
2021-07-16 05:22:35 +00:00
|
|
|
if (ret == 0)
|
|
|
|
ret = handle_deferred_entries(opt, &info);
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "traverse_trees", opt->repo);
|
2020-12-13 08:04:13 +00:00
|
|
|
|
|
|
|
return ret;
|
2020-12-13 08:04:09 +00:00
|
|
|
}
|
|
|
|
|
2020-12-03 15:59:44 +00:00
|
|
|
/*** Function Grouping: functions related to threeway content merges ***/
|
|
|
|
|
2021-01-01 02:34:45 +00:00
|
|
|
static int find_first_merges(struct repository *repo,
|
|
|
|
const char *path,
|
|
|
|
struct commit *a,
|
|
|
|
struct commit *b,
|
|
|
|
struct object_array *result)
|
|
|
|
{
|
2021-01-01 02:34:47 +00:00
|
|
|
int i, j;
|
|
|
|
struct object_array merges = OBJECT_ARRAY_INIT;
|
|
|
|
struct commit *commit;
|
|
|
|
int contains_another;
|
|
|
|
|
|
|
|
char merged_revision[GIT_MAX_HEXSZ + 2];
|
|
|
|
const char *rev_args[] = { "rev-list", "--merges", "--ancestry-path",
|
|
|
|
"--all", merged_revision, NULL };
|
|
|
|
struct rev_info revs;
|
|
|
|
struct setup_revision_opt rev_opts;
|
|
|
|
|
|
|
|
memset(result, 0, sizeof(struct object_array));
|
|
|
|
memset(&rev_opts, 0, sizeof(rev_opts));
|
|
|
|
|
|
|
|
/* get all revisions that merge commit a */
|
|
|
|
xsnprintf(merged_revision, sizeof(merged_revision), "^%s",
|
|
|
|
oid_to_hex(&a->object.oid));
|
|
|
|
repo_init_revisions(repo, &revs, NULL);
|
|
|
|
/* FIXME: can't handle linked worktrees in submodules yet */
|
|
|
|
revs.single_worktree = path != NULL;
|
|
|
|
setup_revisions(ARRAY_SIZE(rev_args)-1, rev_args, &revs, &rev_opts);
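/*
 * Added note: the setup above is roughly equivalent to running, inside
 * the submodule,
 *   git rev-list --merges --ancestry-path --all ^<commit a>
 * i.e. list merge commits reachable from some ref that are descendants
 * of 'a'; the loop below then keeps only those that also contain 'b'.
 */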
|
|
|
|
|
|
|
|
/* save all revisions from the above list that contain b */
|
|
|
|
if (prepare_revision_walk(&revs))
|
|
|
|
die("revision walk setup failed");
|
|
|
|
while ((commit = get_revision(&revs)) != NULL) {
|
|
|
|
struct object *o = &(commit->object);
|
2021-09-09 18:47:29 +00:00
|
|
|
if (repo_in_merge_bases(repo, b, commit))
|
2021-01-01 02:34:47 +00:00
|
|
|
add_object_array(o, NULL, &merges);
|
|
|
|
}
|
|
|
|
reset_revision_walk();
|
|
|
|
|
|
|
|
/* Now we've got all merges that contain a and b. Prune all
|
|
|
|
* merges that contain another found merge and save them in
|
|
|
|
* result.
|
|
|
|
*/
|
|
|
|
for (i = 0; i < merges.nr; i++) {
|
|
|
|
struct commit *m1 = (struct commit *) merges.objects[i].item;
|
|
|
|
|
|
|
|
contains_another = 0;
|
|
|
|
for (j = 0; j < merges.nr; j++) {
|
|
|
|
struct commit *m2 = (struct commit *) merges.objects[j].item;
|
2021-09-09 18:47:29 +00:00
|
|
|
if (i != j && repo_in_merge_bases(repo, m2, m1)) {
|
2021-01-01 02:34:47 +00:00
|
|
|
contains_another = 1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!contains_another)
|
|
|
|
add_object_array(merges.objects[i].item, NULL, result);
|
|
|
|
}
|
|
|
|
|
|
|
|
object_array_clear(&merges);
|
2022-04-13 20:01:36 +00:00
|
|
|
release_revisions(&revs);
|
2021-01-01 02:34:47 +00:00
|
|
|
return result->nr;
|
2021-01-01 02:34:45 +00:00
|
|
|
}
|
|
|
|
|
2021-01-01 02:34:43 +00:00
|
|
|
static int merge_submodule(struct merge_options *opt,
|
|
|
|
const char *path,
|
|
|
|
const struct object_id *o,
|
|
|
|
const struct object_id *a,
|
|
|
|
const struct object_id *b,
|
|
|
|
struct object_id *result)
|
|
|
|
{
|
2021-09-09 18:47:29 +00:00
|
|
|
struct repository subrepo;
|
|
|
|
struct strbuf sb = STRBUF_INIT;
|
|
|
|
int ret = 0;
|
2021-01-01 02:34:45 +00:00
|
|
|
struct commit *commit_o, *commit_a, *commit_b;
|
|
|
|
int parent_count;
|
|
|
|
struct object_array merges;
|
|
|
|
|
|
|
|
int i;
|
|
|
|
int search = !opt->priv->call_depth;
|
2022-08-04 19:51:05 +00:00
|
|
|
int sub_not_initialized = 1;
|
2022-08-18 07:15:26 +00:00
|
|
|
int sub_flag = CONFLICT_SUBMODULE_FAILED_TO_MERGE;
|
2021-01-01 02:34:45 +00:00
|
|
|
|
|
|
|
/* store fallback answer in result in case we fail */
|
|
|
|
oidcpy(result, opt->priv->call_depth ? o : a);
|
|
|
|
|
|
|
|
/* we cannot handle deletion conflicts */
|
2022-08-04 19:51:05 +00:00
|
|
|
if (is_null_oid(a) || is_null_oid(b))
|
|
|
|
BUG("submodule deleted on one side; this should be handled outside of merge_submodule()");
|
2021-01-01 02:34:45 +00:00
|
|
|
|
2022-08-04 19:51:05 +00:00
|
|
|
if ((sub_not_initialized = repo_submodule_init(&subrepo,
|
|
|
|
opt->repo, path, null_oid()))) {
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_SUBMODULE_NOT_INITIALIZED, 0,
|
|
|
|
path, NULL, NULL, NULL,
|
|
|
|
_("Failed to merge submodule %s (not checked out)"),
|
|
|
|
path);
|
2022-08-04 19:51:05 +00:00
|
|
|
sub_flag = CONFLICT_SUBMODULE_NOT_INITIALIZED;
|
|
|
|
goto cleanup;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (is_null_oid(o)) {
|
|
|
|
path_msg(opt, CONFLICT_SUBMODULE_NULL_MERGE_BASE, 0,
|
|
|
|
path, NULL, NULL, NULL,
|
|
|
|
_("Failed to merge submodule %s (no merge base)"),
|
|
|
|
path);
|
|
|
|
goto cleanup;
|
2021-09-09 18:47:29 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
if (!(commit_o = lookup_commit_reference(&subrepo, o)) ||
|
|
|
|
!(commit_a = lookup_commit_reference(&subrepo, a)) ||
|
|
|
|
!(commit_b = lookup_commit_reference(&subrepo, b))) {
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_SUBMODULE_HISTORY_NOT_AVAILABLE, 0,
|
|
|
|
path, NULL, NULL, NULL,
|
2021-01-01 02:34:45 +00:00
|
|
|
_("Failed to merge submodule %s (commits not present)"),
|
|
|
|
path);
|
2022-08-04 19:51:05 +00:00
|
|
|
sub_flag = CONFLICT_SUBMODULE_HISTORY_NOT_AVAILABLE;
|
2021-09-09 18:47:29 +00:00
|
|
|
goto cleanup;
|
2021-01-01 02:34:45 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* check whether both changes are forward */
|
2021-09-09 18:47:29 +00:00
|
|
|
if (!repo_in_merge_bases(&subrepo, commit_o, commit_a) ||
|
|
|
|
!repo_in_merge_bases(&subrepo, commit_o, commit_b)) {
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_SUBMODULE_MAY_HAVE_REWINDS, 0,
|
|
|
|
path, NULL, NULL, NULL,
|
2021-01-01 02:34:45 +00:00
|
|
|
_("Failed to merge submodule %s "
|
|
|
|
"(commits don't follow merge-base)"),
|
|
|
|
path);
|
2021-09-09 18:47:29 +00:00
|
|
|
goto cleanup;
|
2021-01-01 02:34:45 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/* Case #1: a is contained in b or vice versa */
|
2021-09-09 18:47:29 +00:00
|
|
|
if (repo_in_merge_bases(&subrepo, commit_a, commit_b)) {
|
2021-01-01 02:34:45 +00:00
|
|
|
oidcpy(result, b);
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, INFO_SUBMODULE_FAST_FORWARDING, 1,
|
|
|
|
path, NULL, NULL, NULL,
|
2021-01-01 02:34:45 +00:00
|
|
|
_("Note: Fast-forwarding submodule %s to %s"),
|
|
|
|
path, oid_to_hex(b));
|
2021-09-09 18:47:29 +00:00
|
|
|
ret = 1;
|
|
|
|
goto cleanup;
|
2021-01-01 02:34:45 +00:00
|
|
|
}
|
2021-09-09 18:47:29 +00:00
|
|
|
if (repo_in_merge_bases(&subrepo, commit_b, commit_a)) {
|
2021-01-01 02:34:45 +00:00
|
|
|
oidcpy(result, a);
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, INFO_SUBMODULE_FAST_FORWARDING, 1,
|
|
|
|
path, NULL, NULL, NULL,
|
2021-01-01 02:34:45 +00:00
|
|
|
_("Note: Fast-forwarding submodule %s to %s"),
|
|
|
|
path, oid_to_hex(a));
|
2021-09-09 18:47:29 +00:00
|
|
|
ret = 1;
|
|
|
|
goto cleanup;
|
2021-01-01 02:34:45 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Case #2: There are one or more merges that contain a and b in
|
|
|
|
* the submodule. If there is only one, then present it as a
|
|
|
|
* suggestion to the user, but leave it marked unmerged so the
|
|
|
|
* user needs to confirm the resolution.
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* Skip the search if it makes no sense to the calling context. */
|
|
|
|
if (!search)
|
2021-09-09 18:47:29 +00:00
|
|
|
goto cleanup;
|
2021-01-01 02:34:45 +00:00
|
|
|
|
|
|
|
/* find commit which merges them */
|
2021-09-09 18:47:29 +00:00
|
|
|
parent_count = find_first_merges(&subrepo, path, commit_a, commit_b,
|
2021-01-01 02:34:45 +00:00
|
|
|
&merges);
|
|
|
|
switch (parent_count) {
|
|
|
|
case 0:
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_SUBMODULE_FAILED_TO_MERGE, 0,
|
|
|
|
path, NULL, NULL, NULL,
|
|
|
|
_("Failed to merge submodule %s"), path);
|
2021-01-01 02:34:45 +00:00
|
|
|
break;
|
|
|
|
|
|
|
|
case 1:
|
2021-10-08 21:08:17 +00:00
|
|
|
format_commit(&sb, 4, &subrepo,
|
2021-01-01 02:34:45 +00:00
|
|
|
(struct commit *)merges.objects[0].item);
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_SUBMODULE_FAILED_TO_MERGE_BUT_POSSIBLE_RESOLUTION, 0,
|
|
|
|
path, NULL, NULL, NULL,
|
2021-01-01 02:34:45 +00:00
|
|
|
_("Failed to merge submodule %s, but a possible merge "
|
2022-06-18 00:20:51 +00:00
|
|
|
"resolution exists: %s"),
|
2021-01-01 02:34:45 +00:00
|
|
|
path, sb.buf);
|
|
|
|
strbuf_release(&sb);
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
for (i = 0; i < merges.nr; i++)
|
2021-10-08 21:08:17 +00:00
|
|
|
format_commit(&sb, 4, &subrepo,
|
2021-01-01 02:34:45 +00:00
|
|
|
(struct commit *)merges.objects[i].item);
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_SUBMODULE_FAILED_TO_MERGE_BUT_POSSIBLE_RESOLUTION, 0,
|
|
|
|
path, NULL, NULL, NULL,
|
2021-01-01 02:34:45 +00:00
|
|
|
_("Failed to merge submodule %s, but multiple "
|
|
|
|
"possible merges exist:\n%s"), path, sb.buf);
|
|
|
|
strbuf_release(&sb);
|
|
|
|
}
|
|
|
|
|
|
|
|
object_array_clear(&merges);
|
2021-09-09 18:47:29 +00:00
|
|
|
cleanup:
|
2022-08-04 19:51:05 +00:00
|
|
|
if (!opt->priv->call_depth && !ret) {
|
|
|
|
struct string_list *csub = &opt->priv->conflicted_submodules;
|
|
|
|
struct conflicted_submodule_item *util;
|
|
|
|
const char *abbrev;
|
|
|
|
|
|
|
|
util = xmalloc(sizeof(*util));
|
|
|
|
util->flag = sub_flag;
|
|
|
|
util->abbrev = NULL;
|
|
|
|
if (!sub_not_initialized) {
|
|
|
|
abbrev = repo_find_unique_abbrev(&subrepo, b, DEFAULT_ABBREV);
|
|
|
|
util->abbrev = xstrdup(abbrev);
|
|
|
|
}
|
|
|
|
string_list_append(csub, path)->util = util;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!sub_not_initialized)
|
|
|
|
repo_clear(&subrepo);
|
2021-09-09 18:47:29 +00:00
|
|
|
return ret;
|
2021-01-01 02:34:43 +00:00
|
|
|
}
|
|
|
|
|
merge-ort: have ll_merge() use a special attr_index for renormalization
ll_merge() needs an index when renormalization is requested. Create one
specifically for just this purpose with just the one needed entry. This
fixes t6418.4 and t6418.5 under GIT_TEST_MERGE_ALGORITHM=ort.
NOTE 1: Even if the user has a working copy or a real index (which is
not a given as merge-ort can be used in bare repositories), we
explicitly ignore any .gitattributes file from either of these
locations. merge-ort can be used to merge two branches that are
unrelated to HEAD, so .gitattributes from the working copy and current
index should not be considered relevant.
NOTE 2: Since we are in the middle of merging, there is a risk that
.gitattributes itself is conflicted...leaving us with an ill-defined
situation about how to perform the rest of the merge. It could be that
the .gitattributes file does not even exist on one of the sides of the
merge, or that it has been modified on both sides. If it's been
modified on both sides, it's possible that it could itself be merged
cleanly, though it's also possible that it only merges cleanly if you
use the right version of the .gitattributes file to drive the merge. It
gets kind of complicated. The only test we ever had that attempted to
exercise behavior in this area was seemingly unaware of the undefined
behavior; it knew it wouldn't work for lack of attribute handling
support, was marked as test_expect_failure from the beginning, and still
managed to fail for several reasons unrelated to attribute handling.
See commit 6f6e7cfb52 ("t6038: remove problematic test", 2020-08-03) for
details. So there are probably various ways to improve what
initialize_attr_index() picks in the case of a conflicted .gitattributes
but for now I just implemented something simple -- look for whatever
.gitattributes file we can find in any of the higher order stages and
use it.
Signed-off-by: Elijah Newren <newren@gmail.com>
Reviewed-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-20 00:03:46 +00:00
|
|
|
static void initialize_attr_index(struct merge_options *opt)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* The renormalize_buffer() functions require attributes, and
|
|
|
|
* annoyingly those can only be read from the working tree or from
|
|
|
|
* an index_state. merge-ort doesn't have an index_state, so we
|
|
|
|
* generate a fake one containing only attribute information.
|
|
|
|
*/
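/*
 * Added note: in practice a cleanly merged .gitattributes contributes a
 * single stage-0 entry to this fake index, while a conflicted one
 * contributes one entry per side (base/ours/theirs) that still carries
 * a version of the file; see the two branches below.
 */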
|
|
|
|
struct merged_info *mi;
|
|
|
|
struct index_state *attr_index = &opt->priv->attr_index;
|
|
|
|
struct cache_entry *ce;
|
|
|
|
|
|
|
|
attr_index->initialized = 1;
|
|
|
|
|
|
|
|
if (!opt->renormalize)
|
|
|
|
return;
|
|
|
|
|
|
|
|
mi = strmap_get(&opt->priv->paths, GITATTRIBUTES_FILE);
|
|
|
|
if (!mi)
|
|
|
|
return;
|
|
|
|
|
|
|
|
if (mi->clean) {
|
|
|
|
int len = strlen(GITATTRIBUTES_FILE);
|
|
|
|
ce = make_empty_cache_entry(attr_index, len);
|
|
|
|
ce->ce_mode = create_ce_mode(mi->result.mode);
|
|
|
|
ce->ce_flags = create_ce_flags(0);
|
|
|
|
ce->ce_namelen = len;
|
|
|
|
oidcpy(&ce->oid, &mi->result.oid);
|
|
|
|
memcpy(ce->name, GITATTRIBUTES_FILE, len);
|
|
|
|
add_index_entry(attr_index, ce,
|
|
|
|
ADD_CACHE_OK_TO_ADD | ADD_CACHE_OK_TO_REPLACE);
|
|
|
|
get_stream_filter(attr_index, GITATTRIBUTES_FILE, &ce->oid);
|
|
|
|
} else {
|
|
|
|
int stage, len;
|
|
|
|
struct conflict_info *ci;
|
|
|
|
|
|
|
|
ASSIGN_AND_VERIFY_CI(ci, mi);
|
|
|
|
for (stage = 0; stage < 3; stage++) {
|
|
|
|
unsigned stage_mask = (1 << stage);
|
|
|
|
|
|
|
|
if (!(ci->filemask & stage_mask))
|
|
|
|
continue;
|
|
|
|
len = strlen(GITATTRIBUTES_FILE);
|
|
|
|
ce = make_empty_cache_entry(attr_index, len);
|
|
|
|
ce->ce_mode = create_ce_mode(ci->stages[stage].mode);
|
|
|
|
ce->ce_flags = create_ce_flags(stage);
|
|
|
|
ce->ce_namelen = len;
|
|
|
|
oidcpy(&ce->oid, &ci->stages[stage].oid);
|
|
|
|
memcpy(ce->name, GITATTRIBUTES_FILE, len);
|
|
|
|
add_index_entry(attr_index, ce,
|
|
|
|
ADD_CACHE_OK_TO_ADD | ADD_CACHE_OK_TO_REPLACE);
|
|
|
|
get_stream_filter(attr_index, GITATTRIBUTES_FILE,
|
|
|
|
&ce->oid);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-01-01 02:34:43 +00:00
|
|
|
static int merge_3way(struct merge_options *opt,
|
|
|
|
const char *path,
|
|
|
|
const struct object_id *o,
|
|
|
|
const struct object_id *a,
|
|
|
|
const struct object_id *b,
|
|
|
|
const char *pathnames[3],
|
|
|
|
const int extra_marker_size,
|
|
|
|
mmbuffer_t *result_buf)
|
|
|
|
{
|
2021-01-01 02:34:44 +00:00
|
|
|
mmfile_t orig, src1, src2;
|
|
|
|
struct ll_merge_options ll_opts = {0};
|
|
|
|
char *base, *name1, *name2;
|
2022-02-02 02:37:30 +00:00
|
|
|
enum ll_merge_result merge_status;
|
2021-01-01 02:34:44 +00:00
|
|
|
|
2021-03-20 00:03:46 +00:00
|
|
|
if (!opt->priv->attr_index.initialized)
|
|
|
|
initialize_attr_index(opt);
|
|
|
|
|
2021-01-01 02:34:44 +00:00
|
|
|
ll_opts.renormalize = opt->renormalize;
|
|
|
|
ll_opts.extra_marker_size = extra_marker_size;
|
|
|
|
ll_opts.xdl_opts = opt->xdl_opts;
|
|
|
|
|
|
|
|
if (opt->priv->call_depth) {
|
|
|
|
ll_opts.virtual_ancestor = 1;
|
|
|
|
ll_opts.variant = 0;
|
|
|
|
} else {
|
|
|
|
switch (opt->recursive_variant) {
|
|
|
|
case MERGE_VARIANT_OURS:
|
|
|
|
ll_opts.variant = XDL_MERGE_FAVOR_OURS;
|
|
|
|
break;
|
|
|
|
case MERGE_VARIANT_THEIRS:
|
|
|
|
ll_opts.variant = XDL_MERGE_FAVOR_THEIRS;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
ll_opts.variant = 0;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
assert(pathnames[0] && pathnames[1] && pathnames[2] && opt->ancestor);
|
|
|
|
if (pathnames[0] == pathnames[1] && pathnames[1] == pathnames[2]) {
|
|
|
|
base = mkpathdup("%s", opt->ancestor);
|
|
|
|
name1 = mkpathdup("%s", opt->branch1);
|
|
|
|
name2 = mkpathdup("%s", opt->branch2);
|
|
|
|
} else {
|
|
|
|
base = mkpathdup("%s:%s", opt->ancestor, pathnames[0]);
|
|
|
|
name1 = mkpathdup("%s:%s", opt->branch1, pathnames[1]);
|
|
|
|
name2 = mkpathdup("%s:%s", opt->branch2, pathnames[2]);
|
|
|
|
}
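/*
 * Added illustration (hypothetical values): with opt->ancestor "base",
 * opt->branch1 "HEAD", opt->branch2 "topic", and a rename on the second
 * side, the conflict-marker labels become "base:old/file.c",
 * "HEAD:old/file.c", and "topic:new/file.c", making clear which path
 * each version came from.
 */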
|
|
|
|
|
|
|
|
read_mmblob(&orig, o);
|
|
|
|
read_mmblob(&src1, a);
|
|
|
|
read_mmblob(&src2, b);
|
|
|
|
|
|
|
|
merge_status = ll_merge(result_buf, path, &orig, base,
|
|
|
|
&src1, name1, &src2, name2,
|
2021-03-20 00:03:46 +00:00
|
|
|
&opt->priv->attr_index, &ll_opts);
|
2022-02-02 02:37:30 +00:00
|
|
|
if (merge_status == LL_MERGE_BINARY_CONFLICT)
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_BINARY, 0,
|
|
|
|
path, NULL, NULL, NULL,
|
2022-02-02 02:37:31 +00:00
|
|
|
"warning: Cannot merge binary files: %s (%s vs. %s)",
|
|
|
|
path, name1, name2);
|
2021-01-01 02:34:44 +00:00
|
|
|
|
|
|
|
free(base);
|
|
|
|
free(name1);
|
|
|
|
free(name2);
|
|
|
|
free(orig.ptr);
|
|
|
|
free(src1.ptr);
|
|
|
|
free(src2.ptr);
|
|
|
|
return merge_status;
|
2021-01-01 02:34:43 +00:00
|
|
|
}
|
|
|
|
|
2020-12-03 15:59:45 +00:00
|
|
|
static int handle_content_merge(struct merge_options *opt,
|
|
|
|
const char *path,
|
|
|
|
const struct version_info *o,
|
|
|
|
const struct version_info *a,
|
|
|
|
const struct version_info *b,
|
|
|
|
const char *pathnames[3],
|
|
|
|
const int extra_marker_size,
|
|
|
|
struct version_info *result)
|
|
|
|
{
|
2021-01-01 02:34:42 +00:00
|
|
|
/*
|
2021-01-01 02:34:43 +00:00
|
|
|
* path is the target location where we want to put the file, and
|
|
|
|
* is used to determine any normalization rules in ll_merge.
|
|
|
|
*
|
|
|
|
* The normal case is that path and all entries in pathnames are
|
|
|
|
* identical, though renames can affect which path we got one of
|
|
|
|
* the three blobs to merge on various sides of history.
|
|
|
|
*
|
|
|
|
* extra_marker_size is the amount to extend conflict markers in
|
|
|
|
* ll_merge; this is needed if we have content merges of content
|
|
|
|
* merges, which happens for example with rename/rename(2to1) and
|
|
|
|
* rename/add conflicts.
|
|
|
|
*/
|
|
|
|
unsigned clean = 1;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* handle_content_merge() needs both files to be of the same type, i.e.
|
|
|
|
* both files OR both submodules OR both symlinks. Conflicting types
|
|
|
|
* need to be handled elsewhere.
|
2021-01-01 02:34:42 +00:00
|
|
|
*/
|
2021-01-01 02:34:43 +00:00
|
|
|
assert((S_IFMT & a->mode) == (S_IFMT & b->mode));
|
|
|
|
|
|
|
|
/* Merge modes */
|
|
|
|
if (a->mode == b->mode || a->mode == o->mode)
|
|
|
|
result->mode = b->mode;
|
|
|
|
else {
|
|
|
|
/* must be the 100644/100755 case */
|
|
|
|
assert(S_ISREG(a->mode));
|
|
|
|
result->mode = a->mode;
|
|
|
|
clean = (b->mode == o->mode);
|
|
|
|
/*
|
|
|
|
* FIXME: If opt->priv->call_depth && !clean, then we really
|
|
|
|
* should not make result->mode match either a->mode or
|
|
|
|
* b->mode; that causes t6036 "check conflicting mode for
|
|
|
|
* regular file" to fail. It would be best to use some other
|
|
|
|
* mode, but we'll confuse all kinds of stuff if we use one
|
|
|
|
* where S_ISREG(result->mode) isn't true, and if we use
|
|
|
|
* something like 0100666, then tree-walk.c's calls to
|
|
|
|
* canon_mode() will just normalize that to 100644 for us and
|
|
|
|
* thus not solve anything.
|
|
|
|
*
|
|
|
|
* Figure out if there's some kind of way we can work around
|
|
|
|
* this...
|
|
|
|
*/
|
|
|
|
}
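/*
 * Added illustration: with o=100644, a=100755, b=100644 only side 'a'
 * changed the mode, so the else branch above sets result->mode to
 * 100755 and clean stays 1 because b->mode still matches o->mode.
 * Conflicting file types (e.g. regular file vs. symlink) never reach
 * this point; see the S_IFMT assertion earlier in this function.
 */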
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Trivial oid merge.
|
|
|
|
*
|
|
|
|
* Note: While one might assume that the next four lines would
|
|
|
|
* be unnecessary due to the fact that match_mask is often
|
|
|
|
* set up and already handled, renames don't always take care
|
|
|
|
* of that.
|
|
|
|
*/
|
|
|
|
if (oideq(&a->oid, &b->oid) || oideq(&a->oid, &o->oid))
|
|
|
|
oidcpy(&result->oid, &b->oid);
|
|
|
|
else if (oideq(&b->oid, &o->oid))
|
|
|
|
oidcpy(&result->oid, &a->oid);
|
|
|
|
|
|
|
|
/* Remaining rules depend on file vs. submodule vs. symlink. */
|
|
|
|
else if (S_ISREG(a->mode)) {
|
|
|
|
mmbuffer_t result_buf;
|
|
|
|
int ret = 0, merge_status;
|
|
|
|
int two_way;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If 'o' is different type, treat it as null so we do a
|
|
|
|
* two-way merge.
|
|
|
|
*/
|
|
|
|
two_way = ((S_IFMT & o->mode) != (S_IFMT & a->mode));
|
|
|
|
|
|
|
|
merge_status = merge_3way(opt, path,
|
2021-04-26 01:02:56 +00:00
|
|
|
two_way ? null_oid() : &o->oid,
|
2021-01-01 02:34:43 +00:00
|
|
|
&a->oid, &b->oid,
|
|
|
|
pathnames, extra_marker_size,
|
|
|
|
&result_buf);
|
|
|
|
|
|
|
|
if ((merge_status < 0) || !result_buf.ptr)
|
|
|
|
ret = err(opt, _("Failed to execute internal merge"));
|
|
|
|
|
|
|
|
if (!ret &&
|
|
|
|
write_object_file(result_buf.ptr, result_buf.size,
|
2022-02-04 23:48:26 +00:00
|
|
|
OBJ_BLOB, &result->oid))
|
2021-01-01 02:34:43 +00:00
|
|
|
ret = err(opt, _("Unable to add %s to database"),
|
|
|
|
path);
|
|
|
|
|
|
|
|
free(result_buf.ptr);
|
|
|
|
if (ret)
|
|
|
|
return -1;
|
|
|
|
clean &= (merge_status == 0);
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, INFO_AUTO_MERGING, 1, path, NULL, NULL, NULL,
|
|
|
|
_("Auto-merging %s"), path);
|
2021-01-01 02:34:43 +00:00
|
|
|
} else if (S_ISGITLINK(a->mode)) {
|
|
|
|
int two_way = ((S_IFMT & o->mode) != (S_IFMT & a->mode));
|
|
|
|
clean = merge_submodule(opt, pathnames[0],
|
2021-04-26 01:02:56 +00:00
|
|
|
two_way ? null_oid() : &o->oid,
|
2021-01-01 02:34:43 +00:00
|
|
|
&a->oid, &b->oid, &result->oid);
|
|
|
|
if (opt->priv->call_depth && two_way && !clean) {
|
|
|
|
result->mode = o->mode;
|
|
|
|
oidcpy(&result->oid, &o->oid);
|
|
|
|
}
|
|
|
|
} else if (S_ISLNK(a->mode)) {
|
|
|
|
if (opt->priv->call_depth) {
|
|
|
|
clean = 0;
|
|
|
|
result->mode = o->mode;
|
|
|
|
oidcpy(&result->oid, &o->oid);
|
|
|
|
} else {
|
|
|
|
switch (opt->recursive_variant) {
|
|
|
|
case MERGE_VARIANT_NORMAL:
|
|
|
|
clean = 0;
|
|
|
|
oidcpy(&result->oid, &a->oid);
|
|
|
|
break;
|
|
|
|
case MERGE_VARIANT_OURS:
|
|
|
|
oidcpy(&result->oid, &a->oid);
|
|
|
|
break;
|
|
|
|
case MERGE_VARIANT_THEIRS:
|
|
|
|
oidcpy(&result->oid, &b->oid);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
} else
|
|
|
|
BUG("unsupported object type in the tree: %06o for %s",
|
|
|
|
a->mode, path);
|
|
|
|
|
2021-01-01 02:34:42 +00:00
|
|
|
return clean;
|
2020-12-03 15:59:45 +00:00
|
|
|
}
|
|
|
|
|
2020-12-03 15:59:44 +00:00
|
|
|
/*** Function Grouping: functions related to detect_and_process_renames(), ***
|
|
|
|
*** which are split into directory and regular rename detection sections. ***/
|
|
|
|
|
|
|
|
/*** Function Grouping: functions related to directory rename detection ***/
|
|
|
|
|
2021-01-19 19:53:45 +00:00
|
|
|
struct collision_info {
|
|
|
|
struct string_list source_files;
|
|
|
|
unsigned reported_already:1;
|
|
|
|
};
|
|
|
|
|
2021-01-19 19:53:46 +00:00
|
|
|
/*
|
|
|
|
* Return a new string that replaces the beginning portion (which matches
|
|
|
|
* rename_info->key), with rename_info->util.new_dir. In perl-speak:
|
|
|
|
* new_path_name = (old_path =~ s/rename_info->key/rename_info->value/);
|
|
|
|
* NOTE:
|
|
|
|
* Caller must ensure that old_path starts with rename_info->key + '/'.
|
|
|
|
*/
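/*
 * Added example (hypothetical names): with rename_info->key "drivers"
 * and rename_info->value "pilots", old_path "drivers/hwmon/foo.c"
 * becomes "pilots/hwmon/foo.c"; if new_dir is "" (a rename into the
 * repository root), "drivers/hwmon/foo.c" would become "hwmon/foo.c".
 */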
|
|
|
|
static char *apply_dir_rename(struct strmap_entry *rename_info,
|
|
|
|
const char *old_path)
|
|
|
|
{
|
2021-01-19 19:53:47 +00:00
|
|
|
struct strbuf new_path = STRBUF_INIT;
|
|
|
|
const char *old_dir = rename_info->key;
|
|
|
|
const char *new_dir = rename_info->value;
|
|
|
|
int oldlen, newlen, new_dir_len;
|
|
|
|
|
|
|
|
oldlen = strlen(old_dir);
|
|
|
|
if (*new_dir == '\0')
|
|
|
|
/*
|
|
|
|
* If someone renamed/merged a subdirectory into the root
|
|
|
|
* directory (e.g. 'some/subdir' -> ''), then we want to
|
|
|
|
* avoid returning
|
|
|
|
* '' + '/filename'
|
|
|
|
* as the rename; we need to make old_path + oldlen advance
|
|
|
|
* past the '/' character.
|
|
|
|
*/
|
|
|
|
oldlen++;
|
|
|
|
new_dir_len = strlen(new_dir);
|
|
|
|
newlen = new_dir_len + (strlen(old_path) - oldlen) + 1;
|
|
|
|
strbuf_grow(&new_path, newlen);
|
|
|
|
strbuf_add(&new_path, new_dir, new_dir_len);
|
|
|
|
strbuf_addstr(&new_path, &old_path[oldlen]);
|
|
|
|
|
|
|
|
return strbuf_detach(&new_path, NULL);
|
2021-01-19 19:53:46 +00:00
|
|
|
}
|
|
|
|
|
2021-01-19 19:53:49 +00:00
|
|
|
static int path_in_way(struct strmap *paths, const char *path, unsigned side_mask)
|
|
|
|
{
|
|
|
|
struct merged_info *mi = strmap_get(paths, path);
|
|
|
|
struct conflict_info *ci;
|
|
|
|
if (!mi)
|
|
|
|
return 0;
|
|
|
|
INITIALIZE_CI(ci, mi);
|
|
|
|
return mi->clean || (side_mask & (ci->filemask | ci->dirmask));
|
|
|
|
}
|
|
|
|
|
2021-01-19 19:53:48 +00:00
|
|
|
/*
|
|
|
|
* See if there is a directory rename for path, and if there are any file
|
|
|
|
* level conflicts on the given side for the renamed location. If there is
|
|
|
|
* a rename and there are no conflicts, return the new name. Otherwise,
|
|
|
|
* return NULL.
|
|
|
|
*/
|
|
|
|
static char *handle_path_level_conflicts(struct merge_options *opt,
|
|
|
|
const char *path,
|
|
|
|
unsigned side_index,
|
|
|
|
struct strmap_entry *rename_info,
|
|
|
|
struct strmap *collisions)
|
|
|
|
{
|
2021-01-19 19:53:49 +00:00
|
|
|
char *new_path = NULL;
|
|
|
|
struct collision_info *c_info;
|
|
|
|
int clean = 1;
|
|
|
|
struct strbuf collision_paths = STRBUF_INIT;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* entry has the mapping of old directory name to new directory name
|
|
|
|
* that we want to apply to path.
|
|
|
|
*/
|
|
|
|
new_path = apply_dir_rename(rename_info, path);
|
|
|
|
if (!new_path)
|
|
|
|
BUG("Failed to apply directory rename!");
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The caller needs to have ensured that it has pre-populated
|
|
|
|
* collisions with all paths that map to new_path. Do a quick check
|
|
|
|
* to ensure that's the case.
|
|
|
|
*/
|
|
|
|
c_info = strmap_get(collisions, new_path);
|
2022-05-02 16:50:37 +00:00
|
|
|
if (!c_info)
|
2021-01-19 19:53:49 +00:00
|
|
|
BUG("c_info is NULL");
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Check for one-sided add/add/.../add conflicts, i.e.
|
|
|
|
* where implicit renames from the other side doing
|
|
|
|
* directory rename(s) can affect this side of history
|
|
|
|
* to put multiple paths into the same location. Warn
|
|
|
|
* and bail on directory renames for such paths.
|
|
|
|
*/
|
|
|
|
if (c_info->reported_already) {
|
|
|
|
clean = 0;
|
|
|
|
} else if (path_in_way(&opt->priv->paths, new_path, 1 << side_index)) {
|
|
|
|
c_info->reported_already = 1;
|
|
|
|
strbuf_add_separated_string_list(&collision_paths, ", ",
|
|
|
|
&c_info->source_files);
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_DIR_RENAME_FILE_IN_WAY, 0,
|
|
|
|
new_path, NULL, NULL, &c_info->source_files,
|
|
|
|
_("CONFLICT (implicit dir rename): Existing "
|
|
|
|
"file/dir at %s in the way of implicit "
|
|
|
|
"directory rename(s) putting the following "
|
|
|
|
"path(s) there: %s."),
|
|
|
|
new_path, collision_paths.buf);
|
2021-01-19 19:53:49 +00:00
|
|
|
clean = 0;
|
|
|
|
} else if (c_info->source_files.nr > 1) {
|
|
|
|
c_info->reported_already = 1;
|
|
|
|
strbuf_add_separated_string_list(&collision_paths, ", ",
|
|
|
|
&c_info->source_files);
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_DIR_RENAME_COLLISION, 0,
|
|
|
|
new_path, NULL, NULL, &c_info->source_files,
|
|
|
|
_("CONFLICT (implicit dir rename): Cannot map "
|
|
|
|
"more than one path to %s; implicit directory "
|
|
|
|
"renames tried to put these paths there: %s"),
|
|
|
|
new_path, collision_paths.buf);
|
2021-01-19 19:53:49 +00:00
|
|
|
clean = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Free memory we no longer need */
|
|
|
|
strbuf_release(&collision_paths);
|
|
|
|
if (!clean && new_path) {
|
|
|
|
free(new_path);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
return new_path;
|
2021-01-19 19:53:48 +00:00
|
|
|
}
|
|
|
|
|
2021-01-19 19:53:40 +00:00
|
|
|
static void get_provisional_directory_renames(struct merge_options *opt,
|
|
|
|
unsigned side,
|
|
|
|
int *clean)
|
|
|
|
{
|
2021-01-19 19:53:41 +00:00
|
|
|
struct hashmap_iter iter;
|
|
|
|
struct strmap_entry *entry;
|
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Collapse
|
|
|
|
* dir_rename_count: old_directory -> {new_directory -> count}
|
|
|
|
* down to
|
|
|
|
* dir_renames: old_directory -> best_new_directory
|
|
|
|
* where best_new_directory is the one with the unique highest count.
|
|
|
|
*/
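/*
 * Added example (illustrative counts): if 20 renames map 'src' to 'lib'
 * and 5 map 'src' to 'tools', then best is 'lib' with max == 20; if the
 * two destinations tie instead, bad_max == max and the rename of 'src'
 * is reported below as a split with no clear destination.
 */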
|
|
|
|
strmap_for_each_entry(&renames->dir_rename_count[side], &iter, entry) {
|
|
|
|
const char *source_dir = entry->key;
|
|
|
|
struct strintmap *counts = entry->value;
|
|
|
|
struct hashmap_iter count_iter;
|
|
|
|
struct strmap_entry *count_entry;
|
|
|
|
int max = 0;
|
|
|
|
int bad_max = 0;
|
|
|
|
const char *best = NULL;
|
|
|
|
|
|
|
|
strintmap_for_each_entry(counts, &count_iter, count_entry) {
|
|
|
|
const char *target_dir = count_entry->key;
|
|
|
|
intptr_t count = (intptr_t)count_entry->value;
|
|
|
|
|
|
|
|
if (count == max)
|
|
|
|
bad_max = max;
|
|
|
|
else if (count > max) {
|
|
|
|
max = count;
|
|
|
|
best = target_dir;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-03-13 22:22:06 +00:00
|
|
|
if (max == 0)
|
|
|
|
continue;
|
|
|
|
|
2021-01-19 19:53:41 +00:00
|
|
|
if (bad_max == max) {
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_DIR_RENAME_SPLIT, 0,
|
|
|
|
source_dir, NULL, NULL, NULL,
|
|
|
|
_("CONFLICT (directory rename split): "
|
|
|
|
"Unclear where to rename %s to; it was "
|
|
|
|
"renamed to multiple other directories, "
|
|
|
|
"with no destination getting a majority of "
|
|
|
|
"the files."),
|
|
|
|
source_dir);
|
2021-03-20 00:03:54 +00:00
|
|
|
*clean = 0;
|
2021-01-19 19:53:41 +00:00
|
|
|
} else {
|
|
|
|
strmap_put(&renames->dir_renames[side],
|
|
|
|
source_dir, (void*)best);
|
|
|
|
}
|
|
|
|
}
|
2021-01-19 19:53:40 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void handle_directory_level_conflicts(struct merge_options *opt)
|
|
|
|
{
|
2021-01-19 19:53:44 +00:00
|
|
|
struct hashmap_iter iter;
|
|
|
|
struct strmap_entry *entry;
|
|
|
|
struct string_list duplicated = STRING_LIST_INIT_NODUP;
|
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
|
|
|
struct strmap *side1_dir_renames = &renames->dir_renames[MERGE_SIDE1];
|
|
|
|
struct strmap *side2_dir_renames = &renames->dir_renames[MERGE_SIDE2];
|
|
|
|
int i;
|
|
|
|
|
|
|
|
strmap_for_each_entry(side1_dir_renames, &iter, entry) {
|
|
|
|
if (strmap_contains(side2_dir_renames, entry->key))
|
|
|
|
string_list_append(&duplicated, entry->key);
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 0; i < duplicated.nr; i++) {
|
|
|
|
strmap_remove(side1_dir_renames, duplicated.items[i].string, 0);
|
|
|
|
strmap_remove(side2_dir_renames, duplicated.items[i].string, 0);
|
|
|
|
}
|
|
|
|
string_list_clear(&duplicated, 0);
|
2021-01-19 19:53:40 +00:00
|
|
|
}
|
|
|
|
|
2021-01-19 19:53:46 +00:00
|
|
|
static struct strmap_entry *check_dir_renamed(const char *path,
|
|
|
|
struct strmap *dir_renames)
|
|
|
|
{
|
2021-01-19 19:53:47 +00:00
|
|
|
char *temp = xstrdup(path);
|
|
|
|
char *end;
|
|
|
|
struct strmap_entry *e = NULL;
|
|
|
|
|
|
|
|
while ((end = strrchr(temp, '/'))) {
|
|
|
|
*end = '\0';
|
|
|
|
e = strmap_get_entry(dir_renames, temp);
|
|
|
|
if (e)
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
free(temp);
|
|
|
|
return e;
|
2021-01-19 19:53:46 +00:00
|
|
|
}
|
|
|
|
|
2021-01-19 19:53:45 +00:00
|
|
|
static void compute_collisions(struct strmap *collisions,
|
|
|
|
struct strmap *dir_renames,
|
|
|
|
struct diff_queue_struct *pairs)
|
|
|
|
{
|
2021-01-19 19:53:46 +00:00
|
|
|
int i;
|
|
|
|
|
|
|
|
strmap_init_with_options(collisions, NULL, 0);
|
|
|
|
if (strmap_empty(dir_renames))
|
|
|
|
return;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Multiple files can be mapped to the same path due to directory
|
|
|
|
* renames done by the other side of history. Since that other
|
|
|
|
* side of history could have merged multiple directories into one,
|
|
|
|
* if our side of history added the same file basename to each of
|
|
|
|
* those directories, then all N of them would get implicitly
|
|
|
|
* renamed by the directory rename detection into the same path,
|
|
|
|
* and we'd get an add/add/.../add conflict, with all those adds
|
|
|
|
* from *this* side of history. This is not representable in the
|
|
|
|
* index, and users aren't going to easily be able to make sense of
|
|
|
|
* it. So we need to provide a good warning about what's
|
|
|
|
* happening, and fall back to no-directory-rename detection
|
|
|
|
* behavior for those paths.
|
|
|
|
*
|
|
|
|
* See testcases 9e and all of section 5 from t6043 for examples.
|
|
|
|
*/
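/*
 * Added example (hypothetical paths): if the other side merged 'docs/a'
 * and 'docs/b' into 'docs', and this side added both 'docs/a/README'
 * and 'docs/b/README', then both additions map to 'docs/README'; the
 * collisions map lists both source files under that single target path
 * so the conflict can be reported later.
 */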
|
|
|
|
for (i = 0; i < pairs->nr; ++i) {
|
|
|
|
struct strmap_entry *rename_info;
|
|
|
|
struct collision_info *collision_info;
|
|
|
|
char *new_path;
|
|
|
|
struct diff_filepair *pair = pairs->queue[i];
|
|
|
|
|
|
|
|
if (pair->status != 'A' && pair->status != 'R')
|
|
|
|
continue;
|
|
|
|
rename_info = check_dir_renamed(pair->two->path, dir_renames);
|
|
|
|
if (!rename_info)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
new_path = apply_dir_rename(rename_info, pair->two->path);
|
|
|
|
assert(new_path);
|
|
|
|
collision_info = strmap_get(collisions, new_path);
|
|
|
|
if (collision_info) {
|
|
|
|
free(new_path);
|
|
|
|
} else {
|
2021-03-13 16:17:22 +00:00
|
|
|
CALLOC_ARRAY(collision_info, 1);
|
2021-07-01 10:51:29 +00:00
|
|
|
string_list_init_nodup(&collision_info->source_files);
|
2021-01-19 19:53:46 +00:00
|
|
|
strmap_put(collisions, new_path, collision_info);
|
|
|
|
}
|
|
|
|
string_list_insert(&collision_info->source_files,
|
|
|
|
pair->two->path);
|
|
|
|
}
|
2021-01-19 19:53:45 +00:00
|
|
|
}
|
|
|
|
|
2022-07-05 01:33:41 +00:00
|
|
|
static void free_collisions(struct strmap *collisions)
|
|
|
|
{
|
|
|
|
struct hashmap_iter iter;
|
|
|
|
struct strmap_entry *entry;
|
|
|
|
|
|
|
|
/* Free each value in the collisions map */
|
|
|
|
strmap_for_each_entry(collisions, &iter, entry) {
|
|
|
|
struct collision_info *info = entry->value;
|
|
|
|
string_list_clear(&info->source_files, 0);
|
|
|
|
}
|
|
|
|
/*
|
|
|
|
* In compute_collisions(), we set collisions.strdup_strings to 0
|
|
|
|
* so that we wouldn't have to make another copy of the new_path
|
|
|
|
* allocated by apply_dir_rename(). But now that we've used them
|
|
|
|
* and have no other references to these strings, it is time to
|
|
|
|
* deallocate them.
|
|
|
|
*/
|
|
|
|
free_strmap_strings(collisions);
|
|
|
|
strmap_clear(collisions, 1);
|
|
|
|
}
|
|
|
|
|
2021-01-19 19:53:45 +00:00
|
|
|
static char *check_for_directory_rename(struct merge_options *opt,
|
|
|
|
const char *path,
|
|
|
|
unsigned side_index,
|
|
|
|
struct strmap *dir_renames,
|
|
|
|
struct strmap *dir_rename_exclusions,
|
|
|
|
struct strmap *collisions,
|
|
|
|
int *clean_merge)
|
|
|
|
{
|
2022-07-05 01:33:40 +00:00
|
|
|
char *new_path;
|
2021-01-19 19:53:48 +00:00
|
|
|
struct strmap_entry *rename_info;
|
2022-07-05 01:33:40 +00:00
|
|
|
struct strmap_entry *otherinfo;
|
2021-01-19 19:53:48 +00:00
|
|
|
const char *new_dir;
|
merge-ort: fix issue with dual rename and add/add conflict
There is code in both merge-recursive and merge-ort for avoiding doubly
transitive renames (i.e. one side renames directory A/ -> B/, and the
other side renames directory B/ -> C/), because this combination would
otherwise make a mess for new files added to A/ on the first side and
wondering which directory they end up in -- especially if there were
even more renames such as the first side renaming C/ -> D/. In such
cases, it just turns "off" directory rename detection for the higher
order transitive cases.
The testcases added in t6423 a couple commits ago are slightly different
but similar in principle. They involve a similar case of paired
renaming but instead of A/ -> B/ and B/ -> C/, the second side renames
a leading directory of B/ to C/. And both sides add a new file
somewhere under the directory that the other side will rename. While
the new files added start within different directories and thus could
logically end up within different directories, it is weird for a file
on one side to end up where the other one started and not move along
with it. So, let's just turn off directory rename detection in this
case as well.
Another way to look at this is that if the source name involved in a
directory rename on one side is the target name of a directory rename
operation for a file from the other side, then we avoid the doubly
transitive rename. (More concretely, if a directory rename on side D
wants to rename a file on side E from OLD_NAME -> NEW_NAME, and side D
already had a file named NEW_NAME, and a directory rename on side E
wants to rename side D's NEW_NAME -> NEWER_NAME, then we turn off the
directory rename detection for NEW_NAME to prevent the
NEW_NAME -> NEWER_NAME rename, and instead end up with an add/add
conflict on NEW_NAME.)
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-07-05 01:33:43 +00:00
|
|
|
int other_side = 3 - side_index;
|
2021-01-19 19:53:48 +00:00
|
|
|
|
2022-07-05 01:33:43 +00:00
|
|
|
/*
|
|
|
|
* Cases where we don't have or don't want a directory rename for
|
|
|
|
* this path.
|
|
|
|
*/
|
2021-01-19 19:53:48 +00:00
|
|
|
if (strmap_empty(dir_renames))
|
2022-07-05 01:33:40 +00:00
|
|
|
return NULL;
|
2022-07-05 01:33:43 +00:00
|
|
|
if (strmap_get(&collisions[other_side], path))
|
|
|
|
return NULL;
|
2021-01-19 19:53:48 +00:00
|
|
|
rename_info = check_dir_renamed(path, dir_renames);
|
|
|
|
if (!rename_info)
|
2022-07-05 01:33:40 +00:00
|
|
|
return NULL;
|
2021-01-19 19:53:48 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* This next part is a little weird. We do not want to do an
|
|
|
|
* implicit rename into a directory we renamed on our side, because
|
|
|
|
* that will result in a spurious rename/rename(1to2) conflict. An
|
|
|
|
* example:
|
|
|
|
* Base commit: dumbdir/afile, otherdir/bfile
|
|
|
|
* Side 1: smrtdir/afile, otherdir/bfile
|
|
|
|
* Side 2: dumbdir/afile, dumbdir/bfile
|
|
|
|
* Here, while working on Side 1, we could notice that otherdir was
|
|
|
|
* renamed/merged to dumbdir, and change the diff_filepair for
|
|
|
|
* otherdir/bfile into a rename into dumbdir/bfile. However, Side
|
|
|
|
* 2 will notice the rename from dumbdir to smrtdir, and do the
|
|
|
|
* transitive rename to move it from dumbdir/bfile to
|
|
|
|
* smrtdir/bfile. That gives us bfile in dumbdir vs being in
|
|
|
|
* smrtdir, a rename/rename(1to2) conflict. We really just want
|
|
|
|
* the file to end up in smrtdir. And the way to achieve that is
|
|
|
|
* to not let Side1 do the rename to dumbdir, since we know that is
|
|
|
|
* the source of one of our directory renames.
|
|
|
|
*
|
|
|
|
* That's why otherinfo and dir_rename_exclusions is here.
|
|
|
|
*
|
|
|
|
* As it turns out, this also prevents N-way transitive rename
|
|
|
|
* confusion; see testcases 9c and 9d of t6043.
|
|
|
|
*/
|
2022-07-05 01:33:40 +00:00
|
|
|
new_dir = rename_info->value; /* old_dir = rename_info->key; */
|
2021-01-19 19:53:48 +00:00
|
|
|
otherinfo = strmap_get_entry(dir_rename_exclusions, new_dir);
|
|
|
|
if (otherinfo) {
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, INFO_DIR_RENAME_SKIPPED_DUE_TO_RERENAME, 1,
|
|
|
|
rename_info->key, path, new_dir, NULL,
|
2021-01-19 19:53:48 +00:00
|
|
|
_("WARNING: Avoiding applying %s -> %s rename "
|
|
|
|
"to %s, because %s itself was renamed."),
|
|
|
|
rename_info->key, new_dir, path, new_dir);
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
|
|
|
new_path = handle_path_level_conflicts(opt, path, side_index,
|
2022-07-05 01:33:42 +00:00
|
|
|
rename_info,
|
|
|
|
&collisions[side_index]);
|
2021-01-19 19:53:48 +00:00
|
|
|
*clean_merge &= (new_path != NULL);
|
|
|
|
|
|
|
|
return new_path;
|
2021-01-19 19:53:45 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void apply_directory_rename_modifications(struct merge_options *opt,
|
|
|
|
struct diff_filepair *pair,
|
|
|
|
char *new_path)
|
|
|
|
{
|
2021-01-19 19:53:51 +00:00
|
|
|
/*
|
|
|
|
* The basic idea is to get the conflict_info from opt->priv->paths
|
|
|
|
* at old path, and insert it into new_path; basically just this:
|
|
|
|
* ci = strmap_get(&opt->priv->paths, old_path);
|
|
|
|
* strmap_remove(&opt->priv->paths, old_path, 0);
|
|
|
|
* strmap_put(&opt->priv->paths, new_path, ci);
|
|
|
|
* However, there are some factors complicating this:
|
|
|
|
* - opt->priv->paths may already have an entry at new_path
|
|
|
|
* - Each ci tracks its containing directory, so we need to
|
|
|
|
* update that
|
|
|
|
* - If another ci has the same containing directory, then
|
|
|
|
* the two char*'s MUST point to the same location. See the
|
|
|
|
* comment in struct merged_info. strcmp equality is not
|
|
|
|
* enough; we need pointer equality.
|
|
|
|
* - opt->priv->paths must hold the parent directories of any
|
|
|
|
* entries that are added. So, if this directory rename
|
|
|
|
* causes entirely new directories, we must recursively add
|
|
|
|
* parent directories.
|
|
|
|
* - For each parent directory added to opt->priv->paths, we
|
|
|
|
* also need to get its parent directory stored in its
|
|
|
|
* conflict_info->merged.directory_name with all the same
|
|
|
|
* requirements about pointer equality.
|
|
|
|
*/
|
|
|
|
struct string_list dirs_to_insert = STRING_LIST_INIT_NODUP;
|
|
|
|
struct conflict_info *ci, *new_ci;
|
|
|
|
struct strmap_entry *entry;
|
|
|
|
const char *branch_with_new_path, *branch_with_dir_rename;
|
|
|
|
const char *old_path = pair->two->path;
|
|
|
|
const char *parent_name;
|
|
|
|
const char *cur_path;
|
|
|
|
int i, len;
|
|
|
|
|
|
|
|
entry = strmap_get_entry(&opt->priv->paths, old_path);
|
|
|
|
old_path = entry->key;
|
|
|
|
ci = entry->value;
|
|
|
|
VERIFY_CI(ci);
|
|
|
|
|
|
|
|
/* Find parent directories missing from opt->priv->paths */
|
2021-07-31 17:27:38 +00:00
|
|
|
cur_path = mem_pool_strdup(&opt->priv->pool, new_path);
|
|
|
|
free((char*)new_path);
|
|
|
|
new_path = (char *)cur_path;
|
2021-07-30 11:47:40 +00:00
|
|
|
|
2021-01-19 19:53:51 +00:00
|
|
|
while (1) {
|
|
|
|
/* Find the parent directory of cur_path */
|
|
|
|
char *last_slash = strrchr(cur_path, '/');
|
|
|
|
if (last_slash) {
|
2021-07-31 17:27:38 +00:00
|
|
|
parent_name = mem_pool_strndup(&opt->priv->pool,
|
|
|
|
cur_path,
|
|
|
|
last_slash - cur_path);
|
2021-01-19 19:53:51 +00:00
|
|
|
} else {
|
|
|
|
parent_name = opt->priv->toplevel_dir;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Look it up in opt->priv->paths */
|
|
|
|
entry = strmap_get_entry(&opt->priv->paths, parent_name);
|
|
|
|
if (entry) {
|
|
|
|
parent_name = entry->key; /* reuse known pointer */
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Record this is one of the directories we need to insert */
|
|
|
|
string_list_append(&dirs_to_insert, parent_name);
|
|
|
|
cur_path = parent_name;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Traverse dirs_to_insert and insert them into opt->priv->paths */
|
|
|
|
for (i = dirs_to_insert.nr-1; i >= 0; --i) {
|
|
|
|
struct conflict_info *dir_ci;
|
|
|
|
char *cur_dir = dirs_to_insert.items[i].string;
|
|
|
|
|
2021-03-13 16:17:22 +00:00
|
|
|
CALLOC_ARRAY(dir_ci, 1);
|
2021-01-19 19:53:51 +00:00
|
|
|
|
|
|
|
dir_ci->merged.directory_name = parent_name;
|
|
|
|
len = strlen(parent_name);
|
|
|
|
/* len+1 because of trailing '/' character */
|
|
|
|
dir_ci->merged.basename_offset = (len > 0 ? len+1 : len);
|
|
|
|
dir_ci->dirmask = ci->filemask;
|
|
|
|
strmap_put(&opt->priv->paths, cur_dir, dir_ci);
|
|
|
|
|
|
|
|
parent_name = cur_dir;
|
|
|
|
}
|
|
|
|
|
|
|
|
assert(ci->filemask == 2 || ci->filemask == 4);
|
merge-ort: fix bug with dir rename vs change dir to symlink
When changing a directory to a symlink on one side of history, and
renaming the parent of that directory to a different directory name
on the other side, e.g. with this kind of setup:
Base commit: Has a file named dir/subdir/file
Side1: Rename dir/ -> renamed-dir/
Side2: delete dir/subdir/file, add dir/subdir as symlink
Then merge-ort was running into an assertion failure:
git: merge-ort.c:2622: apply_directory_rename_modifications: Assertion `ci->dirmask == 0' failed
merge-recursive did not have as obvious an issue handling this case,
likely because we never fixed it to handle the case from commit
902c521a35 ("t6423: more involved directory rename test", 2020-10-15)
where we need to be careful about nested renames when a directory rename
occurs (dir/ -> renamed-dir/ implies dir/subdir/ ->
renamed-dir/subdir/). However, merge-recursive does have multiple
problems with this testcase:
* Incorrect stages for the file: merge-recursive omits the stage in
the index corresponding to the base stage, making `git status`
report "added by us" for renamed-dir/subdir/file instead of the
expected "deleted by them".
* Poor directory/file conflict handling: For the renamed-dir/subdir
symlink, instead of reporting a file/directory conflict as
expected, it reports "Error: Refusing to lose untracked file at
renamed-dir/subdir". This is a lie because there is no untracked
file at that location. It then does the normal suboptimal
merge-recursive thing of having the symlink be tracked in the index
at a location where it can't be written due to D/F conflicts
(namely, renamed-dir/subdir), but writes it to the working tree at
a different location as a new untracked file (namely,
renamed-dir/subdir~B^0)
Technically, these problems don't prevent the user from resolving the
merge if they can figure out how to ignore the confusion, but because both
pieces of output are quite confusing I don't want to modify the test
to claim that recursive also passes it, even though it doesn't have the bug
that ort did.
So, fix the bug in ort by splitting the conflict_info for "dir/subdir"
into two, one for the directory part, one for the file (i.e. symlink)
part, since the symlink is being renamed by directory rename detection.
The directory part is needed for proper nesting, since there are still
conflict_info fields for files underneath it (though those are marked
as is_null, they are still present until the entries are processed,
and the entry processing wants every non-toplevel entry to have a
parent directory).
Reported-by: Stefano Rivera <stefano@rivera.za.net>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-10-22 19:04:10 +00:00
|
|
|
assert(ci->dirmask == 0 || ci->dirmask == 1);
|
|
|
|
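/*
 * Note (summarizing the commit message above): ci->dirmask == 1 means
 * the base still has a directory at old_path (the dir-rename-vs-symlink
 * case).  Below we split ci in two: the file half is copied into new_ci
 * and relocated to new_path, while the directory half stays behind at
 * old_path until all paths beneath it have been processed.
 */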
if (ci->dirmask == 0)
|
|
|
|
strmap_remove(&opt->priv->paths, old_path, 0);
|
|
|
|
else {
|
|
|
|
/*
|
|
|
|
* This file exists on one side, but we still had a directory
|
|
|
|
* at the old location that we can't remove until after
|
|
|
|
* processing all paths below it. So, make a copy of ci in
|
|
|
|
* new_ci and only put the file information into it.
|
|
|
|
*/
|
|
|
|
new_ci = mem_pool_calloc(&opt->priv->pool, 1, sizeof(*new_ci));
|
|
|
|
memcpy(new_ci, ci, sizeof(*ci));
|
|
|
|
assert(!new_ci->match_mask);
|
|
|
|
new_ci->dirmask = 0;
|
|
|
|
new_ci->stages[1].mode = 0;
|
|
|
|
oidcpy(&new_ci->stages[1].oid, null_oid());
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Now that we have the file information in new_ci, make sure
|
|
|
|
* ci only has the directory information.
|
|
|
|
*/
|
|
|
|
ci->filemask = 0;
|
|
|
|
ci->merged.clean = 1;
|
|
|
|
for (i = MERGE_BASE; i <= MERGE_SIDE2; i++) {
|
|
|
|
if (ci->dirmask & (1 << i))
|
|
|
|
continue;
|
|
|
|
/* zero out any entries related to files */
|
|
|
|
ci->stages[i].mode = 0;
|
|
|
|
oidcpy(&ci->stages[i].oid, null_oid());
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Now we want to focus on new_ci, so reassign ci to it */
|
|
|
|
ci = new_ci;
|
|
|
|
}
|
2021-01-19 19:53:51 +00:00
|
|
|
|
|
|
|
branch_with_new_path = (ci->filemask == 2) ? opt->branch1 : opt->branch2;
|
|
|
|
branch_with_dir_rename = (ci->filemask == 2) ? opt->branch2 : opt->branch1;
|
|
|
|
|
|
|
|
/* Now, finally update ci and stick it into opt->priv->paths */
|
|
|
|
ci->merged.directory_name = parent_name;
|
|
|
|
len = strlen(parent_name);
|
|
|
|
ci->merged.basename_offset = (len > 0 ? len+1 : len);
|
|
|
|
new_ci = strmap_get(&opt->priv->paths, new_path);
|
|
|
|
if (!new_ci) {
|
|
|
|
/* Place ci back into opt->priv->paths, but at new_path */
|
|
|
|
strmap_put(&opt->priv->paths, new_path, ci);
|
|
|
|
} else {
|
|
|
|
int index;
|
|
|
|
|
|
|
|
/* A few sanity checks */
|
|
|
|
VERIFY_CI(new_ci);
|
|
|
|
assert(ci->filemask == 2 || ci->filemask == 4);
|
|
|
|
assert((new_ci->filemask & ci->filemask) == 0);
|
|
|
|
assert(!new_ci->merged.clean);
|
|
|
|
|
|
|
|
/* Copy stuff from ci into new_ci */
|
|
|
|
new_ci->filemask |= ci->filemask;
|
|
|
|
if (new_ci->dirmask)
|
|
|
|
new_ci->df_conflict = 1;
|
|
|
|
index = (ci->filemask >> 1);
|
|
|
|
new_ci->pathnames[index] = ci->pathnames[index];
|
|
|
|
new_ci->stages[index].mode = ci->stages[index].mode;
|
|
|
|
oidcpy(&new_ci->stages[index].oid, &ci->stages[index].oid);
|
|
|
|
|
|
|
|
ci = new_ci;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (opt->detect_directory_renames == MERGE_DIRECTORY_RENAMES_TRUE) {
|
|
|
|
/* Notify user of updated path */
|
|
|
|
if (pair->status == 'A')
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, INFO_DIR_RENAME_APPLIED, 1,
|
|
|
|
new_path, old_path, NULL, NULL,
|
2021-01-19 19:53:51 +00:00
|
|
|
_("Path updated: %s added in %s inside a "
|
|
|
|
"directory that was renamed in %s; moving "
|
|
|
|
"it to %s."),
|
|
|
|
old_path, branch_with_new_path,
|
|
|
|
branch_with_dir_rename, new_path);
|
|
|
|
else
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, INFO_DIR_RENAME_APPLIED, 1,
|
|
|
|
new_path, old_path, NULL, NULL,
|
2021-01-19 19:53:51 +00:00
|
|
|
_("Path updated: %s renamed to %s in %s, "
|
|
|
|
"inside a directory that was renamed in %s; "
|
|
|
|
"moving it to %s."),
|
|
|
|
pair->one->path, old_path, branch_with_new_path,
|
|
|
|
branch_with_dir_rename, new_path);
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* opt->detect_directory_renames has the value
|
|
|
|
* MERGE_DIRECTORY_RENAMES_CONFLICT, so mark these as conflicts.
|
|
|
|
*/
|
|
|
|
ci->path_conflict = 1;
|
|
|
|
if (pair->status == 'A')
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_DIR_RENAME_SUGGESTED, 1,
|
|
|
|
new_path, old_path, NULL, NULL,
|
2021-01-19 19:53:51 +00:00
|
|
|
_("CONFLICT (file location): %s added in %s "
|
|
|
|
"inside a directory that was renamed in %s, "
|
|
|
|
"suggesting it should perhaps be moved to "
|
|
|
|
"%s."),
|
|
|
|
old_path, branch_with_new_path,
|
|
|
|
branch_with_dir_rename, new_path);
|
|
|
|
else
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_DIR_RENAME_SUGGESTED, 1,
|
|
|
|
new_path, old_path, NULL, NULL,
|
2021-01-19 19:53:51 +00:00
|
|
|
_("CONFLICT (file location): %s renamed to %s "
|
|
|
|
"in %s, inside a directory that was renamed "
|
|
|
|
"in %s, suggesting it should perhaps be "
|
|
|
|
"moved to %s."),
|
|
|
|
pair->one->path, old_path, branch_with_new_path,
|
|
|
|
branch_with_dir_rename, new_path);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Finally, record the new location.
|
|
|
|
*/
|
|
|
|
pair->two->path = new_path;
|
2021-01-19 19:53:45 +00:00
|
|
|
}
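The long comment at the top of apply_directory_rename_modifications()
describes its core operation as relocating one entry within
opt->priv->paths.  The following is a minimal sketch only: it assumes
git.git's strmap API as used elsewhere in this file, uses a hypothetical
helper name, and deliberately ignores every complication the real
function handles (pre-existing entries at new_path, the pointer-equality
requirement on directory_name, missing parent directories).

static void relocate_conflict_info(struct merge_options *opt,
				   const char *old_path,
				   const char *new_path)
{
	/* Hypothetical sketch; not part of merge-ort.c */
	struct conflict_info *ci = strmap_get(&opt->priv->paths, old_path);

	strmap_remove(&opt->priv->paths, old_path, 0);
	strmap_put(&opt->priv->paths, new_path, ci);
}

The function above exists precisely because this naive version is not
enough; the bullet points in its opening comment enumerate the extra work.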
|
|
|
|
|
2020-12-03 15:59:44 +00:00
|
|
|
/*** Function Grouping: functions related to regular rename detection ***/
|
|
|
|
|
2020-12-14 16:21:31 +00:00
|
|
|
static int process_renames(struct merge_options *opt,
|
|
|
|
struct diff_queue_struct *renames)
|
|
|
|
{
|
2020-12-14 16:21:34 +00:00
|
|
|
int clean_merge = 1, i;
|
|
|
|
|
|
|
|
for (i = 0; i < renames->nr; ++i) {
|
|
|
|
const char *oldpath = NULL, *newpath;
|
|
|
|
struct diff_filepair *pair = renames->queue[i];
|
|
|
|
struct conflict_info *oldinfo = NULL, *newinfo = NULL;
|
|
|
|
struct strmap_entry *old_ent, *new_ent;
|
|
|
|
unsigned int old_sidemask;
|
|
|
|
int target_index, other_source_index;
|
|
|
|
int source_deleted, collision, type_changed;
|
merge-ort: add implementation of rename/delete conflicts
Implement rename/delete conflicts, i.e. one side renames a file and the
other deletes the file. This code replaces the following from
merge-recursive.c:
* the code relevant to RENAME_DELETE in process_renames()
* the RENAME_DELETE case of process_entry()
* handle_rename_delete()
Also, there is some shared code from merge-recursive.c for multiple
different rename cases which we will no longer need for this case (or
other rename cases):
* handle_change_delete()
* setup_rename_conflict_info()
The consolidation of five separate codepaths into one is made possible
by a change in design: process_renames() tweaks the conflict_info
entries within opt->priv->paths such that process_entry() can then
handle all the non-rename conflict types (directory/file, modify/delete,
etc.) orthogonally. This means we're much less likely to miss special
implementation of some kind of combination of conflict types (see
commits brought in by 66c62eaec6 ("Merge branch 'en/merge-tests'",
2020-11-18), especially commit ef52778708 ("merge tests: expect improved
directory/file conflict handling in ort", 2020-10-26) for more details).
That, together with letting worktree/index updating be handled
orthogonally in the merge_switch_to_result() function, dramatically
simplifies the code for various special rename cases.
To be fair, there is a _slight_ tweak to process_entry() here, because
rename/delete cases will also trigger the modify/delete codepath.
However, we only want a modify/delete message to be printed for a
rename/delete conflict if there is a content change in the renamed file
in addition to the rename. So process_renames() and process_entry()
aren't quite fully orthogonal, but they are pretty close.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-15 18:28:03 +00:00
|
|
|
const char *rename_branch = NULL, *delete_branch = NULL;
|
2020-12-14 16:21:34 +00:00
|
|
|
|
|
|
|
old_ent = strmap_get_entry(&opt->priv->paths, pair->one->path);
|
|
|
|
new_ent = strmap_get_entry(&opt->priv->paths, pair->two->path);
|
2021-01-19 19:53:52 +00:00
|
|
|
if (old_ent) {
|
|
|
|
oldpath = old_ent->key;
|
|
|
|
oldinfo = old_ent->value;
|
|
|
|
}
|
|
|
|
newpath = pair->two->path;
|
|
|
|
if (new_ent) {
|
|
|
|
newpath = new_ent->key;
|
|
|
|
newinfo = new_ent->value;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If pair->one->path isn't in opt->priv->paths, that means
|
|
|
|
* that either directory rename detection removed that
|
|
|
|
* path, or a parent directory of oldpath was resolved and
|
|
|
|
* we don't even need the rename; in either case, we can
|
|
|
|
* skip it. If oldinfo->merged.clean, then the other side
|
|
|
|
* of history had no changes to oldpath and we don't need
|
|
|
|
* the rename and can skip it.
|
|
|
|
*/
|
|
|
|
if (!oldinfo || oldinfo->merged.clean)
|
|
|
|
continue;
|
2020-12-14 16:21:34 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* diff_filepairs have copies of pathnames, thus we have to
|
|
|
|
* use standard 'strcmp()' (negated) instead of '=='.
|
|
|
|
*/
|
|
|
|
if (i + 1 < renames->nr &&
|
|
|
|
!strcmp(oldpath, renames->queue[i+1]->one->path)) {
|
|
|
|
/* Handle rename/rename(1to2) or rename/rename(1to1) */
|
|
|
|
const char *pathnames[3];
|
2020-12-15 18:28:01 +00:00
|
|
|
struct version_info merged;
|
|
|
|
struct conflict_info *base, *side1, *side2;
|
merge-ort: add implementation of both sides renaming differently
Implement rename/rename(1to2) handling, i.e. both sides of history
renaming a file and renaming it differently. This code replaces the
following from merge-recursive.c:
* all the 1to2 code in process_renames()
* the RENAME_ONE_FILE_TO_TWO case of process_entry()
* handle_rename_rename_1to2()
Also, there is some shared code from merge-recursive.c for multiple
different rename cases which we will no longer need for this case (or
other rename cases):
* handle_file_collision()
* setup_rename_conflict_info()
The consolidation of five separate codepaths into one is made possible
by a change in design: process_renames() tweaks the conflict_info
entries within opt->priv->paths such that process_entry() can then
handle all the non-rename conflict types (directory/file, modify/delete,
etc.) orthogonally. This means we're much less likely to miss special
implementation of some kind of combination of conflict types (see
commits brought in by 66c62eaec6 ("Merge branch 'en/merge-tests'",
2020-11-18), especially commit ef52778708 ("merge tests: expect improved
directory/file conflict handling in ort", 2020-10-26) for more details).
That, together with letting worktree/index updating be handled
orthogonally in the merge_switch_to_result() function, dramatically
simplifies the code for various special rename cases.
To be fair, there is a _slight_ tweak to process_entry() here to make
sure that the two different paths aren't marked as clean but are left in
a conflicted state. So process_renames() and process_entry() aren't
quite entirely orthogonal, but they are pretty close.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-15 18:28:02 +00:00
|
|
|
unsigned was_binary_blob = 0;
|
2020-12-14 16:21:34 +00:00
|
|
|
|
|
|
|
pathnames[0] = oldpath;
|
|
|
|
pathnames[1] = newpath;
|
|
|
|
pathnames[2] = renames->queue[i+1]->two->path;
|
|
|
|
|
2020-12-15 18:28:01 +00:00
|
|
|
base = strmap_get(&opt->priv->paths, pathnames[0]);
|
|
|
|
side1 = strmap_get(&opt->priv->paths, pathnames[1]);
|
|
|
|
side2 = strmap_get(&opt->priv->paths, pathnames[2]);
|
|
|
|
|
|
|
|
VERIFY_CI(base);
|
|
|
|
VERIFY_CI(side1);
|
|
|
|
VERIFY_CI(side2);
|
|
|
|
|
2020-12-14 16:21:34 +00:00
|
|
|
if (!strcmp(pathnames[1], pathnames[2])) {
|
2021-05-20 06:09:40 +00:00
|
|
|
struct rename_info *ri = &opt->priv->renames;
|
|
|
|
int j;
|
|
|
|
|
2020-12-15 18:28:01 +00:00
|
|
|
/* Both sides renamed the same way */
|
|
|
|
assert(side1 == side2);
|
|
|
|
memcpy(&side1->stages[0], &base->stages[0],
|
|
|
|
sizeof(merged));
|
|
|
|
side1->filemask |= (1 << MERGE_BASE);
|
|
|
|
/* Mark base as resolved by removal */
|
|
|
|
base->merged.is_null = 1;
|
|
|
|
base->merged.clean = 1;
|
2020-12-14 16:21:34 +00:00
|
|
|
|
2021-05-20 06:09:40 +00:00
|
|
|
/*
|
|
|
|
* Disable remembering renames optimization;
|
|
|
|
* rename/rename(1to1) is incredibly rare, and
|
|
|
|
* just disabling the optimization is easier
|
|
|
|
* than purging cached_pairs,
|
|
|
|
* cached_target_names, and dir_rename_counts.
|
|
|
|
*/
|
|
|
|
for (j = 0; j < 3; j++)
|
|
|
|
ri->merge_trees[j] = NULL;
|
|
|
|
|
2020-12-14 16:21:34 +00:00
|
|
|
/* We handled both renames, i.e. i+1 handled */
|
|
|
|
i++;
|
|
|
|
/* Move to next rename */
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* This is a rename/rename(1to2) */
|
merge-ort: add implementation of both sides renaming differently
2020-12-15 18:28:02 +00:00
|
|
|
clean_merge = handle_content_merge(opt,
|
|
|
|
pair->one->path,
|
|
|
|
&base->stages[0],
|
|
|
|
&side1->stages[1],
|
|
|
|
&side2->stages[2],
|
|
|
|
pathnames,
|
|
|
|
1 + 2 * opt->priv->call_depth,
|
|
|
|
&merged);
|
merge-ort: return early when failing to write a blob
In the previous commit, we fixed a segmentation fault when a tree object
could not be written.
However, before the tree object is written, `merge-ort` wants to write
out a blob object (except in cases where the merge results in a blob
that already exists in the database). And this can fail, too, but we
ignore that write failure so far.
Let's pay close attention and error out early if the blob could not be
written. This reduces the error output of t4301.25 ("merge-ort fails
gracefully in a read-only repository") from:
error: insufficient permission for adding an object to repository database ./objects
error: error: Unable to add numbers to database
error: insufficient permission for adding an object to repository database ./objects
error: error: Unable to add greeting to database
error: insufficient permission for adding an object to repository database ./objects
fatal: failure to merge
to:
error: insufficient permission for adding an object to repository database ./objects
error: error: Unable to add numbers to database
fatal: failure to merge
This is _not_ just a cosmetic change: Even though one might assume that
the operation would have failed anyway at the point when the new tree
object is written (and the corresponding tree object _will_ be new if it
contains a blob that is new), that is not so: as pointed out by
Elijah Newren, when Git has previously been allowed to add loose objects
via `sudo` calls, it is very possible that the blob object cannot be
written (because the corresponding `.git/objects/??/` directory may be
owned by `root`) but the tree object can be written (because the
corresponding objects directory is owned by the current user). This
would result in a corrupt repository because it is missing the blob
object, and with this patch we prevent that.
Note: This patch adjusts two variable declarations from `unsigned` to
`int` because their purpose is to hold the return value of
`handle_content_merge()`, which is of type `int`. The existing users of
those variables are only interested in whether that variable is zero or
non-zero, therefore this type change does not affect the existing code.
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-09-28 07:29:22 +00:00
|
|
|
if (clean_merge < 0)
|
|
|
|
return -1;
|
merge-ort: add implementation of both sides renaming differently
2020-12-15 18:28:02 +00:00
|
|
|
if (!clean_merge &&
|
|
|
|
merged.mode == side1->stages[1].mode &&
|
|
|
|
oideq(&merged.oid, &side1->stages[1].oid))
|
|
|
|
was_binary_blob = 1;
|
|
|
|
memcpy(&side1->stages[1], &merged, sizeof(merged));
|
|
|
|
if (was_binary_blob) {
|
|
|
|
/*
|
|
|
|
* Getting here means we were attempting to
|
|
|
|
* merge a binary blob.
|
|
|
|
*
|
|
|
|
* Since we can't merge binaries,
|
|
|
|
* handle_content_merge() just takes one
|
|
|
|
* side. But we don't want to copy the
|
|
|
|
* contents of one side to both paths. We
|
|
|
|
* used the contents of side1 above for
|
|
|
|
* side1->stages, let's use the contents of
|
|
|
|
* side2 for side2->stages below.
|
|
|
|
*/
|
|
|
|
oidcpy(&merged.oid, &side2->stages[2].oid);
|
|
|
|
merged.mode = side2->stages[2].mode;
|
|
|
|
}
|
|
|
|
memcpy(&side2->stages[2], &merged, sizeof(merged));
|
|
|
|
|
|
|
|
side1->path_conflict = 1;
|
|
|
|
side2->path_conflict = 1;
|
|
|
|
/*
|
|
|
|
* TODO: For renames we normally remove the path at the
|
|
|
|
* old name. It would thus seem consistent to do the
|
|
|
|
* same for rename/rename(1to2) cases, but we haven't
|
|
|
|
* done so traditionally and a number of the regression
|
|
|
|
* tests now encode an expectation that the file is
|
|
|
|
* left there at stage 1. If we ever decide to change
|
|
|
|
* this, add the following two lines here:
|
|
|
|
* base->merged.is_null = 1;
|
|
|
|
* base->merged.clean = 1;
|
|
|
|
* and remove the setting of base->path_conflict to 1.
|
|
|
|
*/
|
|
|
|
base->path_conflict = 1;
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_RENAME_RENAME, 0,
|
|
|
|
pathnames[0], pathnames[1], pathnames[2], NULL,
|
merge-ort: add implementation of both sides renaming differently
2020-12-15 18:28:02 +00:00
|
|
|
_("CONFLICT (rename/rename): %s renamed to "
|
|
|
|
"%s in %s and to %s in %s."),
|
|
|
|
pathnames[0],
|
|
|
|
pathnames[1], opt->branch1,
|
|
|
|
pathnames[2], opt->branch2);
|
2020-12-14 16:21:34 +00:00
|
|
|
|
|
|
|
i++; /* We handled both renames, i.e. i+1 handled */
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
VERIFY_CI(oldinfo);
|
|
|
|
VERIFY_CI(newinfo);
|
|
|
|
target_index = pair->score; /* from collect_renames() */
|
|
|
|
assert(target_index == 1 || target_index == 2);
|
|
|
|
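/*
 * Note: filemask/dirmask bits follow the 1 = base, 2 = side1,
 * 4 = side2 convention used throughout this file.  The flags computed
 * below classify the rename: source_deleted means the non-renaming
 * side deleted the file at oldpath, collision means that side already
 * has an entry at newpath, and type_changed means the entry switched
 * between a regular file and a non-regular one (e.g. a symlink)
 * across the rename.
 */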
other_source_index = 3 - target_index;
|
|
|
|
old_sidemask = (1 << other_source_index); /* 2 or 4 */
|
|
|
|
source_deleted = (oldinfo->filemask == 1);
|
|
|
|
collision = ((newinfo->filemask & old_sidemask) != 0);
|
|
|
|
type_changed = !source_deleted &&
|
|
|
|
(S_ISREG(oldinfo->stages[other_source_index].mode) !=
|
|
|
|
S_ISREG(newinfo->stages[target_index].mode));
|
|
|
|
if (type_changed && collision) {
|
2020-12-15 18:28:06 +00:00
|
|
|
/*
|
|
|
|
* special handling so later blocks can handle this...
|
|
|
|
*
|
|
|
|
* if type_changed && collision are both true, then this
|
|
|
|
* was really a double rename, but one side wasn't
|
|
|
|
* detected due to lack of break detection. I.e.
|
|
|
|
* something like
|
|
|
|
* orig: has normal file 'foo'
|
|
|
|
* side1: renames 'foo' to 'bar', adds 'foo' symlink
|
|
|
|
* side2: renames 'foo' to 'bar'
|
|
|
|
* In this case, the foo->bar rename on side1 won't be
|
|
|
|
* detected because the new symlink named 'foo' is
|
|
|
|
* there and we don't do break detection. But we detect
|
|
|
|
* this here because we don't want to merge the content
|
|
|
|
* of the foo symlink with the foo->bar file, so we
|
|
|
|
* have some logic to handle this special case. The
|
|
|
|
* easiest way to do that is make 'bar' on side1 not
|
|
|
|
* be considered a colliding file but the other part
|
|
|
|
* of a normal rename. If the file is very different,
|
|
|
|
* well we're going to get content merge conflicts
|
|
|
|
* anyway so it doesn't hurt. And if the colliding
|
|
|
|
* file also has a different type, that'll be handled
|
|
|
|
* by the content merge logic in process_entry() too.
|
|
|
|
*
|
|
|
|
* See also t6430, 'rename vs. rename/symlink'
|
|
|
|
*/
|
|
|
|
collision = 0;
|
2020-12-14 16:21:34 +00:00
|
|
|
}
|
merge-ort: add implementation of rename/delete conflicts
2020-12-15 18:28:03 +00:00
|
|
|
if (source_deleted) {
|
|
|
|
if (target_index == 1) {
|
|
|
|
rename_branch = opt->branch1;
|
|
|
|
delete_branch = opt->branch2;
|
|
|
|
} else {
|
|
|
|
rename_branch = opt->branch2;
|
|
|
|
delete_branch = opt->branch1;
|
|
|
|
}
|
|
|
|
}
|
2020-12-14 16:21:34 +00:00
|
|
|
|
|
|
|
assert(source_deleted || oldinfo->filemask & old_sidemask);
|
|
|
|
|
|
|
|
/* Need to check for special types of rename conflicts... */
|
|
|
|
if (collision && !source_deleted) {
|
|
|
|
/* collision: rename/add or rename/rename(2to1) */
|
merge-ort: add implementation of rename collisions
Implement rename/rename(2to1) and rename/add handling, i.e. a file is
renamed into a location where another file is added (with that other
file either being a plain add or itself coming from a rename). Note
that rename collisions can also have a special case stacked on top: the
file being renamed on one side of history is deleted on the other
(yielding either a rename/add/delete conflict or perhaps a
rename/rename(2to1)/delete[/delete]) conflict.
One thing to note here is that when there is a double rename, the code
in question only handles one of them at a time; a later iteration
through the loop will handle the other. After they've both been
handled, process_entry()'s normal add/add code can handle the collision.
This code replaces the following from merge-recursive.c:
* all the 2to1 code in process_renames()
* the RENAME_TWO_FILES_TO_ONE case of process_entry()
* handle_rename_rename_2to1()
* handle_rename_add()
Also, there is some shared code from merge-recursive.c for multiple
different rename cases which we will no longer need for this case (or
other rename cases):
* handle_file_collision()
* setup_rename_conflict_info()
The consolidation of six separate codepaths into one is made possible
by a change in design: process_renames() tweaks the conflict_info
entries within opt->priv->paths such that process_entry() can then
handle all the non-rename conflict types (directory/file, modify/delete,
etc.) orthogonally. This means we're much less likely to miss special
implementation of some kind of combination of conflict types (see
commits brought in by 66c62eaec6 ("Merge branch 'en/merge-tests'",
2020-11-18), especially commit ef52778708 ("merge tests: expect improved
directory/file conflict handling in ort", 2020-10-26) for more details).
That, together with letting worktree/index updating be handled
orthogonally in the merge_switch_to_result() function, dramatically
simplifies the code for various special rename cases.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-15 18:28:04 +00:00
|
|
|
const char *pathnames[3];
|
|
|
|
struct version_info merged;
|
|
|
|
|
|
|
|
struct conflict_info *base, *side1, *side2;
|
merge-ort: return early when failing to write a blob
2022-09-28 07:29:22 +00:00
|
|
|
int clean;
|
merge-ort: add implementation of rename collisions
2020-12-15 18:28:04 +00:00
|
|
|
|
|
|
|
pathnames[0] = oldpath;
|
|
|
|
pathnames[other_source_index] = oldpath;
|
|
|
|
pathnames[target_index] = newpath;
|
|
|
|
|
|
|
|
base = strmap_get(&opt->priv->paths, pathnames[0]);
|
|
|
|
side1 = strmap_get(&opt->priv->paths, pathnames[1]);
|
|
|
|
side2 = strmap_get(&opt->priv->paths, pathnames[2]);
|
|
|
|
|
|
|
|
VERIFY_CI(base);
|
|
|
|
VERIFY_CI(side1);
|
|
|
|
VERIFY_CI(side2);
|
|
|
|
|
|
|
|
clean = handle_content_merge(opt, pair->one->path,
|
|
|
|
&base->stages[0],
|
|
|
|
&side1->stages[1],
|
|
|
|
&side2->stages[2],
|
|
|
|
pathnames,
|
|
|
|
1 + 2 * opt->priv->call_depth,
|
|
|
|
&merged);
|
merge-ort: return early when failing to write a blob
2022-09-28 07:29:22 +00:00
|
|
|
if (clean < 0)
|
|
|
|
return -1;
|
merge-ort: add implementation of rename collisions
2020-12-15 18:28:04 +00:00
|
|
|
|
|
|
|
memcpy(&newinfo->stages[target_index], &merged,
|
|
|
|
sizeof(merged));
|
|
|
|
if (!clean) {
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_RENAME_COLLIDES, 0,
|
|
|
|
newpath, oldpath, NULL, NULL,
|
merge-ort: add implementation of rename collisions
2020-12-15 18:28:04 +00:00
|
|
|
_("CONFLICT (rename involved in "
|
|
|
|
"collision): rename of %s -> %s has "
|
|
|
|
"content conflicts AND collides "
|
|
|
|
"with another path; this may result "
|
|
|
|
"in nested conflict markers."),
|
|
|
|
oldpath, newpath);
|
|
|
|
}
|
2020-12-14 16:21:34 +00:00
|
|
|
} else if (collision && source_deleted) {
|
merge-ort: add implementation of rename collisions
2020-12-15 18:28:04 +00:00
|
|
|
/*
|
|
|
|
* rename/add/delete or rename/rename(2to1)/delete:
|
|
|
|
* since oldpath was deleted on the side that didn't
|
|
|
|
* do the rename, there's not much of a content merge
|
|
|
|
* we can do for the rename. oldinfo->merged.is_null
|
|
|
|
* was already set, so we just leave things as-is so
|
|
|
|
* they look like an add/add conflict.
|
|
|
|
*/
|
|
|
|
|
|
|
|
newinfo->path_conflict = 1;
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_RENAME_DELETE, 0,
|
|
|
|
newpath, oldpath, NULL, NULL,
|
merge-ort: add implementation of rename collisions
2020-12-15 18:28:04 +00:00
|
|
|
_("CONFLICT (rename/delete): %s renamed "
|
|
|
|
"to %s in %s, but deleted in %s."),
|
|
|
|
oldpath, newpath, rename_branch, delete_branch);
|
2020-12-14 16:21:34 +00:00
|
|
|
} else {
|
merge-ort: add implementation of rename/delete conflicts
2020-12-15 18:28:03 +00:00
|
|
|
/*
|
|
|
|
* a few different cases...start by copying the
|
|
|
|
* existing stage(s) from oldinfo over the newinfo
|
|
|
|
* and update the pathname(s).
|
|
|
|
*/
|
|
|
|
memcpy(&newinfo->stages[0], &oldinfo->stages[0],
|
|
|
|
sizeof(newinfo->stages[0]));
|
|
|
|
newinfo->filemask |= (1 << MERGE_BASE);
|
|
|
|
newinfo->pathnames[0] = oldpath;
|
2020-12-14 16:21:34 +00:00
|
|
|
if (type_changed) {
|
|
|
|
/* rename vs. typechange */
|
2020-12-15 18:28:06 +00:00
|
|
|
/* Mark the original as resolved by removal */
|
2021-04-26 01:02:56 +00:00
|
|
|
memcpy(&oldinfo->stages[0].oid, null_oid(),
|
2020-12-15 18:28:06 +00:00
|
|
|
sizeof(oldinfo->stages[0].oid));
|
|
|
|
oldinfo->stages[0].mode = 0;
|
|
|
|
oldinfo->filemask &= 0x06;
|
2020-12-14 16:21:34 +00:00
|
|
|
} else if (source_deleted) {
|
|
|
|
/* rename/delete */
|
merge-ort: add implementation of rename/delete conflicts
2020-12-15 18:28:03 +00:00
|
|
|
newinfo->path_conflict = 1;
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_RENAME_DELETE, 0,
|
|
|
|
newpath, oldpath, NULL, NULL,
|
merge-ort: add implementation of rename/delete conflicts
2020-12-15 18:28:03 +00:00
|
|
|
_("CONFLICT (rename/delete): %s renamed"
|
|
|
|
" to %s in %s, but deleted in %s."),
|
|
|
|
oldpath, newpath,
|
|
|
|
rename_branch, delete_branch);
|
2020-12-14 16:21:34 +00:00
|
|
|
} else {
|
|
|
|
/* normal rename */
|
merge-ort: add implementation of normal rename handling
Implement handling of normal renames. This code replaces the following
from merge-recursive.c:
* the code relevant to RENAME_NORMAL in process_renames()
* the RENAME_NORMAL case of process_entry()
Also, there is some shared code from merge-recursive.c for multiple
different rename cases which we will no longer need for this case (or
other rename cases):
* handle_rename_normal()
* setup_rename_conflict_info()
The consolidation of four separate codepaths into one is made possible
by a change in design: process_renames() tweaks the conflict_info
entries within opt->priv->paths such that process_entry() can then
handle all the non-rename conflict types (directory/file, modify/delete,
etc.) orthogonally. This means we're much less likely to miss special
implementation of some kind of combination of conflict types (see
commits brought in by 66c62eaec6 ("Merge branch 'en/merge-tests'",
2020-11-18), especially commit ef52778708 ("merge tests: expect improved
directory/file conflict handling in ort", 2020-10-26) for more details).
That, together with letting worktree/index updating be handled
orthogonally in the merge_switch_to_result() function, dramatically
simplifies the code for various special rename cases.
(To be fair, the code for handling normal renames wasn't all that
complicated beforehand, but it's still much simpler now.)
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-15 18:28:05 +00:00
|
|
|
memcpy(&newinfo->stages[other_source_index],
|
|
|
|
&oldinfo->stages[other_source_index],
|
|
|
|
sizeof(newinfo->stages[0]));
|
|
|
|
newinfo->filemask |= (1 << other_source_index);
|
|
|
|
newinfo->pathnames[other_source_index] = oldpath;
|
2020-12-14 16:21:34 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!type_changed) {
|
|
|
|
/* Mark the original as resolved by removal */
|
|
|
|
oldinfo->merged.is_null = 1;
|
|
|
|
oldinfo->merged.clean = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
return clean_merge;
|
2020-12-14 16:21:31 +00:00
|
|
|
}
|
|
|
|
|
2021-03-11 00:38:30 +00:00
|
|
|
static inline int possible_side_renames(struct rename_info *renames,
|
|
|
|
unsigned side_index)
|
|
|
|
{
|
|
|
|
return renames->pairs[side_index].nr > 0 &&
|
2021-03-13 22:22:02 +00:00
|
|
|
!strintmap_empty(&renames->relevant_sources[side_index]);
|
2021-03-11 00:38:30 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline int possible_renames(struct rename_info *renames)
|
|
|
|
{
|
|
|
|
return possible_side_renames(renames, 1) ||
|
merge-ort, diffcore-rename: employ cached renames when possible
When there are many renames between the old base of a series of commits
and the new base, the way sequencer.c, merge-recursive.c, and
diffcore-rename.c have traditionally split the work resulted in
redetecting the same renames with each and every commit being
transplanted. To address this, the last several commits have been
creating a cache of rename detection results, determining when it was
safe to use such a cache in subsequent merge operations, adding helper
functions, and so on. See the previous half dozen commit messages for
additional discussion of this optimization, particularly the message a
few commits ago entitled "add code to check for whether cached renames
can be reused". This commit finally ties all of that work together,
modifying the merge algorithm to make use of these cached renames.
For the testcases mentioned in commit 557ac0350d ("merge-ort: begin
performance work; instrument with trace2_region_* calls", 2020-10-28),
this change improves the performance as follows:
Before After
no-renames: 5.665 s ± 0.129 s 5.622 s ± 0.059 s
mega-renames: 11.435 s ± 0.158 s 10.127 s ± 0.073 s
just-one-mega: 494.2 ms ± 6.1 ms 500.3 ms ± 3.8 ms
That's a fairly small improvement, but mostly because the previous
optimizations were so effective for these particular testcases; this
optimization only kicks in when the others don't. If we undid the
basename-guided rename detection and skip-irrelevant-renames
optimizations, then we'd see that this series by itself improved
performance as follows:
Before Basename Series After Just This Series
no-renames: 13.815 s ± 0.062 s 5.697 s ± 0.080 s
mega-renames: 1799.937 s ± 0.493 s 205.709 s ± 0.457 s
Since this optimization kicks in to help accelerate cases where the
previous optimizations do not apply, this last comparison shows that
this cached-renames optimization has the potential to help significantly
in cases that don't meet the requirements for the other optimizations to
be effective.
The changes made in this optimization also lay some important groundwork
for a future optimization around having collect_merge_info() avoid
recursing into subtrees in more cases.
However, for this optimization to be effective, merge_switch_to_result()
should only be called when the rebase or cherry-pick operation has
either completed or hit a case where the user needs to resolve a
conflict or edit the result. If it is called after every commit, as
sequencer.c does, then the working tree and index are needlessly updated
with every commit and the cached metadata is tossed, defeating this
optimization. Refactoring sequencer.c to only call
merge_switch_to_result() at the end of the operation is a bigger
undertaking, and the practical benefits of this optimization will not be
realized until that work is performed. Since `test-tool fast-rebase`
only updates at the end of the operation, it was used to obtain the
timings above.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-20 06:09:41 +00:00
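To illustrate the caller-side point above about merge_switch_to_result(), here is a minimal, hedged sketch loosely modeled on what `test-tool fast-rebase` does (hypothetical helper, error handling omitted; the exact merge-ort.h signatures should be double-checked against the header): one long-lived merge_options carries the rename cache from pick to pick, and the worktree/index is updated only once at the end.

	#include "merge-ort.h"

	static void sketch_replay_series(struct merge_options *opt,
					 struct tree *onto_tree,
					 struct tree **bases,
					 struct tree **picks,
					 int nr)
	{
		struct merge_result result = { 0 };
		struct tree *head = onto_tree;
		int i;

		for (i = 0; i < nr; i++) {
			/* in-core merge; renames cached by earlier picks get reused */
			merge_incore_nonrecursive(opt, bases[i], head, picks[i], &result);
			if (result.clean <= 0)
				break;	/* stop; user must resolve conflicts */
			head = result.tree;
		}
		/* single worktree/index update, so the cache stays useful in between */
		merge_switch_to_result(opt, onto_tree, &result, 1, !result.clean);
	}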
|
|
|
possible_side_renames(renames, 2) ||
|
|
|
|
!strmap_empty(&renames->cached_pairs[1]) ||
|
|
|
|
!strmap_empty(&renames->cached_pairs[2]);
|
2021-03-11 00:38:30 +00:00
|
|
|
}
|
|
|
|
|
2021-02-14 07:51:51 +00:00
|
|
|
static void resolve_diffpair_statuses(struct diff_queue_struct *q)
|
|
|
|
{
|
|
|
|
/*
|
|
|
|
* A simplified version of diff_resolve_rename_copy(); would probably
|
|
|
|
* just use that function but it's static...
|
|
|
|
*/
|
|
|
|
int i;
|
|
|
|
struct diff_filepair *p;
|
|
|
|
|
|
|
|
for (i = 0; i < q->nr; ++i) {
|
|
|
|
p = q->queue[i];
|
|
|
|
p->status = 0; /* undecided */
|
|
|
|
if (!DIFF_FILE_VALID(p->one))
|
|
|
|
p->status = DIFF_STATUS_ADDED;
|
|
|
|
else if (!DIFF_FILE_VALID(p->two))
|
|
|
|
p->status = DIFF_STATUS_DELETED;
|
|
|
|
else if (DIFF_PAIR_RENAME(p))
|
|
|
|
p->status = DIFF_STATUS_RENAMED;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-05-20 06:09:39 +00:00
|
|
|
static void prune_cached_from_relevant(struct rename_info *renames,
|
|
|
|
unsigned side)
|
|
|
|
{
|
|
|
|
/* Reason for this function described in add_pair() */
|
|
|
|
struct hashmap_iter iter;
|
|
|
|
struct strmap_entry *entry;
|
|
|
|
|
|
|
|
/* Remove from relevant_sources all entries in cached_pairs[side] */
|
|
|
|
strmap_for_each_entry(&renames->cached_pairs[side], &iter, entry) {
|
|
|
|
strintmap_remove(&renames->relevant_sources[side],
|
|
|
|
entry->key);
|
|
|
|
}
|
|
|
|
/* Remove from relevant_sources all entries in cached_irrelevant[side] */
|
|
|
|
strset_for_each_entry(&renames->cached_irrelevant[side], &iter, entry) {
|
|
|
|
strintmap_remove(&renames->relevant_sources[side],
|
|
|
|
entry->key);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void use_cached_pairs(struct merge_options *opt,
|
|
|
|
struct strmap *cached_pairs,
|
|
|
|
struct diff_queue_struct *pairs)
|
|
|
|
{
|
|
|
|
struct hashmap_iter iter;
|
|
|
|
struct strmap_entry *entry;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Add to side_pairs all entries from renames->cached_pairs[side_index].
|
|
|
|
* (Info in cached_irrelevant[side_index] is not relevant here.)
|
|
|
|
*/
|
|
|
|
strmap_for_each_entry(cached_pairs, &iter, entry) {
|
|
|
|
struct diff_filespec *one, *two;
|
|
|
|
const char *old_name = entry->key;
|
|
|
|
const char *new_name = entry->value;
|
|
|
|
if (!new_name)
|
|
|
|
new_name = old_name;
|
|
|
|
|
2021-07-31 17:27:38 +00:00
|
|
|
/*
|
|
|
|
* cached_pairs has *copies* of old_name and new_name,
|
|
|
|
* because it has to persist across merges. Since
|
|
|
|
* pool_alloc_filespec() will just re-use the existing
|
|
|
|
* filenames, which will also get re-used by
|
|
|
|
* opt->priv->paths if they become renames, and then
|
|
|
|
* get freed at the end of the merge, that would leave
|
|
|
|
* the copy in cached_pairs dangling. Avoid this by
|
|
|
|
* making a copy here.
|
|
|
|
*/
|
|
|
|
old_name = mem_pool_strdup(&opt->priv->pool, old_name);
|
|
|
|
new_name = mem_pool_strdup(&opt->priv->pool, new_name);
|
2021-05-20 06:09:39 +00:00
|
|
|
|
|
|
|
/* We don't care about oid/mode, only filenames and status */
|
2021-07-31 17:27:38 +00:00
|
|
|
one = pool_alloc_filespec(&opt->priv->pool, old_name);
|
|
|
|
two = pool_alloc_filespec(&opt->priv->pool, new_name);
|
|
|
|
pool_diff_queue(&opt->priv->pool, pairs, one, two);
|
2021-05-20 06:09:39 +00:00
|
|
|
pairs->queue[pairs->nr-1]->status = entry->value ? 'R' : 'D';
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-05-20 06:09:35 +00:00
|
|
|
static void cache_new_pair(struct rename_info *renames,
|
|
|
|
int side,
|
|
|
|
char *old_path,
|
|
|
|
char *new_path,
|
|
|
|
int free_old_value)
|
|
|
|
{
|
|
|
|
char *old_value;
|
|
|
|
new_path = xstrdup(new_path);
|
|
|
|
old_value = strmap_put(&renames->cached_pairs[side],
|
|
|
|
old_path, new_path);
|
|
|
|
strset_add(&renames->cached_target_names[side], new_path);
|
|
|
|
if (free_old_value)
|
|
|
|
free(old_value);
|
|
|
|
else
|
|
|
|
assert(!old_value);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void possibly_cache_new_pair(struct rename_info *renames,
|
|
|
|
struct diff_filepair *p,
|
|
|
|
unsigned side,
|
|
|
|
char *new_path)
|
|
|
|
{
|
|
|
|
int dir_renamed_side = 0;
|
|
|
|
|
|
|
|
if (new_path) {
|
|
|
|
/*
|
|
|
|
* Directory renames happen on the other side of history from
|
|
|
|
* the side that adds new files to the old directory.
|
|
|
|
*/
|
|
|
|
dir_renamed_side = 3 - side;
|
|
|
|
} else {
|
|
|
|
int val = strintmap_get(&renames->relevant_sources[side],
|
|
|
|
p->one->path);
|
|
|
|
if (val == RELEVANT_NO_MORE) {
|
|
|
|
assert(p->status == 'D');
|
|
|
|
strset_add(&renames->cached_irrelevant[side],
|
|
|
|
p->one->path);
|
|
|
|
}
|
|
|
|
if (val <= 0)
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (p->status == 'D') {
|
|
|
|
/*
|
|
|
|
* If we already had this delete, we'll just set its value
|
|
|
|
* to NULL again, so no harm.
|
|
|
|
*/
|
|
|
|
strmap_put(&renames->cached_pairs[side], p->one->path, NULL);
|
|
|
|
} else if (p->status == 'R') {
|
|
|
|
if (!new_path)
|
|
|
|
new_path = p->two->path;
|
|
|
|
else
|
|
|
|
cache_new_pair(renames, dir_renamed_side,
|
|
|
|
p->two->path, new_path, 0);
|
|
|
|
cache_new_pair(renames, side, p->one->path, new_path, 1);
|
|
|
|
} else if (p->status == 'A' && new_path) {
|
|
|
|
cache_new_pair(renames, dir_renamed_side,
|
|
|
|
p->two->path, new_path, 0);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-12-14 16:21:31 +00:00
|
|
|
static int compare_pairs(const void *a_, const void *b_)
|
|
|
|
{
|
2020-12-14 16:21:33 +00:00
|
|
|
const struct diff_filepair *a = *((const struct diff_filepair **)a_);
|
|
|
|
const struct diff_filepair *b = *((const struct diff_filepair **)b_);
|
|
|
|
|
|
|
|
return strcmp(a->one->path, b->one->path);
|
2020-12-14 16:21:31 +00:00
|
|
|
}
|
|
|
|
|
2021-06-08 16:11:41 +00:00
|
|
|
/* Call diffcore_rename() to update deleted/added pairs into rename pairs */
|
merge-ort: restart merge with cached renames to reduce process entry cost
The merge algorithm mostly consists of the following three functions:
collect_merge_info()
detect_and_process_renames()
process_entries()
Prior to the trivial directory resolution optimization of the last half
dozen commits, process_entries() was consistently the slowest, followed
by collect_merge_info(), then detect_and_process_renames(). When the
trivial directory resolution applies, it often dramatically decreases
the amount of time spent in the two slower functions.
Looking at the performance results in the previous commit, the trivial
directory resolution optimization helps amazingly well when there are no
relevant renames. It also helps really well when reapplying a long
series of linear commits (such as in a rebase or cherry-pick), since the
relevant renames may well be cached from the first reapplied commit.
But when there are any relevant renames that are not cached (represented
by the just-one-mega testcase), then the optimization does not help at
all.
Often, I noticed that when the optimization does not apply, it is
because there are a handful of relevant sources -- maybe even only one.
It felt frustrating to need to recurse into potentially hundreds or even
thousands of directories just for a single rename, but it was needed for
correctness.
However, staring at this list of functions and noticing that
process_entries() is the most expensive and knowing I could avoid it if
I had cached renames suggested a simple idea: change
collect_merge_info()
detect_and_process_renames()
process_entries()
into
collect_merge_info()
detect_and_process_renames()
<cache all the renames, and restart>
collect_merge_info()
detect_and_process_renames()
process_entries()
This may seem odd and look like more work. However, note that although
we run collect_merge_info() twice, the second time we get to employ
trivial directory resolves, which makes it much faster, so the increased
time in collect_merge_info() is small. While we run
detect_and_process_renames() again, all renames are cached so it's
nearly a no-op (we don't call into diffcore_rename_extended() but we do
have a little bit of data structure checking and fixing up). And the
big payoff comes from the fact that process_entries() will be much
faster due to having far fewer entries to process.
This restarting only makes sense if we can save recursing into enough
directories to make it worth our while. Introduce a simple heuristic to
guide this. Note that this heuristic uses a "wanted_factor" that I have
virtually no actual real world data for, just some back-of-the-envelope
quasi-scientific calculations that I included in some comments and then
plucked a simple round number out of thin air. It could be that
tweaking this number to make it either higher or lower improves the
optimization. (There's slightly more here; when I first introduced this
optimization, I used a factor of 10, because I was completely confident
it was big enough to not cause slowdowns in special cases. I was
certain it was higher than needed. Several months later, I added the
rough calculations which make me think the optimal number is close to 2;
but instead of pushing to the limit, I just bumped it to 3 to reduce the
risk that there are special cases where this optimization can result in
slowing down the code a little. If the ratio of path counts is below 3,
we probably will only see minor performance improvements at best
anyway.)
Also, note that while the diffstat looks kind of long (nearly 100
lines), more than half of it is in two comments explaining how things
work.
For the testcases mentioned in commit 557ac0350d ("merge-ort: begin
performance work; instrument with trace2_region_* calls", 2020-10-28),
this change improves the performance as follows:
Before After
no-renames: 205.1 ms ± 3.8 ms 204.2 ms ± 3.0 ms
mega-renames: 1.564 s ± 0.010 s 1.076 s ± 0.015 s
just-one-mega: 479.5 ms ± 3.9 ms 364.1 ms ± 7.0 ms
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-16 05:22:37 +00:00
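A minimal sketch of the restart heuristic described above; the function and parameter names are hypothetical stand-ins (the real counts live in the merge machinery's internal structures), and only the wanted_factor idea comes from the commit message.

	static int sketch_worth_restarting(unsigned paths_first_pass,
					   unsigned paths_second_pass_estimate)
	{
		const unsigned wanted_factor = 3; /* back-of-the-envelope value */

		if (!paths_second_pass_estimate)
			return 1;
		/* only redo collect_merge_info() if the second pass skips enough paths */
		return paths_first_pass / paths_second_pass_estimate >= wanted_factor;
	}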
|
|
|
static int detect_regular_renames(struct merge_options *opt,
|
|
|
|
unsigned side_index)
|
2020-12-14 16:21:31 +00:00
|
|
|
{
|
2020-12-14 16:21:32 +00:00
|
|
|
struct diff_options diff_opts;
|
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
|
|
|
|
merge-ort, diffcore-rename: employ cached renames when possible
2021-05-20 06:09:41 +00:00
|
|
|
prune_cached_from_relevant(renames, side_index);
|
2021-03-11 00:38:30 +00:00
|
|
|
if (!possible_side_renames(renames, side_index)) {
|
|
|
|
/*
|
|
|
|
* No rename detection needed for this side, but we still need
|
|
|
|
* to make sure 'adds' are marked correctly in case the other
|
|
|
|
* side had directory renames.
|
|
|
|
*/
|
|
|
|
resolve_diffpair_statuses(&renames->pairs[side_index]);
|
merge-ort: restart merge with cached renames to reduce process entry cost
2021-07-16 05:22:37 +00:00
|
|
|
return 0;
|
2021-03-11 00:38:30 +00:00
|
|
|
}
|
|
|
|
|
2021-05-20 06:09:38 +00:00
|
|
|
partial_clear_dir_rename_count(&renames->dir_rename_count[side_index]);
|
2020-12-14 16:21:32 +00:00
|
|
|
repo_diff_setup(opt->repo, &diff_opts);
|
|
|
|
diff_opts.flags.recursive = 1;
|
|
|
|
diff_opts.flags.rename_empty = 0;
|
|
|
|
diff_opts.detect_rename = DIFF_DETECT_RENAME;
|
|
|
|
diff_opts.rename_limit = opt->rename_limit;
|
|
|
|
if (opt->rename_limit <= 0)
|
rename: bump limit defaults yet again
These were last bumped in commit 92c57e5c1d29 (bump rename limit
defaults (again), 2011-02-19), and were bumped both because processors
had gotten faster, and because people were getting ugly merges that
caused problems and reporting it to the mailing list (suggesting that
folks were willing to spend more time waiting).
Since that time:
* Linus has continued recommending kernel folks to set
diff.renameLimit=0 (maps to 32767, currently)
* Folks with repositories with lots of renames were happy to set
merge.renameLimit above 32767, once the code supported that, to
get correct cherry-picks
* Processors have gotten faster
* It has been discovered that the timing methodology used last time
probably used example files that were too large.
The last point is probably worth explaining a bit more:
* The "average" file size used appears to have been average blob size
in the linux kernel history at the time (probably v2.6.25 or
something close to it).
* Since bigger files are modified more frequently, such a computation
weights towards larger files.
* Larger files may be more likely to be modified over time, but are
not more likely to be renamed -- the mean and median blob size
within a tree are a bit higher than the mean and median of blob
sizes in the history leading up to that version for the linux
kernel.
* The mean blob size in v2.6.25 was half the average blob size in
history leading to that point
* The median blob size in v2.6.25 was about 40% of the mean blob size
in v2.6.25.
* Since the mean blob size is more than double the median blob size,
any file as big as the mean will not be compared to any files of
median size or less (because they'd be more than 50% dissimilar).
* Since it is the number of files compared that provides the O(n^2)
behavior, median-sized files should matter more than mean-sized
ones.
The combined effect of the above is that the file size used in past
calculations was likely about 5x too large. Combine that with a CPU
performance improvement of ~30%, and we can increase the limits by
a factor of sqrt(5/(1-.3)) = 2.67, while keeping the original stated
time limits.
Keeping the same approximate time limit probably makes sense for
diff.renameLimit (there is no progress feedback in e.g. git log -p),
but the experience above suggests merge.renameLimit could be extended
significantly. In fact, it probably would make sense to have an
unlimited default setting for merge.renameLimit, but that would
likely need to be coupled with changes to how progress is displayed.
(See https://lore.kernel.org/git/YOx+Ok%2FEYvLqRMzJ@coredump.intra.peff.net/
for details in that area.) For now, let's just bump the approximate
time limit from 10s to 1m.
(Note: We do not want to use actual time limits, because getting results
that depend on how loaded your system is that day feels bad, and because
we don't discover that we won't get all the renames until after we've
put in a lot of work rather than just upfront telling the user there are
too many files involved.)
Using the original time limit of 2s for diff.renameLimit, and bumping
merge.renameLimit from 10s to 60s, I found the following timings using
the simple script at the end of this commit message (on an AWS c5.xlarge
which reports as "Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz"):
N Timing
1300 1.995s
7100 59.973s
So let's round down to nice even numbers and bump the limits from
400->1000, and from 1000->7000.
Here is the measure_rename_perf script (adapted from
https://lore.kernel.org/git/20080211113516.GB6344@coredump.intra.peff.net/
in particular to avoid triggering the linear handling from
basename-guided rename detection):
#!/bin/bash
n=$1; shift
rm -rf repo
mkdir repo && cd repo
git init -q -b main
mkdata() {
mkdir $1
for i in `seq 1 $2`; do
(sed "s/^/$i /" <../sample
echo tag: $1
) >$1/$i
done
}
mkdata initial $n
git add .
git commit -q -m initial
mkdata new $n
git add .
cd new
for i in *; do git mv $i $i.renamed; done
cd ..
git rm -q -rf initial
git commit -q -m new
time git diff-tree -M -l0 --summary HEAD^ HEAD
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-15 00:45:24 +00:00
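A rough check of the arithmetic above (my reading, not text from the commit): sqrt(5/(1-0.3)) = sqrt(5/0.7) ≈ 2.67, and 400 * 2.67 ≈ 1070, rounded down to 1000 for diff.renameLimit at the unchanged ~2s budget. For merge.renameLimit, raising the budget from ~10s to ~60s buys roughly another sqrt(6) ≈ 2.45x on top of the 2.67x correction (the number of comparisons grows as O(n^2)), i.e. about 1000 * 2.67 * 2.45 ≈ 6500, which lines up with the measured N=7100 at ~60s and the chosen limit of 7000.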
|
|
|
diff_opts.rename_limit = 7000;
|
2020-12-14 16:21:32 +00:00
|
|
|
diff_opts.rename_score = opt->rename_score;
|
|
|
|
diff_opts.show_rename_progress = opt->show_rename_progress;
|
|
|
|
diff_opts.output_format = DIFF_FORMAT_NO_OUTPUT;
|
|
|
|
diff_setup_done(&diff_opts);
|
merge-ort: begin performance work; instrument with trace2_region_* calls
Add some timing instrumentation for both merge-ort and diffcore-rename;
I used these to measure and optimize performance in both, and several
future patch series will build on these to reduce the timings of some
select testcases.
=== Setup ===
The primary testcase I used involved rebasing a random topic in the
linux kernel (consisting of 35 patches) against an older version. I
added two variants, one where I rename a toplevel directory, and another
where I only rebase one patch instead of the whole topic. The setup is
as follows:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ git branch hwmon-updates fd8bdb23b91876ac1e624337bb88dc1dcc21d67e
$ git branch hwmon-just-one fd8bdb23b91876ac1e624337bb88dc1dcc21d67e~34
$ git branch base 4703d9119972bf586d2cca76ec6438f819ffa30e
$ git switch -c 5.4-renames v5.4
$ git mv drivers pilots # Introduce over 26,000 renames
$ git commit -m "Rename drivers/ to pilots/"
$ git config merge.renameLimit 30000
$ git config merge.directoryRenames true
=== Testcases ===
Now with REBASE standing for either "git rebase [--merge]" (using
merge-recursive) or "test-tool fast-rebase" (using merge-ort), the
testcases are:
Testcase #1: no-renames
$ git checkout v5.4^0
$ REBASE --onto HEAD base hwmon-updates
Note: technically the name is misleading; there are some renames, but
very few. Rename detection only takes about half the overall time.
Testcase #2: mega-renames
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-updates
Testcase #3: just-one-mega
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-just-one
=== Timing results ===
Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
10 runs for the other two cases):
merge-recursive merge-ort
no-renames: 18.912 s ± 0.174 s 14.263 s ± 0.053 s
mega-renames: 5964.031 s ± 10.459 s 5504.231 s ± 5.150 s
just-one-mega: 149.583 s ± 0.751 s 158.534 s ± 0.498 s
A single re-run of each with some breakdowns:
--- no-renames ---
merge-recursive merge-ort
overall runtime: 19.302 s 14.257 s
inexact rename detection: 7.603 s 7.906 s
everything else: 11.699 s 6.351 s
--- mega-renames ---
merge-recursive merge-ort
overall runtime: 5950.195 s 5499.672 s
inexact rename detection: 5746.309 s 5487.120 s
everything else: 203.886 s 17.552 s
--- just-one-mega ---
merge-recursive merge-ort
overall runtime: 151.001 s 158.582 s
inexact rename detection: 143.448 s 157.835 s
everything else: 7.553 s 0.747 s
=== Timing observations ===
0) Maximum speedup
The "everything else" row represents the maximum speedup we could
achieve if we were to somehow infinitely parallelize inexact rename
detection, but leave everything else alone. The fact that this is so
much smaller than the real runtime (even in the case with virtually no
renames) makes it clear just how overwhelmingly large the time spent on
rename detection can be.
1) no-renames
1a) merge-ort is faster than merge-recursive, which is nice. However,
this still should not be considered good enough. Although the "merge"
backend to rebase (merge-recursive) is sometimes faster than the "apply"
backend, this is one of those cases where it is not. In fact, even
merge-ort is slower. The "apply" backend can complete this testcase in
6.940 s ± 0.485 s
which is about 2x faster than merge-ort and 3x faster than
merge-recursive. One goal of the merge-ort performance work will be to
make it faster than git-am on this (and similar) testcases.
2) mega-renames
2a) Obviously rename detection is a huge cost; it's where most of the time
is spent. We need to cut that down. If we could somehow infinitely
parallelize it and drive its time to 0, the merge-recursive time would
drop to about 204s, and the merge-ort time would drop to about 17s. I
think this particular stat shows I've subtly baked a couple performance
improvements into merge-ort and into fast-rebase already.
3) just-one-mega
3a) not much to say here, it just gives some flavor for how rebasing
only one patch compares to rebasing 35.
=== Goals ===
This patch is obviously just the beginning. Here are some of my goals
that this measurement will help us achieve:
* Drive the cost of rename detection down considerably for merges
* After the above has been achieved, see if there are other slowness
factors (which would have previously been overshadowed by rename
detection costs) which we can then focus on and also optimize.
* Ensure our rebase testcase that requires little rename detection
is noticeably faster with merge-ort than with apply-based rebase.
Signed-off-by: Elijah Newren <newren@gmail.com>
Acked-by: Taylor Blau <ttaylorr@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-24 06:01:12 +00:00
|
|
|
|
2021-02-14 07:51:51 +00:00
|
|
|
diff_queued_diff = renames->pairs[side_index];
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("diff", "diffcore_rename", opt->repo);
|
2021-02-27 00:30:42 +00:00
|
|
|
diffcore_rename_extended(&diff_opts,
|
2021-07-31 17:27:38 +00:00
|
|
|
&opt->priv->pool,
|
2021-03-11 00:38:29 +00:00
|
|
|
&renames->relevant_sources[side_index],
|
2021-02-27 00:30:42 +00:00
|
|
|
&renames->dirs_removed[side_index],
|
merge-ort, diffcore-rename: employ cached renames when possible
2021-05-20 06:09:41 +00:00
|
|
|
&renames->dir_rename_count[side_index],
|
|
|
|
&renames->cached_pairs[side_index]);
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("diff", "diffcore_rename", opt->repo);
|
2021-02-14 07:51:51 +00:00
|
|
|
resolve_diffpair_statuses(&diff_queued_diff);
|
2020-12-14 16:21:32 +00:00
|
|
|
|
merge-ort: restart merge with cached renames to reduce process entry cost
2021-07-16 05:22:37 +00:00
|
|
|
if (diff_opts.needed_rename_limit > 0)
|
|
|
|
renames->redo_after_renames = 0;
|
2020-12-14 16:21:32 +00:00
|
|
|
if (diff_opts.needed_rename_limit > renames->needed_limit)
|
|
|
|
renames->needed_limit = diff_opts.needed_rename_limit;
|
|
|
|
|
|
|
|
renames->pairs[side_index] = diff_queued_diff;
|
|
|
|
|
|
|
|
diff_opts.output_format = DIFF_FORMAT_NO_OUTPUT;
|
|
|
|
diff_queued_diff.nr = 0;
|
|
|
|
diff_queued_diff.queue = NULL;
|
|
|
|
diff_flush(&diff_opts);
|
merge-ort: restart merge with cached renames to reduce process entry cost
2021-07-16 05:22:37 +00:00
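A rough sketch of the restart heuristic described above. Only the wanted_factor value of 3 comes from the message; the helper name and the exact quantities being compared are illustrative assumptions, not the actual merge-ort code:

    /*
     * Illustrative sketch: restart the merge (re-running
     * collect_merge_info() with all renames cached) only when the
     * second pass is expected to visit several times fewer paths
     * than the first pass did.
     */
    static int worth_restarting(unsigned paths_first_pass,
                                unsigned paths_with_cached_renames)
    {
        const unsigned wanted_factor = 3; /* round number from the message */

        return wanted_factor * paths_with_cached_renames <= paths_first_pass;
    }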
|
|
|
|
|
|
|
return 1;
|
2020-12-14 16:21:31 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2021-06-08 16:11:41 +00:00
|
|
|
* Get information about all renames which occurred in 'side_pairs', making use
|
|
|
|
* of any implicit directory renames in 'dir_renames_for_side' (also making use of
|
|
|
|
* the implicit directory renames in 'rename_exclusions' as needed by
|
|
|
|
* check_for_directory_rename()). Add all (updated) renames into result.
|
2020-12-14 16:21:31 +00:00
|
|
|
*/
|
|
|
|
static int collect_renames(struct merge_options *opt,
|
|
|
|
struct diff_queue_struct *result,
|
2021-01-19 19:53:45 +00:00
|
|
|
unsigned side_index,
|
2022-07-05 01:33:42 +00:00
|
|
|
struct strmap *collisions,
|
2021-01-19 19:53:45 +00:00
|
|
|
struct strmap *dir_renames_for_side,
|
|
|
|
struct strmap *rename_exclusions)
|
2020-12-14 16:21:31 +00:00
|
|
|
{
|
2020-12-14 16:21:33 +00:00
|
|
|
int i, clean = 1;
|
|
|
|
struct diff_queue_struct *side_pairs;
|
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
|
|
|
|
|
|
|
side_pairs = &renames->pairs[side_index];
|
|
|
|
|
|
|
|
for (i = 0; i < side_pairs->nr; ++i) {
|
|
|
|
struct diff_filepair *p = side_pairs->queue[i];
|
2021-01-19 19:53:45 +00:00
|
|
|
char *new_path; /* non-NULL only with directory renames */
|
2020-12-14 16:21:33 +00:00
|
|
|
|
2021-01-19 19:53:45 +00:00
|
|
|
if (p->status != 'A' && p->status != 'R') {
|
2021-05-20 06:09:35 +00:00
|
|
|
possibly_cache_new_pair(renames, p, side_index, NULL);
|
2021-07-31 17:27:38 +00:00
|
|
|
pool_diff_free_filepair(&opt->priv->pool, p);
|
2020-12-14 16:21:33 +00:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2021-01-19 19:53:45 +00:00
|
|
|
new_path = check_for_directory_rename(opt, p->two->path,
|
|
|
|
side_index,
|
|
|
|
dir_renames_for_side,
|
|
|
|
rename_exclusions,
|
2022-07-05 01:33:42 +00:00
|
|
|
collisions,
|
2021-01-19 19:53:45 +00:00
|
|
|
&clean);
|
|
|
|
|
2021-05-20 06:09:35 +00:00
|
|
|
possibly_cache_new_pair(renames, p, side_index, new_path);
|
2021-01-19 19:53:45 +00:00
|
|
|
if (p->status != 'R' && !new_path) {
|
2021-07-31 17:27:38 +00:00
|
|
|
pool_diff_free_filepair(&opt->priv->pool, p);
|
2021-01-19 19:53:45 +00:00
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (new_path)
|
|
|
|
apply_directory_rename_modifications(opt, p, new_path);
|
|
|
|
|
2020-12-14 16:21:33 +00:00
|
|
|
/*
|
|
|
|
* p->score comes back from diffcore_rename_extended() with
|
|
|
|
* the similarity of the renamed file. The similarity
|
|
|
|
* was used to determine that the two files were related
|
|
|
|
* and are a rename, which we have already used, but beyond
|
|
|
|
* that we have no use for the similarity. So p->score is
|
|
|
|
* now irrelevant. However, process_renames() will need to
|
|
|
|
* know which side of the merge this rename was associated
|
|
|
|
* with, so overwrite p->score with that value.
|
|
|
|
*/
|
|
|
|
p->score = side_index;
|
|
|
|
result->queue[result->nr++] = p;
|
|
|
|
}
|
|
|
|
|
|
|
|
return clean;
|
2020-12-14 16:21:31 +00:00
|
|
|
}
|
|
|
|
|
2020-12-13 08:04:09 +00:00
|
|
|
static int detect_and_process_renames(struct merge_options *opt,
|
|
|
|
struct tree *merge_base,
|
|
|
|
struct tree *side1,
|
|
|
|
struct tree *side2)
|
|
|
|
{
|
merge-ort: fix small memory leak in detect_and_process_renames()
detect_and_process_renames() detects renames on both sides of history
and then combines these into a single diff_queue_struct. The combined
diff_queue_struct needs to be able to hold the renames found on either
side, and since it knows the (maximum) size it needs, it pre-emptively
grows the array to the appropriate size:
ALLOC_GROW(combined.queue,
renames->pairs[1].nr + renames->pairs[2].nr,
combined.alloc);
It then collects the items from each side:
collect_renames(opt, &combined, MERGE_SIDE1, ...)
collect_renames(opt, &combined, MERGE_SIDE2, ...)
Note, though, that collect_renames() sometimes determines that some
pairs are unnecessary and does not include them in the combined array.
When it is done, detect_and_process_renames() frees this memory:
if (combined.nr) {
...
free(combined.queue);
}
The problem is that sometimes even when there are pairs, none of them
are necessary. Instead of checking combined.nr, just remove the
if-check; free() knows to skip NULL pointers. This change fixes the
following memory leak, as reported by valgrind:
==PID== 192 bytes in 1 blocks are definitely lost in loss record 107 of 134
==PID== at 0xADDRESS: malloc
==PID== by 0xADDRESS: realloc
==PID== by 0xADDRESS: xrealloc (wrapper.c:126)
==PID== by 0xADDRESS: detect_and_process_renames (merge-ort.c:3134)
==PID== by 0xADDRESS: merge_ort_nonrecursive_internal (merge-ort.c:4610)
==PID== by 0xADDRESS: merge_ort_internal (merge-ort.c:4709)
==PID== by 0xADDRESS: merge_incore_recursive (merge-ort.c:4760)
==PID== by 0xADDRESS: merge_ort_recursive (merge-ort-wrappers.c:57)
==PID== by 0xADDRESS: try_merge_strategy (merge.c:753)
==PID== by 0xADDRESS: cmd_merge (merge.c:1676)
==PID== by 0xADDRESS: run_builtin (git.c:461)
==PID== by 0xADDRESS: handle_builtin (git.c:713)
==PID== by 0xADDRESS: run_argv (git.c:780)
==PID== by 0xADDRESS: cmd_main (git.c:911)
==PID== by 0xADDRESS: main (common-main.c:52)
Reported-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-02-20 01:29:50 +00:00
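A minimal standalone illustration of the property the fix relies on; the struct and function names here are hypothetical, not the merge-ort ones. free(NULL) is defined to do nothing, so the unconditional call is always safe and never leaks the buffer:

    #include <stdlib.h>

    struct example_queue {
        void **queue;
        int nr, alloc;
    };

    static void release_queue(struct example_queue *q)
    {
        /* No 'if (q->nr)' guard needed: free(NULL) is a no-op. */
        free(q->queue);
        q->queue = NULL;
        q->nr = q->alloc = 0;
    }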
|
|
|
struct diff_queue_struct combined = { 0 };
|
2020-12-14 16:21:31 +00:00
|
|
|
struct rename_info *renames = &opt->priv->renames;
|
2022-07-05 01:33:42 +00:00
|
|
|
struct strmap collisions[3];
|
2022-02-20 01:29:50 +00:00
|
|
|
int need_dir_renames, s, i, clean = 1;
|
2021-07-16 05:22:37 +00:00
|
|
|
unsigned detection_run = 0;
|
2020-12-14 16:21:31 +00:00
|
|
|
|
2021-03-11 00:38:30 +00:00
|
|
|
if (!possible_renames(renames))
|
|
|
|
goto cleanup;
|
2020-12-14 16:21:31 +00:00
|
|
|
|
merge-ort: begin performance work; instrument with trace2_region_* calls
Add some timing instrumentation for both merge-ort and diffcore-rename;
I used these to measure and optimize performance in both, and several
future patch series will build on these to reduce the timings of some
select testcases.
=== Setup ===
The primary testcase I used involved rebasing a random topic in the
linux kernel (consisting of 35 patches) against an older version. I
added two variants, one where I rename a toplevel directory, and another
where I only rebase one patch instead of the whole topic. The setup is
as follows:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ git branch hwmon-updates fd8bdb23b91876ac1e624337bb88dc1dcc21d67e
$ git branch hwmon-just-one fd8bdb23b91876ac1e624337bb88dc1dcc21d67e~34
$ git branch base 4703d9119972bf586d2cca76ec6438f819ffa30e
$ git switch -c 5.4-renames v5.4
$ git mv drivers pilots # Introduce over 26,000 renames
$ git commit -m "Rename drivers/ to pilots/"
$ git config merge.renameLimit 30000
$ git config merge.directoryRenames true
=== Testcases ===
Now with REBASE standing for either "git rebase [--merge]" (using
merge-recursive) or "test-tool fast-rebase" (using merge-ort), the
testcases are:
Testcase #1: no-renames
$ git checkout v5.4^0
$ REBASE --onto HEAD base hwmon-updates
Note: technically the name is misleading; there are some renames, but
very few. Rename detection only takes about half the overall time.
Testcase #2: mega-renames
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-updates
Testcase #3: just-one-mega
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-just-one
=== Timing results ===
Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
10 runs for the other two cases):
merge-recursive merge-ort
no-renames: 18.912 s ± 0.174 s 14.263 s ± 0.053 s
mega-renames: 5964.031 s ± 10.459 s 5504.231 s ± 5.150 s
just-one-mega: 149.583 s ± 0.751 s 158.534 s ± 0.498 s
A single re-run of each with some breakdowns:
--- no-renames ---
merge-recursive merge-ort
overall runtime: 19.302 s 14.257 s
inexact rename detection: 7.603 s 7.906 s
everything else: 11.699 s 6.351 s
--- mega-renames ---
merge-recursive merge-ort
overall runtime: 5950.195 s 5499.672 s
inexact rename detection: 5746.309 s 5487.120 s
everything else: 203.886 s 17.552 s
--- just-one-mega ---
merge-recursive merge-ort
overall runtime: 151.001 s 158.582 s
inexact rename detection: 143.448 s 157.835 s
everything else: 7.553 s 0.747 s
=== Timing observations ===
0) Maximum speedup
The "everything else" row represents the maximum speedup we could
achieve if we were to somehow infinitely parallelize inexact rename
detection, but leave everything else alone. The fact that this is so
much smaller than the real runtime (even in the case with virtually no
renames) makes it clear just how overwhelmingly large the time spent on
rename detection can be.
1) no-renames
1a) merge-ort is faster than merge-recursive, which is nice. However,
this still should not be considered good enough. Although the "merge"
backend to rebase (merge-recursive) is sometimes faster than the "apply"
backend, this is one of those cases where it is not. In fact, even
merge-ort is slower. The "apply" backend can complete this testcase in
6.940 s ± 0.485 s
which is about 2x faster than merge-ort and 3x faster than
merge-recursive. One goal of the merge-ort performance work will be to
make it faster than git-am on this (and similar) testcases.
2) mega-renames
2a) Obviously rename detection is a huge cost; it's where most of the time
is spent. We need to cut that down. If we could somehow infinitely
parallelize it and drive its time to 0, the merge-recursive time would
drop to about 204s, and the merge-ort time would drop to about 17s. I
think this particular stat shows I've subtly baked a couple performance
improvements into merge-ort and into fast-rebase already.
3) just-one-mega
3a) Not much to say here; it just gives some flavor for how rebasing
only one patch compares to rebasing 35.
=== Goals ===
This patch is obviously just the beginning. Here are some of my goals
that this measurement will help us achieve:
* Drive the cost of rename detection down considerably for merges
* After the above has been achieved, see if there are other slowness
factors (which would have previously been overshadowed by rename
detection costs) which we can then focus on and also optimize.
* Ensure our rebase testcase that requires little rename detection
is noticeably faster with merge-ort than with apply-based rebase.
Signed-off-by: Elijah Newren <newren@gmail.com>
Acked-by: Taylor Blau <ttaylorr@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-24 06:01:12 +00:00
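The instrumentation pattern itself is simple and is visible in the code that follows: each timed region is bracketed by trace2_region_enter() and trace2_region_leave() with matching category/label/repo arguments, and the leave call must be issued on every exit path (note the extra trace2_region_leave() before the early "goto cleanup" in the restart case below). A compressed sketch of the pattern, with an illustrative label:

    trace2_region_enter("merge", "some expensive phase", opt->repo);
    /* ... expensive work to be timed ... */
    trace2_region_leave("merge", "some expensive phase", opt->repo);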
|
|
|
trace2_region_enter("merge", "regular renames", opt->repo);
|
2021-07-16 05:22:37 +00:00
|
|
|
detection_run |= detect_regular_renames(opt, MERGE_SIDE1);
|
|
|
|
detection_run |= detect_regular_renames(opt, MERGE_SIDE2);
|
2022-01-17 18:25:55 +00:00
|
|
|
if (renames->needed_limit) {
|
|
|
|
renames->cached_pairs_valid_side = 0;
|
|
|
|
renames->redo_after_renames = 0;
|
|
|
|
}
|
2021-07-16 05:22:37 +00:00
|
|
|
if (renames->redo_after_renames && detection_run) {
|
|
|
|
int i, side;
|
|
|
|
struct diff_filepair *p;
|
|
|
|
|
|
|
|
/* Cache the renames we found */
|
|
|
|
for (side = MERGE_SIDE1; side <= MERGE_SIDE2; side++) {
|
|
|
|
for (i = 0; i < renames->pairs[side].nr; ++i) {
|
|
|
|
p = renames->pairs[side].queue[i];
|
|
|
|
possibly_cache_new_pair(renames, p, side, NULL);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Restart the merge with the cached renames */
|
|
|
|
renames->redo_after_renames = 2;
|
|
|
|
trace2_region_leave("merge", "regular renames", opt->repo);
|
|
|
|
goto cleanup;
|
|
|
|
}
|
merge-ort, diffcore-rename: employ cached renames when possible
When there are many renames between the old base of a series of commits
and the new base, the way sequencer.c, merge-recursive.c, and
diffcore-rename.c have traditionally split the work resulted in
redetecting the same renames with each and every commit being
transplanted. To address this, the last several commits have been
creating a cache of rename detection results, determining when it was
safe to use such a cache in subsequent merge operations, adding helper
functions, and so on. See the previous half dozen commit messages for
additional discussion of this optimization, particularly the message a
few commits ago entitled "add code to check for whether cached renames
can be reused". This commit finally ties all of that work together,
modifying the merge algorithm to make use of these cached renames.
For the testcases mentioned in commit 557ac0350d ("merge-ort: begin
performance work; instrument with trace2_region_* calls", 2020-10-28),
this change improves the performance as follows:
Before After
no-renames: 5.665 s ± 0.129 s 5.622 s ± 0.059 s
mega-renames: 11.435 s ± 0.158 s 10.127 s ± 0.073 s
just-one-mega: 494.2 ms ± 6.1 ms 500.3 ms ± 3.8 ms
That's a fairly small improvement, but mostly because the previous
optimizations were so effective for these particular testcases; this
optimization only kicks in when the others don't. If we undid the
basename-guided rename detection and skip-irrelevant-renames
optimizations, then we'd see that this series by itself improved
performance as follows:
Before Basename Series After Just This Series
no-renames: 13.815 s ± 0.062 s 5.697 s ± 0.080 s
mega-renames: 1799.937 s ± 0.493 s 205.709 s ± 0.457 s
Since this optimization kicks in to help accelerate cases where the
previous optimizations do not apply, this last comparison shows that
this cached-renames optimization has the potential to help significantly
in cases that don't meet the requirements for the other optimizations to
be effective.
The changes made in this optimization also lay some important groundwork
for a future optimization around having collect_merge_info() avoid
recursing into subtrees in more cases.
However, for this optimization to be effective, merge_switch_to_result()
should only be called when the rebase or cherry-pick operation has
either completed or hit a case where the user needs to resolve a
conflict or edit the result. If it is called after every commit, as
sequencer.c does, then the working tree and index are needlessly updated
with every commit and the cached metadata is tossed, defeating this
optimization. Refactoring sequencer.c to only call
merge_switch_to_result() at the end of the operation is a bigger
undertaking, and the practical benefits of this optimization will not be
realized until that work is performed. Since `test-tool fast-rebase`
only updates at the end of the operation, it was used to obtain the
timings above.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-20 06:09:41 +00:00
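As a conceptual sketch only (not the actual use_cached_pairs() implementation; the names below are hypothetical): once the first replayed commit has paid for rename detection, later commits can consult the cached old-path -> new-path mapping directly instead of re-running detection:

    #include <stddef.h>
    #include <string.h>

    struct cached_rename { const char *old_path, *new_path; };

    /* Return the cached rename destination for 'path', or 'path' itself
     * if the upstream side did not rename it. */
    static const char *redirect_path(const struct cached_rename *cache,
                                     size_t n, const char *path)
    {
        for (size_t i = 0; i < n; i++)
            if (!strcmp(cache[i].old_path, path))
                return cache[i].new_path;
        return path;
    }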
|
|
|
use_cached_pairs(opt, &renames->cached_pairs[1], &renames->pairs[1]);
|
|
|
|
use_cached_pairs(opt, &renames->cached_pairs[2], &renames->pairs[2]);
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "regular renames", opt->repo);
|
2020-12-14 16:21:31 +00:00
|
|
|
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "directory renames", opt->repo);
|
2021-01-19 19:53:40 +00:00
|
|
|
need_dir_renames =
|
|
|
|
!opt->priv->call_depth &&
|
|
|
|
(opt->detect_directory_renames == MERGE_DIRECTORY_RENAMES_TRUE ||
|
|
|
|
opt->detect_directory_renames == MERGE_DIRECTORY_RENAMES_CONFLICT);
|
|
|
|
|
|
|
|
if (need_dir_renames) {
|
|
|
|
get_provisional_directory_renames(opt, MERGE_SIDE1, &clean);
|
|
|
|
get_provisional_directory_renames(opt, MERGE_SIDE2, &clean);
|
|
|
|
handle_directory_level_conflicts(opt);
|
|
|
|
}
|
|
|
|
|
2020-12-14 16:21:31 +00:00
|
|
|
ALLOC_GROW(combined.queue,
|
|
|
|
renames->pairs[1].nr + renames->pairs[2].nr,
|
|
|
|
combined.alloc);
|
2022-07-05 01:33:42 +00:00
|
|
|
for (i = MERGE_SIDE1; i <= MERGE_SIDE2; i++) {
|
|
|
|
int other_side = 3 - i;
|
|
|
|
compute_collisions(&collisions[i],
|
|
|
|
&renames->dir_renames[other_side],
|
|
|
|
&renames->pairs[i]);
|
|
|
|
}
|
2021-01-19 19:53:45 +00:00
|
|
|
clean &= collect_renames(opt, &combined, MERGE_SIDE1,
|
2022-07-05 01:33:42 +00:00
|
|
|
collisions,
|
2021-01-19 19:53:45 +00:00
|
|
|
&renames->dir_renames[2],
|
|
|
|
&renames->dir_renames[1]);
|
|
|
|
clean &= collect_renames(opt, &combined, MERGE_SIDE2,
|
2022-07-05 01:33:42 +00:00
|
|
|
collisions,
|
2021-01-19 19:53:45 +00:00
|
|
|
&renames->dir_renames[1],
|
|
|
|
&renames->dir_renames[2]);
|
2022-07-05 01:33:42 +00:00
|
|
|
for (i = MERGE_SIDE1; i <= MERGE_SIDE2; i++)
|
|
|
|
free_collisions(&collisions[i]);
|
2021-03-20 00:03:44 +00:00
|
|
|
STABLE_QSORT(combined.queue, combined.nr, compare_pairs);
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "directory renames", opt->repo);
|
2020-12-14 16:21:31 +00:00
|
|
|
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "process renames", opt->repo);
|
2020-12-14 16:21:31 +00:00
|
|
|
clean &= process_renames(opt, &combined);
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "process renames", opt->repo);
|
2020-12-14 16:21:31 +00:00
|
|
|
|
2021-03-11 00:38:30 +00:00
|
|
|
goto simple_cleanup; /* collect_renames() handles some of the cleanup */
|
|
|
|
|
|
|
|
cleanup:
|
|
|
|
/*
|
|
|
|
* Free now-unneeded filepairs, which would normally have been handled
|
|
|
|
* in collect_renames(), but we skipped that code.
|
|
|
|
*/
|
|
|
|
for (s = MERGE_SIDE1; s <= MERGE_SIDE2; s++) {
|
|
|
|
struct diff_queue_struct *side_pairs;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
side_pairs = &renames->pairs[s];
|
|
|
|
for (i = 0; i < side_pairs->nr; ++i) {
|
|
|
|
struct diff_filepair *p = side_pairs->queue[i];
|
2021-07-31 17:27:38 +00:00
|
|
|
pool_diff_free_filepair(&opt->priv->pool, p);
|
2021-03-11 00:38:30 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
simple_cleanup:
|
2020-12-14 16:21:31 +00:00
|
|
|
/* Free memory for renames->pairs[] and combined */
|
|
|
|
for (s = MERGE_SIDE1; s <= MERGE_SIDE2; s++) {
|
|
|
|
free(renames->pairs[s].queue);
|
|
|
|
DIFF_QUEUE_CLEAR(&renames->pairs[s]);
|
|
|
|
}
|
merge-ort: fix small memory leak in detect_and_process_renames()
detect_and_process_renames() detects renames on both sides of history
and then combines these into a single diff_queue_struct. The combined
diff_queue_struct needs to be able to hold the renames found on either
side, and since it knows the (maximum) size it needs, it pre-emptively
grows the array to the appropriate size:
ALLOC_GROW(combined.queue,
renames->pairs[1].nr + renames->pairs[2].nr,
combined.alloc);
It then collects the items from each side:
collect_renames(opt, &combined, MERGE_SIDE1, ...)
collect_renames(opt, &combined, MERGE_SIDE2, ...)
Note, though, that collect_renames() sometimes determines that some
pairs are unnecessary and does not include them in the combined array.
When it is done, detect_and_process_renames() frees this memory:
if (combined.nr) {
...
free(combined.queue);
}
The problem is that sometimes even when there are pairs, none of them
are necessary. Instead of checking combined.nr, just remove the
if-check; free() knows to skip NULL pointers. This change fixes the
following memory leak, as reported by valgrind:
==PID== 192 bytes in 1 blocks are definitely lost in loss record 107 of 134
==PID== at 0xADDRESS: malloc
==PID== by 0xADDRESS: realloc
==PID== by 0xADDRESS: xrealloc (wrapper.c:126)
==PID== by 0xADDRESS: detect_and_process_renames (merge-ort.c:3134)
==PID== by 0xADDRESS: merge_ort_nonrecursive_internal (merge-ort.c:4610)
==PID== by 0xADDRESS: merge_ort_internal (merge-ort.c:4709)
==PID== by 0xADDRESS: merge_incore_recursive (merge-ort.c:4760)
==PID== by 0xADDRESS: merge_ort_recursive (merge-ort-wrappers.c:57)
==PID== by 0xADDRESS: try_merge_strategy (merge.c:753)
==PID== by 0xADDRESS: cmd_merge (merge.c:1676)
==PID== by 0xADDRESS: run_builtin (git.c:461)
==PID== by 0xADDRESS: handle_builtin (git.c:713)
==PID== by 0xADDRESS: run_argv (git.c:780)
==PID== by 0xADDRESS: cmd_main (git.c:911)
==PID== by 0xADDRESS: main (common-main.c:52)
Reported-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-02-20 01:29:50 +00:00
|
|
|
for (i = 0; i < combined.nr; i++)
|
|
|
|
pool_diff_free_filepair(&opt->priv->pool, combined.queue[i]);
|
|
|
|
free(combined.queue);
|
2020-12-13 08:04:09 +00:00
|
|
|
|
|
|
|
return clean;
|
|
|
|
}
|
|
|
|
|
2020-12-03 15:59:44 +00:00
|
|
|
/*** Function Grouping: functions related to process_entries() ***/
|
|
|
|
|
merge-ort: replace string_list_df_name_compare with faster alternative
Gathering accumulated times from trace2 output on the mega-renames
testcase, I saw the following timings (where I'm only showing a few
lines to highlight the portions of interest):
10.120 : label:incore_nonrecursive
4.462 : ..label:process_entries
3.143 : ....label:process_entries setup
2.988 : ......label:plist special sort
1.305 : ....label:processing
2.604 : ..label:collect_merge_info
2.018 : ..label:merge_start
1.018 : ..label:renames
In the above output, note that the 4.462 seconds for process_entries was
split as 3.143 seconds for "process_entries setup" and 1.305 seconds for
"processing" (and a little time for other stuff removed from the
highlight). Most of the "process_entries setup" time was spent on
"plist special sort" which corresponds to the following code:
trace2_region_enter("merge", "plist special sort", opt->repo);
plist.cmp = string_list_df_name_compare;
string_list_sort(&plist);
trace2_region_leave("merge", "plist special sort", opt->repo);
In other words, in a merge strategy that would be invoked by passing
"-sort" to either rebase or merge, sorting an array takes more time than
anything else. Serves me right for naming my merge strategy this way.
Rewrite the comparison function in a way that does not require finding
out the lengths of the strings when comparing them. While at it, tweak
the code for our specific case -- no need to handle a variety of modes,
for example. The combination of these changes reduced the time spent in
"plist special sort" by ~25% in the mega-renames case.
For the testcases mentioned in commit 557ac0350d ("merge-ort: begin
performance work; instrument with trace2_region_* calls", 2020-10-28),
this change improves the performance as follows:
Before After
no-renames: 5.622 s ± 0.059 s 5.235 s ± 0.042 s
mega-renames: 10.127 s ± 0.073 s 9.419 s ± 0.107 s
just-one-mega: 500.3 ms ± 3.8 ms 480.1 ms ± 3.9 ms
Signed-off-by: Elijah Newren <newren@gmail.com>
Reviewed-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-06-08 16:11:39 +00:00
|
|
|
static int sort_dirs_next_to_their_children(const char *one, const char *two)
|
2020-12-13 08:04:19 +00:00
|
|
|
{
|
merge-ort: replace string_list_df_name_compare with faster alternative
2021-06-08 16:11:39 +00:00
|
|
|
unsigned char c1, c2;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Here we only care that entries for directories appear adjacent
|
|
|
|
* to and before files underneath the directory. We can achieve
|
|
|
|
* that by pretending to add a trailing slash to every file and
|
|
|
|
* then sorting. In other words, we do not want the natural
|
|
|
|
* sorting of
|
|
|
|
* foo
|
|
|
|
* foo.txt
|
|
|
|
* foo/bar
|
|
|
|
* Instead, we want "foo" to sort as though it were "foo/", so that
|
|
|
|
* we instead get
|
|
|
|
* foo.txt
|
|
|
|
* foo
|
|
|
|
* foo/bar
|
|
|
|
* To achieve this, we basically implement our own strcmp, except that
|
|
|
|
* if we get to the end of either string instead of comparing NUL to
|
|
|
|
* another character, we compare '/' to it.
|
|
|
|
*
|
|
|
|
* If this unusual "sort as though '/' were appended" perplexes
|
|
|
|
* you, perhaps it will help to note that this is not the final
|
|
|
|
* sort. write_tree() will sort again without the trailing slash
|
|
|
|
* magic, but just on paths immediately under a given tree.
|
2020-12-13 08:04:19 +00:00
|
|
|
*
|
merge-ort: replace string_list_df_name_compare with faster alternative
2021-06-08 16:11:39 +00:00
|
|
|
* The reason to not use df_name_compare directly was that it was
|
|
|
|
* just too expensive (we don't have the string lengths handy), so
|
|
|
|
* it was reimplemented.
|
2020-12-13 08:04:19 +00:00
|
|
|
*/
|
merge-ort: replace string_list_df_name_compare with faster alternative
2021-06-08 16:11:39 +00:00
|
|
|
|
2020-12-13 08:04:19 +00:00
|
|
|
/*
|
merge-ort: replace string_list_df_name_compare with faster alternative
2021-06-08 16:11:39 +00:00
|
|
|
* NOTE: This function will never be called with two equal strings,
|
|
|
|
* because it is used to sort the keys of a strmap, and strmaps have
|
|
|
|
* unique keys by construction. That simplifies our c1==c2 handling
|
|
|
|
* below.
|
2020-12-13 08:04:19 +00:00
|
|
|
*/
|
merge-ort: replace string_list_df_name_compare with faster alternative
2021-06-08 16:11:39 +00:00
|
|
|
|
|
|
|
while (*one && (*one == *two)) {
|
|
|
|
one++;
|
|
|
|
two++;
|
|
|
|
}
|
|
|
|
|
|
|
|
c1 = *one ? *one : '/';
|
|
|
|
c2 = *two ? *two : '/';
|
|
|
|
|
|
|
|
if (c1 == c2) {
|
|
|
|
/* Getting here means one is a leading directory of the other */
|
|
|
|
return (*one) ? 1 : -1;
|
|
|
|
} else
|
|
|
|
return c1 - c2;
|
2020-12-13 08:04:19 +00:00
|
|
|
}
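To make the effect of the comparator above concrete, here is a minimal
standalone sketch (not part of merge-ort.c; the renamed copy and the tiny
driver below are purely illustrative) contrasting its ordering with plain
strcmp():

    #include <stdio.h>

    /* Copy of the comparator above, renamed for this standalone sketch. */
    static int cmp_dirs_next_to_children(const char *one, const char *two)
    {
            unsigned char c1, c2;

            while (*one && (*one == *two)) {
                    one++;
                    two++;
            }
            /* Pretend each string gains a trailing '/' when it runs out. */
            c1 = *one ? *one : '/';
            c2 = *two ? *two : '/';
            if (c1 == c2)
                    return (*one) ? 1 : -1;
            return c1 - c2;
    }

    int main(void)
    {
            /*
             * strcmp() order would be:  foo, foo.txt, foo/bar
             * this comparator yields:   foo.txt, foo, foo/bar
             * keeping a directory adjacent to and before its children.
             */
            printf("%d\n", cmp_dirs_next_to_children("foo", "foo.txt"));     /* positive */
            printf("%d\n", cmp_dirs_next_to_children("foo", "foo/bar"));     /* negative */
            printf("%d\n", cmp_dirs_next_to_children("foo.txt", "foo/bar")); /* negative */
            return 0;
    }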
|
|
|
|
|
2021-03-20 00:03:47 +00:00
|
|
|
static int read_oid_strbuf(struct merge_options *opt,
|
|
|
|
const struct object_id *oid,
|
|
|
|
struct strbuf *dst)
|
|
|
|
{
|
|
|
|
void *buf;
|
|
|
|
enum object_type type;
|
|
|
|
unsigned long size;
|
2023-03-28 13:58:50 +00:00
|
|
|
buf = repo_read_object_file(the_repository, oid, &type, &size);
|
2021-03-20 00:03:47 +00:00
|
|
|
if (!buf)
|
|
|
|
return err(opt, _("cannot read object %s"), oid_to_hex(oid));
|
|
|
|
if (type != OBJ_BLOB) {
|
|
|
|
free(buf);
|
|
|
|
return err(opt, _("object %s is not a blob"), oid_to_hex(oid));
|
|
|
|
}
|
|
|
|
strbuf_attach(dst, buf, size, size + 1);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static int blob_unchanged(struct merge_options *opt,
|
|
|
|
const struct version_info *base,
|
|
|
|
const struct version_info *side,
|
|
|
|
const char *path)
|
|
|
|
{
|
|
|
|
struct strbuf basebuf = STRBUF_INIT;
|
|
|
|
struct strbuf sidebuf = STRBUF_INIT;
|
|
|
|
int ret = 0; /* assume changed for safety */
|
2021-04-30 04:50:26 +00:00
|
|
|
struct index_state *idx = &opt->priv->attr_index;
|
2021-03-20 00:03:47 +00:00
|
|
|
|
|
|
|
if (!idx->initialized)
|
|
|
|
initialize_attr_index(opt);
|
|
|
|
|
|
|
|
if (base->mode != side->mode)
|
|
|
|
return 0;
|
|
|
|
if (oideq(&base->oid, &side->oid))
|
|
|
|
return 1;
|
|
|
|
|
|
|
|
if (read_oid_strbuf(opt, &base->oid, &basebuf) ||
|
|
|
|
read_oid_strbuf(opt, &side->oid, &sidebuf))
|
|
|
|
goto error_return;
|
|
|
|
/*
|
|
|
|
* Note: binary | is used so that both renormalizations are
|
|
|
|
* performed. Comparison can be skipped if both files are
|
|
|
|
* unchanged since their sha1s have already been compared.
|
|
|
|
*/
|
|
|
|
if (renormalize_buffer(idx, path, basebuf.buf, basebuf.len, &basebuf) |
|
|
|
|
renormalize_buffer(idx, path, sidebuf.buf, sidebuf.len, &sidebuf))
|
|
|
|
ret = (basebuf.len == sidebuf.len &&
|
|
|
|
!memcmp(basebuf.buf, sidebuf.buf, basebuf.len));
|
|
|
|
|
|
|
|
error_return:
|
|
|
|
strbuf_release(&basebuf);
|
|
|
|
strbuf_release(&sidebuf);
|
|
|
|
return ret;
|
|
|
|
}
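The point of the renormalization above is that two blobs with different
object ids can still be "unchanged" once attribute-driven conversion (for
example CRLF -> LF) is applied to both sides. Below is a minimal
standalone sketch of that idea; the toy_renormalize() helper is a made-up
stand-in for renormalize_buffer(), which in git applies the conversion
rules configured for the path:

    #include <stdio.h>
    #include <string.h>

    /* Toy stand-in for renormalize_buffer(): drop '\r' before '\n'. */
    static size_t toy_renormalize(const char *src, size_t len, char *dst)
    {
            size_t i, out = 0;

            for (i = 0; i < len; i++) {
                    if (src[i] == '\r' && i + 1 < len && src[i + 1] == '\n')
                            continue;
                    dst[out++] = src[i];
            }
            return out;
    }

    int main(void)
    {
            const char *base = "hello\r\nworld\r\n"; /* committed with CRLF */
            const char *side = "hello\nworld\n";     /* same content, LF only */
            char b1[64], b2[64];
            size_t l1 = toy_renormalize(base, strlen(base), b1);
            size_t l2 = toy_renormalize(side, strlen(side), b2);

            /* Different oids, yet equal after renormalization => unchanged. */
            printf("unchanged = %d\n", l1 == l2 && !memcmp(b1, b2, l1));
            return 0;
    }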
|
|
|
|
|
2020-12-13 08:04:20 +00:00
|
|
|
struct directory_versions {
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
Our order for processing of entries means that if we have a tree of
files that looks like
Makefile
src/moduleA/foo.c
src/moduleA/bar.c
src/moduleB/baz.c
src/moduleB/umm.c
tokens.txt
Then we will process paths in the order of the leftmost column below. I
have added two additional columns that help explain the algorithm that
follows; the 2nd column is there to remind us we have oid & mode info we
are tracking for each of these paths (which differs between the paths,
though I'm not representing that well here), and the third column annotates
the parent directory of the entry:
tokens.txt <version_info> ""
src/moduleB/umm.c <version_info> src/moduleB
src/moduleB/baz.c <version_info> src/moduleB
src/moduleB <version_info> src
src/moduleA/foo.c <version_info> src/moduleA
src/moduleA/bar.c <version_info> src/moduleA
src/moduleA <version_info> src
src <version_info> ""
Makefile <version_info> ""
When the parent directory changes, if it's a subdirectory of the previous
parent directory (e.g. "" -> src/moduleB) then we can just keep appending.
If the parent directory differs from the previous parent directory and is
not a subdirectory, then we should process that directory.
So, for example, when we get to this point:
tokens.txt <version_info> ""
src/moduleB/umm.c <version_info> src/moduleB
src/moduleB/baz.c <version_info> src/moduleB
and note that the next entry (src/moduleB) has a different parent than
the last one that isn't a subdirectory, we should write out a tree for it
100644 blob <HASH> umm.c
100644 blob <HASH> baz.c
then pop all the entries under that directory while recording the new
hash for that directory, leaving us with
tokens.txt <version_info> ""
src/moduleB <new version_info> src
This process repeats until at the end we get to
tokens.txt <version_info> ""
src <new version_info> ""
Makefile <version_info> ""
and then we can write out the toplevel tree. Since we potentially have
entries in our string_list corresponding to multiple different toplevel
directories, e.g. a slightly different repository might have:
whizbang.txt <version_info> ""
tokens.txt <version_info> ""
src/moduleD <new version_info> src
src/moduleC <new version_info> src
src/moduleB <new version_info> src
src/moduleA/foo.c <version_info> src/moduleA
src/moduleA/bar.c <version_info> src/moduleA
When src/moduleA is popped off, we need to know that the "last
directory" reverts back to src, and how many entries in our string_list
are associated with that parent directory. So I use an auxiliary
offsets string_list which would have (parent_directory,offset)
information of the form
"" 0
src 2
src/moduleA 5
Whenever I write out a tree for a subdirectory, I set versions.nr to
the final offset value and then decrement offsets.nr...and then add
an entry to versions with a hash for the new directory.
The idea is relatively simple; there's just a lot of accounting to
implement this.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-13 08:04:22 +00:00
|
|
|
/*
|
|
|
|
* versions: list of (basename -> version_info)
|
|
|
|
*
|
|
|
|
* The basenames are in reverse lexicographic order of full pathnames,
|
|
|
|
* as processed in process_entries(). This puts all entries within
|
|
|
|
* a directory together, and covers the directory itself after
|
|
|
|
* everything within it, allowing us to write subtrees before needing
|
|
|
|
* to record information for the tree itself.
|
|
|
|
*/
|
2020-12-13 08:04:20 +00:00
|
|
|
struct string_list versions;
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
2020-12-13 08:04:22 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* offsets: list of (full relative path directories -> integer offsets)
|
|
|
|
*
|
|
|
|
* Since versions contains basenames from files in multiple different
|
|
|
|
* directories, we need to know which entries in versions correspond
|
|
|
|
* to which directories. Values of e.g.
|
|
|
|
* "" 0
|
|
|
|
* src 2
|
|
|
|
* src/moduleA 5
|
|
|
|
* Would mean that entries 0-1 of versions are files in the toplevel
|
|
|
|
* directory, entries 2-4 are files under src/, and the remaining
|
|
|
|
* entries starting at index 5 are files under src/moduleA/.
|
|
|
|
*/
|
|
|
|
struct string_list offsets;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* last_directory: directory that the previously processed file was found in
|
|
|
|
*
|
|
|
|
* last_directory starts NULL, but records the directory in which the
|
|
|
|
* previous file was found. As soon as
|
|
|
|
* directory(current_file) != last_directory
|
|
|
|
* then we need to start updating accounting in versions & offsets.
|
|
|
|
* Note that last_directory is always the last path in "offsets" (or
|
|
|
|
* NULL if "offsets" is empty) so this exists just for quick access.
|
|
|
|
*/
|
|
|
|
const char *last_directory;
|
|
|
|
|
|
|
|
/* last_directory_len: cached computation of strlen(last_directory) */
|
|
|
|
unsigned last_directory_len;
|
2020-12-13 08:04:20 +00:00
|
|
|
};
|
|
|
|
|
2020-12-13 08:04:21 +00:00
|
|
|
static int tree_entry_order(const void *a_, const void *b_)
|
|
|
|
{
|
|
|
|
const struct string_list_item *a = a_;
|
|
|
|
const struct string_list_item *b = b_;
|
|
|
|
|
|
|
|
const struct merged_info *ami = a->util;
|
|
|
|
const struct merged_info *bmi = b->util;
|
|
|
|
return base_name_compare(a->string, strlen(a->string), ami->result.mode,
|
|
|
|
b->string, strlen(b->string), bmi->result.mode);
|
|
|
|
}
|
|
|
|
|
2022-09-28 07:29:21 +00:00
|
|
|
static int write_tree(struct object_id *result_oid,
|
|
|
|
struct string_list *versions,
|
|
|
|
unsigned int offset,
|
|
|
|
size_t hash_size)
|
2020-12-13 08:04:21 +00:00
|
|
|
{
|
|
|
|
size_t maxlen = 0, extra;
|
2021-04-11 11:05:06 +00:00
|
|
|
unsigned int nr;
|
2020-12-13 08:04:21 +00:00
|
|
|
struct strbuf buf = STRBUF_INIT;
|
2022-09-28 07:29:21 +00:00
|
|
|
int i, ret = 0;
|
2020-12-13 08:04:21 +00:00
|
|
|
|
2021-04-11 11:05:06 +00:00
|
|
|
assert(offset <= versions->nr);
|
|
|
|
nr = versions->nr - offset;
|
|
|
|
if (versions->nr)
|
2021-04-16 20:53:34 +00:00
|
|
|
/* No need for STABLE_QSORT -- filenames must be unique */
|
2021-04-11 11:05:06 +00:00
|
|
|
QSORT(versions->items + offset, nr, tree_entry_order);
|
2020-12-13 08:04:21 +00:00
|
|
|
|
|
|
|
/* Pre-allocate some space in buf */
|
|
|
|
extra = hash_size + 8; /* 8: 6 for mode, 1 for space, 1 for NUL char */
|
|
|
|
for (i = 0; i < nr; i++) {
|
|
|
|
maxlen += strlen(versions->items[offset+i].string) + extra;
|
|
|
|
}
|
|
|
|
strbuf_grow(&buf, maxlen);
|
|
|
|
|
|
|
|
/* Write each entry out to buf */
|
|
|
|
for (i = 0; i < nr; i++) {
|
|
|
|
struct merged_info *mi = versions->items[offset+i].util;
|
|
|
|
struct version_info *ri = &mi->result;
|
|
|
|
strbuf_addf(&buf, "%o %s%c",
|
|
|
|
ri->mode,
|
|
|
|
versions->items[offset+i].string, '\0');
|
|
|
|
strbuf_add(&buf, ri->oid.hash, hash_size);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Write this object file out, and record in result_oid */
|
2022-09-28 07:29:21 +00:00
|
|
|
if (write_object_file(buf.buf, buf.len, OBJ_TREE, result_oid))
|
|
|
|
ret = -1;
|
2020-12-13 08:04:21 +00:00
|
|
|
strbuf_release(&buf);
|
2022-09-28 07:29:21 +00:00
|
|
|
return ret;
|
2020-12-13 08:04:21 +00:00
|
|
|
}
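For reference, the buffer assembled above follows git's raw tree-entry
layout: the mode in octal ASCII, a space, the entry name, a NUL byte, and
then the hash in raw binary (not hex) form; write_object_file() then adds
the usual "tree <size>" object header and stores the result. A minimal
standalone sketch of encoding one such entry (mode, name, and hash bytes
here are made-up placeholder values):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            unsigned mode = 0100644;                     /* regular blob */
            const char *name = "baz.c";
            unsigned char raw_hash[20] = { 0xab, 0xcd }; /* placeholder bytes */
            char entry[64];
            size_t len;

            /* "<octal mode> <name>" plus the terminating NUL... */
            len = snprintf(entry, sizeof(entry), "%o %s", mode, name) + 1;
            /* ...followed immediately by the raw (binary, not hex) hash. */
            memcpy(entry + len, raw_hash, sizeof(raw_hash));
            len += sizeof(raw_hash);

            printf("one tree entry occupies %zu bytes\n", len);
            return 0;
    }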
|
|
|
|
|
2020-12-13 08:04:20 +00:00
|
|
|
static void record_entry_for_tree(struct directory_versions *dir_metadata,
|
|
|
|
const char *path,
|
|
|
|
struct merged_info *mi)
|
|
|
|
{
|
|
|
|
const char *basename;
|
|
|
|
|
|
|
|
if (mi->is_null)
|
|
|
|
/* nothing to record */
|
|
|
|
return;
|
|
|
|
|
|
|
|
basename = path + mi->basename_offset;
|
|
|
|
assert(strchr(basename, '/') == NULL);
|
|
|
|
string_list_append(&dir_metadata->versions,
|
|
|
|
basename)->util = &mi->result;
|
|
|
|
}
|
|
|
|
|
2022-09-28 07:29:21 +00:00
|
|
|
static int write_completed_directory(struct merge_options *opt,
|
|
|
|
const char *new_directory_name,
|
|
|
|
struct directory_versions *info)
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
2020-12-13 08:04:22 +00:00
|
|
|
{
|
|
|
|
const char *prev_dir;
|
|
|
|
struct merged_info *dir_info = NULL;
|
2022-09-28 07:29:21 +00:00
|
|
|
unsigned int offset;
int ret = 0;
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
2020-12-13 08:04:22 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Some explanation of info->versions and info->offsets...
|
|
|
|
*
|
|
|
|
* process_entries() iterates over all relevant files AND
|
|
|
|
* directories in reverse lexicographic order, and calls this
|
|
|
|
* function. Thus, an example of the paths that process_entries()
|
|
|
|
* could operate on (along with the directories for those paths
|
|
|
|
* being shown) is:
|
|
|
|
*
|
|
|
|
* xtract.c ""
|
|
|
|
* tokens.txt ""
|
|
|
|
* src/moduleB/umm.c src/moduleB
|
|
|
|
* src/moduleB/stuff.h src/moduleB
|
|
|
|
* src/moduleB/baz.c src/moduleB
|
|
|
|
* src/moduleB src
|
|
|
|
* src/moduleA/foo.c src/moduleA
|
|
|
|
* src/moduleA/bar.c src/moduleA
|
|
|
|
* src/moduleA src
|
|
|
|
* src ""
|
|
|
|
* Makefile ""
|
|
|
|
*
|
|
|
|
* info->versions:
|
|
|
|
*
|
|
|
|
* always contains the unprocessed entries and their
|
|
|
|
* version_info information. For example, after the first five
|
|
|
|
* entries above, info->versions would be:
|
|
|
|
*
|
|
|
|
* xtract.c <xtract.c's version_info>
|
|
|
|
* tokens.txt <tokens.txt's version_info>
|
|
|
|
* umm.c <src/moduleB/umm.c's version_info>
|
|
|
|
* stuff.h <src/moduleB/stuff.h's version_info>
|
|
|
|
* baz.c <src/moduleB/baz.c's version_info>
|
|
|
|
*
|
|
|
|
* Once a subdirectory is completed we remove the entries in
|
|
|
|
* that subdirectory from info->versions, writing it as a tree
|
|
|
|
* (write_tree()). Thus, as soon as we get to src/moduleB,
|
|
|
|
* info->versions would be updated to
|
|
|
|
*
|
|
|
|
* xtract.c <xtract.c's version_info>
|
|
|
|
* tokens.txt <tokens.txt's version_info>
|
|
|
|
* moduleB <src/moduleB's version_info>
|
|
|
|
*
|
|
|
|
* info->offsets:
|
|
|
|
*
|
|
|
|
* helps us track which entries in info->versions correspond to
|
|
|
|
* which directories. When we are N directories deep (e.g. 4
|
|
|
|
* for src/modA/submod/subdir/), we have up to N+1 unprocessed
|
|
|
|
* directories (+1 because of toplevel dir). Corresponding to
|
|
|
|
* the info->versions example above, after processing five entries
|
|
|
|
* info->offsets will be:
|
|
|
|
*
|
|
|
|
* "" 0
|
|
|
|
* src/moduleB 2
|
|
|
|
*
|
|
|
|
* which is used to know that xtract.c & tokens.txt are from the
|
|
|
|
* toplevel directory, while umm.c & stuff.h & baz.c are from the
|
|
|
|
* src/moduleB directory. Again, following the example above,
|
|
|
|
* once we need to process src/moduleB, then info->offsets is
|
|
|
|
* updated to
|
|
|
|
*
|
|
|
|
* "" 0
|
|
|
|
* src 2
|
|
|
|
*
|
|
|
|
* which says that moduleB (and only moduleB so far) is in the
|
|
|
|
* src directory.
|
|
|
|
*
|
|
|
|
* One unique thing to note about info->offsets here is that
|
|
|
|
* "src" was not added to info->offsets until there was a path
|
|
|
|
* (a file OR directory) immediately below src/ that got
|
|
|
|
* processed.
|
|
|
|
*
|
|
|
|
* Since process_entry() just appends new entries to info->versions,
|
|
|
|
* write_completed_directory() only needs to do work if the next path
|
|
|
|
* is in a directory that is different than the last directory found
|
|
|
|
* in info->offsets.
|
|
|
|
*/
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If we are working with the same directory as the last entry, there
|
|
|
|
* is no work to do. (See comments above the directory_name member of
|
|
|
|
* struct merged_info for why we can use pointer comparison instead of
|
|
|
|
* strcmp here.)
|
|
|
|
*/
|
|
|
|
if (new_directory_name == info->last_directory)
|
2022-09-28 07:29:21 +00:00
|
|
|
return 0;
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
2020-12-13 08:04:22 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* If we are just starting (last_directory is NULL), or last_directory
|
|
|
|
* is a prefix of the current directory, then we can just update
|
|
|
|
* info->offsets to record the offset where we started this directory
|
|
|
|
* and update last_directory to have quick access to it.
|
|
|
|
*/
|
|
|
|
if (info->last_directory == NULL ||
|
|
|
|
!strncmp(new_directory_name, info->last_directory,
|
|
|
|
info->last_directory_len)) {
|
|
|
|
uintptr_t offset = info->versions.nr;
|
|
|
|
|
|
|
|
info->last_directory = new_directory_name;
|
|
|
|
info->last_directory_len = strlen(info->last_directory);
|
|
|
|
/*
|
|
|
|
* Record the offset into info->versions where we will
|
|
|
|
* start recording basenames of paths found within
|
|
|
|
* new_directory_name.
|
|
|
|
*/
|
|
|
|
string_list_append(&info->offsets,
|
|
|
|
info->last_directory)->util = (void*)offset;
|
2022-09-28 07:29:21 +00:00
|
|
|
return 0;
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
2020-12-13 08:04:22 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* The next entry that will be processed will be within
|
|
|
|
* new_directory_name. Since at this point we know that
|
|
|
|
* new_directory_name is within a different directory than
|
|
|
|
* info->last_directory, we have all entries for info->last_directory
|
|
|
|
* in info->versions and we need to create a tree object for them.
|
|
|
|
*/
|
|
|
|
dir_info = strmap_get(&opt->priv->paths, info->last_directory);
|
|
|
|
assert(dir_info);
|
|
|
|
offset = (uintptr_t)info->offsets.items[info->offsets.nr-1].util;
|
|
|
|
if (offset == info->versions.nr) {
|
|
|
|
/*
|
|
|
|
* Actually, we don't need to create a tree object in this
|
|
|
|
* case. Whenever all files within a directory disappear
|
|
|
|
* during the merge (e.g. unmodified on one side and
|
|
|
|
* deleted on the other, or files were renamed elsewhere),
|
|
|
|
* then we get here and the directory itself needs to be
|
|
|
|
* omitted from its parent tree as well.
|
|
|
|
*/
|
|
|
|
dir_info->is_null = 1;
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* Write out the tree to the git object directory, and also
|
|
|
|
* record the mode and oid in dir_info->result.
|
|
|
|
*/
|
|
|
|
dir_info->is_null = 0;
|
|
|
|
dir_info->result.mode = S_IFDIR;
|
2022-09-28 07:29:21 +00:00
|
|
|
if (write_tree(&dir_info->result.oid, &info->versions, offset,
|
|
|
|
opt->repo->hash_algo->rawsz) < 0)
|
|
|
|
ret = -1;
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
2020-12-13 08:04:22 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* We've now used several entries from info->versions and one entry
|
|
|
|
* from info->offsets, so we get rid of those values.
|
|
|
|
*/
|
|
|
|
info->offsets.nr--;
|
|
|
|
info->versions.nr = offset;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Now we've taken care of the completed directory, but we need to
|
|
|
|
* prepare things since future entries will be in
|
|
|
|
* new_directory_name. (In particular, process_entry() will be
|
|
|
|
* appending new entries to info->versions.) So, we need to make
|
|
|
|
* sure new_directory_name is the last entry in info->offsets.
|
|
|
|
*/
|
|
|
|
prev_dir = info->offsets.nr == 0 ? NULL :
|
|
|
|
info->offsets.items[info->offsets.nr-1].string;
|
|
|
|
if (new_directory_name != prev_dir) {
|
|
|
|
uintptr_t c = info->versions.nr;
|
|
|
|
string_list_append(&info->offsets,
|
|
|
|
new_directory_name)->util = (void*)c;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* And, of course, we need to update last_directory to match. */
|
|
|
|
info->last_directory = new_directory_name;
|
|
|
|
info->last_directory_len = strlen(info->last_directory);
|
2022-09-28 07:29:21 +00:00
|
|
|
|
|
|
|
return ret;
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
2020-12-13 08:04:22 +00:00
|
|
|
}
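One implementation detail of the function above worth spelling out: the
per-directory offsets are not separately allocated; each integer is stored
directly in the string_list_item's util pointer via uintptr_t casts and
cast back when the directory is popped. A tiny standalone sketch of just
that trick (struct fake_item is a made-up stand-in for git's
string_list_item):

    #include <stdint.h>
    #include <stdio.h>

    struct fake_item { const char *string; void *util; };

    int main(void)
    {
            struct fake_item entry;
            uintptr_t offset = 2;    /* index into a hypothetical versions list */

            /* Store: widen the integer to pointer size and stash it in util. */
            entry.string = "src";
            entry.util = (void *)offset;

            /* Load: cast back to an integer; the pointer is never dereferenced. */
            printf("entries for '%s' start at index %ju\n",
                   entry.string, (uintmax_t)(uintptr_t)entry.util);
            return 0;
    }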
|
|
|
|
|
2020-12-13 08:04:18 +00:00
|
|
|
/* Per entry merge function */
|
merge-ort: return early when failing to write a blob
In the previous commit, we fixed a segmentation fault when a tree object
could not be written.
However, before the tree object is written, `merge-ort` wants to write
out a blob object (except in cases where the merge results in a blob
that already exists in the database). And this can fail, too, but we
ignore that write failure so far.
Let's pay close attention and error out early if the blob could not be
written. This reduces the error output of t4301.25 ("merge-ort fails
gracefully in a read-only repository") from:
error: insufficient permission for adding an object to repository database ./objects
error: error: Unable to add numbers to database
error: insufficient permission for adding an object to repository database ./objects
error: error: Unable to add greeting to database
error: insufficient permission for adding an object to repository database ./objects
fatal: failure to merge
to:
error: insufficient permission for adding an object to repository database ./objects
error: error: Unable to add numbers to database
fatal: failure to merge
This is _not_ just a cosmetic change: one might assume that the
operation would have failed anyway at the point when the new tree
object is written (and the corresponding tree object _will_ be new if it
contains a blob that is new), but that is not so: As pointed out by
Elijah Newren, when Git has previously been allowed to add loose objects
via `sudo` calls, it is very possible that the blob object cannot be
written (because the corresponding `.git/objects/??/` directory may be
owned by `root`) but the tree object can be written (because the
corresponding objects directory is owned by the current user). This
would result in a corrupt repository because it is missing the blob
object, and with this patch we prevent that.
Note: This patch adjusts two variable declarations from `unsigned` to
`int` because their purpose is to hold the return value of
`handle_content_merge()`, which is of type `int`. The existing users of
those variables are only interested in whether that variable is zero or
non-zero, so this type change does not affect the existing code.
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-09-28 07:29:22 +00:00
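The unsigned-to-int note above is easy to trip over: a negative error return simply cannot be detected through an unsigned variable. A minimal, self-contained illustration (merge_fn() is a hypothetical stand-in for handle_content_merge()):

#include <stdio.h>

/* Hypothetical stand-in for handle_content_merge(): returns -1 on error. */
static int merge_fn(void)
{
	return -1;
}

int main(void)
{
	unsigned u = merge_fn();  /* -1 wraps around to a huge positive value */
	int s = merge_fn();

	if (u < 0)                /* always false: an unsigned value is never negative */
		puts("unsigned: error detected");
	if (s < 0)                /* correctly detects the error */
		puts("int: error detected");
	return 0;
}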
|
|
|
static int process_entry(struct merge_options *opt,
|
|
|
|
const char *path,
|
|
|
|
struct conflict_info *ci,
|
|
|
|
struct directory_versions *dir_metadata)
|
2020-12-13 08:04:18 +00:00
|
|
|
{
|
2021-01-01 02:34:40 +00:00
|
|
|
int df_file_index = 0;
|
|
|
|
|
2020-12-13 08:04:18 +00:00
|
|
|
VERIFY_CI(ci);
|
|
|
|
assert(ci->filemask >= 0 && ci->filemask <= 7);
|
|
|
|
/* ci->match_mask == 7 was handled in collect_merge_info_callback() */
|
|
|
|
assert(ci->match_mask == 0 || ci->match_mask == 3 ||
|
|
|
|
ci->match_mask == 5 || ci->match_mask == 6);
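/*
 * Reminder of the mask encoding used throughout: bit 0 (value 1) refers
 * to the merge base, bit 1 (value 2) to side 1, and bit 2 (value 4) to
 * side 2.  So filemask == 6 means the path exists on both sides but not
 * in the base, filemask == 1 means it exists only in the base (deleted
 * on both sides), and match_mask records which of the present stages
 * are identical to each other.
 */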
|
|
|
|
|
2020-12-13 08:04:20 +00:00
|
|
|
if (ci->dirmask) {
|
|
|
|
record_entry_for_tree(dir_metadata, path, &ci->merged);
|
|
|
|
if (ci->filemask == 0)
|
|
|
|
/* nothing else to handle */
|
2022-09-28 07:29:22 +00:00
|
|
|
return 0;
|
2020-12-13 08:04:20 +00:00
|
|
|
assert(ci->df_conflict);
|
|
|
|
}
|
|
|
|
|
2021-01-01 02:34:39 +00:00
|
|
|
if (ci->df_conflict && ci->merged.result.mode == 0) {
|
|
|
|
int i;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* directory no longer in the way, but we do have a file we
|
|
|
|
* need to place here so we need to clean away the "directory
|
|
|
|
* merges to nothing" result.
|
|
|
|
*/
|
|
|
|
ci->df_conflict = 0;
|
|
|
|
assert(ci->filemask != 0);
|
|
|
|
ci->merged.clean = 0;
|
|
|
|
ci->merged.is_null = 0;
|
|
|
|
/* and we want to zero out any directory-related entries */
|
|
|
|
ci->match_mask = (ci->match_mask & ~ci->dirmask);
|
|
|
|
ci->dirmask = 0;
|
|
|
|
for (i = MERGE_BASE; i <= MERGE_SIDE2; i++) {
|
|
|
|
if (ci->filemask & (1 << i))
|
|
|
|
continue;
|
|
|
|
ci->stages[i].mode = 0;
|
2021-04-26 01:02:56 +00:00
|
|
|
oidcpy(&ci->stages[i].oid, null_oid());
|
2021-01-01 02:34:39 +00:00
|
|
|
}
|
|
|
|
} else if (ci->df_conflict && ci->merged.result.mode != 0) {
|
2021-01-01 02:34:40 +00:00
|
|
|
/*
|
|
|
|
* This started out as a D/F conflict, and the entries in
|
|
|
|
* the competing directory were not removed by the merge as
|
|
|
|
* evidenced by write_completed_directory() writing a value
|
|
|
|
* to ci->merged.result.mode.
|
|
|
|
*/
|
|
|
|
struct conflict_info *new_ci;
|
|
|
|
const char *branch;
|
|
|
|
const char *old_path = path;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
assert(ci->merged.result.mode == S_IFDIR);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If filemask is 1, we can just ignore the file as having
|
|
|
|
* been deleted on both sides. We do not want to overwrite
|
|
|
|
* ci->merged.result, since it stores the tree for all the
|
|
|
|
* files under it.
|
|
|
|
*/
|
|
|
|
if (ci->filemask == 1) {
|
|
|
|
ci->filemask = 0;
|
2022-09-28 07:29:22 +00:00
|
|
|
return 0;
|
2021-01-01 02:34:40 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* This file still exists on at least one side, and we want
|
|
|
|
* the directory to remain here, so we need to move this
|
|
|
|
* path to some new location.
|
|
|
|
*/
|
2021-07-31 17:27:38 +00:00
|
|
|
new_ci = mem_pool_calloc(&opt->priv->pool, 1, sizeof(*new_ci));
|
2021-07-30 11:47:40 +00:00
|
|
|
|
2021-01-01 02:34:40 +00:00
|
|
|
/* We don't really want new_ci->merged.result copied, but it'll
|
|
|
|
* be overwritten below so it doesn't matter. We also don't
|
|
|
|
* want any directory mode/oid values copied, but we'll zero
|
|
|
|
* those out immediately. We do want the rest of ci copied.
|
|
|
|
*/
|
|
|
|
memcpy(new_ci, ci, sizeof(*ci));
|
|
|
|
new_ci->match_mask = (new_ci->match_mask & ~new_ci->dirmask);
|
|
|
|
new_ci->dirmask = 0;
|
|
|
|
for (i = MERGE_BASE; i <= MERGE_SIDE2; i++) {
|
|
|
|
if (new_ci->filemask & (1 << i))
|
|
|
|
continue;
|
|
|
|
/* zero out any entries related to directories */
|
|
|
|
new_ci->stages[i].mode = 0;
|
2021-04-26 01:02:56 +00:00
|
|
|
oidcpy(&new_ci->stages[i].oid, null_oid());
|
2021-01-01 02:34:40 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Find out which side this file came from; note that we
|
|
|
|
* cannot just use ci->filemask, because renames could cause
|
|
|
|
* the filemask to go back to 7. So we use dirmask, then
|
|
|
|
* pick the opposite side's index.
|
|
|
|
*/
|
|
|
|
df_file_index = (ci->dirmask & (1 << 1)) ? 2 : 1;
|
|
|
|
branch = (df_file_index == 1) ? opt->branch1 : opt->branch2;
|
2022-02-20 01:29:51 +00:00
|
|
|
path = unique_path(opt, path, branch);
|
2021-01-01 02:34:40 +00:00
|
|
|
strmap_put(&opt->priv->paths, path, new_ci);
|
|
|
|
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_FILE_DIRECTORY, 0,
|
|
|
|
path, old_path, NULL, NULL,
|
2021-01-01 02:34:40 +00:00
|
|
|
_("CONFLICT (file/directory): directory in the way "
|
|
|
|
"of %s from %s; moving it to %s instead."),
|
|
|
|
old_path, branch, path);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Zero out the filemask for the old ci. At this point, ci
|
|
|
|
* was just an entry for a directory, so we don't need to
|
|
|
|
* do anything more with it.
|
|
|
|
*/
|
|
|
|
ci->filemask = 0;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Now note that we're working on the new entry (path was
|
|
|
|
* updated above).
|
|
|
|
*/
|
|
|
|
ci = new_ci;
|
2020-12-13 08:04:18 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* NOTE: Below there is a long switch-like if-elseif-elseif... block
|
|
|
|
* which the code goes through even for the df_conflict cases
|
2021-01-01 02:34:40 +00:00
|
|
|
* above.
|
2020-12-13 08:04:18 +00:00
|
|
|
*/
|
|
|
|
if (ci->match_mask) {
|
2021-06-30 17:29:59 +00:00
|
|
|
ci->merged.clean = !ci->df_conflict && !ci->path_conflict;
|
2020-12-13 08:04:18 +00:00
|
|
|
if (ci->match_mask == 6) {
|
|
|
|
/* stages[1] == stages[2] */
|
|
|
|
ci->merged.result.mode = ci->stages[1].mode;
|
|
|
|
oidcpy(&ci->merged.result.oid, &ci->stages[1].oid);
|
|
|
|
} else {
|
|
|
|
/* determine the mask of the side that didn't match */
|
|
|
|
unsigned int othermask = 7 & ~ci->match_mask;
|
|
|
|
int side = (othermask == 4) ? 2 : 1;
|
|
|
|
|
|
|
|
ci->merged.result.mode = ci->stages[side].mode;
|
|
|
|
ci->merged.is_null = !ci->merged.result.mode;
|
2021-06-30 17:29:59 +00:00
|
|
|
if (ci->merged.is_null)
|
|
|
|
ci->merged.clean = 1;
|
2020-12-13 08:04:18 +00:00
|
|
|
oidcpy(&ci->merged.result.oid, &ci->stages[side].oid);
|
|
|
|
|
|
|
|
assert(othermask == 2 || othermask == 4);
|
|
|
|
assert(ci->merged.is_null ==
|
|
|
|
(ci->filemask == ci->match_mask));
|
|
|
|
}
|
|
|
|
} else if (ci->filemask >= 6 &&
|
|
|
|
(S_IFMT & ci->stages[1].mode) !=
|
|
|
|
(S_IFMT & ci->stages[2].mode)) {
|
2021-01-01 02:34:48 +00:00
|
|
|
/* Two different items from (file/submodule/symlink) */
|
|
|
|
if (opt->priv->call_depth) {
|
|
|
|
/* Just use the version from the merge base */
|
|
|
|
ci->merged.clean = 0;
|
|
|
|
oidcpy(&ci->merged.result.oid, &ci->stages[0].oid);
|
|
|
|
ci->merged.result.mode = ci->stages[0].mode;
|
|
|
|
ci->merged.is_null = (ci->merged.result.mode == 0);
|
|
|
|
} else {
|
|
|
|
/* Handle by renaming one or both to separate paths. */
|
|
|
|
unsigned o_mode = ci->stages[0].mode;
|
|
|
|
unsigned a_mode = ci->stages[1].mode;
|
|
|
|
unsigned b_mode = ci->stages[2].mode;
|
|
|
|
struct conflict_info *new_ci;
|
|
|
|
const char *a_path = NULL, *b_path = NULL;
|
|
|
|
int rename_a = 0, rename_b = 0;
|
|
|
|
|
2021-07-31 17:27:38 +00:00
|
|
|
new_ci = mem_pool_alloc(&opt->priv->pool,
|
|
|
|
sizeof(*new_ci));
|
2021-01-01 02:34:48 +00:00
|
|
|
|
|
|
|
if (S_ISREG(a_mode))
|
|
|
|
rename_a = 1;
|
|
|
|
else if (S_ISREG(b_mode))
|
|
|
|
rename_b = 1;
|
|
|
|
else {
|
|
|
|
rename_a = 1;
|
|
|
|
rename_b = 1;
|
|
|
|
}
|
|
|
|
|
2022-06-18 00:20:56 +00:00
|
|
|
if (rename_a)
|
|
|
|
a_path = unique_path(opt, path, opt->branch1);
|
|
|
|
if (rename_b)
|
|
|
|
b_path = unique_path(opt, path, opt->branch2);
|
|
|
|
|
2021-05-09 21:52:50 +00:00
|
|
|
if (rename_a && rename_b) {
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_DISTINCT_MODES, 0,
|
|
|
|
path, a_path, b_path, NULL,
|
2021-05-09 21:52:50 +00:00
|
|
|
_("CONFLICT (distinct types): %s had "
|
|
|
|
"different types on each side; "
|
|
|
|
"renamed both of them so each can "
|
|
|
|
"be recorded somewhere."),
|
|
|
|
path);
|
|
|
|
} else {
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_DISTINCT_MODES, 0,
|
|
|
|
path, rename_a ? a_path : b_path,
|
|
|
|
NULL, NULL,
|
2021-05-09 21:52:50 +00:00
|
|
|
_("CONFLICT (distinct types): %s had "
|
|
|
|
"different types on each side; "
|
|
|
|
"renamed one of them so each can be "
|
|
|
|
"recorded somewhere."),
|
|
|
|
path);
|
|
|
|
}
|
2021-01-01 02:34:48 +00:00
|
|
|
|
|
|
|
ci->merged.clean = 0;
|
|
|
|
memcpy(new_ci, ci, sizeof(*new_ci));
|
|
|
|
|
|
|
|
/* Put b into new_ci, removing a from stages */
|
|
|
|
new_ci->merged.result.mode = ci->stages[2].mode;
|
|
|
|
oidcpy(&new_ci->merged.result.oid, &ci->stages[2].oid);
|
|
|
|
new_ci->stages[1].mode = 0;
|
2021-04-26 01:02:56 +00:00
|
|
|
oidcpy(&new_ci->stages[1].oid, null_oid());
|
2021-01-01 02:34:48 +00:00
|
|
|
new_ci->filemask = 5;
|
|
|
|
if ((S_IFMT & b_mode) != (S_IFMT & o_mode)) {
|
|
|
|
new_ci->stages[0].mode = 0;
|
2021-04-26 01:02:56 +00:00
|
|
|
oidcpy(&new_ci->stages[0].oid, null_oid());
|
2021-01-01 02:34:48 +00:00
|
|
|
new_ci->filemask = 4;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Leave only a in ci, fixing stages. */
|
|
|
|
ci->merged.result.mode = ci->stages[1].mode;
|
|
|
|
oidcpy(&ci->merged.result.oid, &ci->stages[1].oid);
|
|
|
|
ci->stages[2].mode = 0;
|
2021-04-26 01:02:56 +00:00
|
|
|
oidcpy(&ci->stages[2].oid, null_oid());
|
2021-01-01 02:34:48 +00:00
|
|
|
ci->filemask = 3;
|
|
|
|
if ((S_IFMT & a_mode) != (S_IFMT & o_mode)) {
|
|
|
|
ci->stages[0].mode = 0;
|
2021-04-26 01:02:56 +00:00
|
|
|
oidcpy(&ci->stages[0].oid, null_oid());
|
2021-01-01 02:34:48 +00:00
|
|
|
ci->filemask = 2;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Insert entries into opt->priv->paths */
|
|
|
|
assert(rename_a || rename_b);
|
2022-06-18 00:20:56 +00:00
|
|
|
if (rename_a)
|
2021-01-01 02:34:48 +00:00
|
|
|
strmap_put(&opt->priv->paths, a_path, ci);
|
|
|
|
|
2022-06-18 00:20:56 +00:00
|
|
|
if (!rename_b)
|
2021-01-01 02:34:48 +00:00
|
|
|
b_path = path;
|
|
|
|
strmap_put(&opt->priv->paths, b_path, new_ci);
|
|
|
|
|
2021-07-31 17:27:38 +00:00
|
|
|
if (rename_a && rename_b)
|
2021-01-01 02:34:48 +00:00
|
|
|
strmap_remove(&opt->priv->paths, path, 0);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Do special handling for b_path since process_entry()
|
|
|
|
* won't be called on it specially.
|
|
|
|
*/
|
|
|
|
strmap_put(&opt->priv->conflicted, b_path, new_ci);
|
|
|
|
record_entry_for_tree(dir_metadata, b_path,
|
|
|
|
&new_ci->merged);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Remaining code for processing this entry should
|
|
|
|
* think in terms of processing a_path.
|
|
|
|
*/
|
|
|
|
if (a_path)
|
|
|
|
path = a_path;
|
|
|
|
}
|
2020-12-13 08:04:18 +00:00
|
|
|
} else if (ci->filemask >= 6) {
|
2021-01-01 02:34:42 +00:00
|
|
|
/* Need a two-way or three-way content merge */
|
|
|
|
struct version_info merged_file;
|
2022-09-28 07:29:22 +00:00
|
|
|
int clean_merge;
|
2021-01-01 02:34:42 +00:00
|
|
|
struct version_info *o = &ci->stages[0];
|
|
|
|
struct version_info *a = &ci->stages[1];
|
|
|
|
struct version_info *b = &ci->stages[2];
|
|
|
|
|
|
|
|
clean_merge = handle_content_merge(opt, path, o, a, b,
|
|
|
|
ci->pathnames,
|
|
|
|
opt->priv->call_depth * 2,
|
|
|
|
&merged_file);
|
2022-09-28 07:29:22 +00:00
|
|
|
if (clean_merge < 0)
|
|
|
|
return -1;
|
2021-01-01 02:34:42 +00:00
|
|
|
ci->merged.clean = clean_merge &&
|
|
|
|
!ci->df_conflict && !ci->path_conflict;
|
|
|
|
ci->merged.result.mode = merged_file.mode;
|
|
|
|
ci->merged.is_null = (merged_file.mode == 0);
|
|
|
|
oidcpy(&ci->merged.result.oid, &merged_file.oid);
|
|
|
|
if (clean_merge && ci->df_conflict) {
|
|
|
|
assert(df_file_index == 1 || df_file_index == 2);
|
|
|
|
ci->filemask = 1 << df_file_index;
|
|
|
|
ci->stages[df_file_index].mode = merged_file.mode;
|
|
|
|
oidcpy(&ci->stages[df_file_index].oid, &merged_file.oid);
|
|
|
|
}
|
|
|
|
if (!clean_merge) {
|
|
|
|
const char *reason = _("content");
|
|
|
|
if (ci->filemask == 6)
|
|
|
|
reason = _("add/add");
|
|
|
|
if (S_ISGITLINK(merged_file.mode))
|
|
|
|
reason = _("submodule");
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_CONTENTS, 0,
|
|
|
|
path, NULL, NULL, NULL,
|
2021-01-01 02:34:42 +00:00
|
|
|
_("CONFLICT (%s): Merge conflict in %s"),
|
|
|
|
reason, path);
|
|
|
|
}
|
2020-12-13 08:04:18 +00:00
|
|
|
} else if (ci->filemask == 3 || ci->filemask == 5) {
|
|
|
|
/* Modify/delete */
|
merge-ort: add modify/delete handling and delayed output processing
The focus here is on adding a path_msg() which will queue up
warning/conflict/notice messages about the merge for later processing,
storing these in a pathname -> strbuf map. It might seem like a big
change, but it really just is:
* declaration of necessary map with some comments
* initialization and recording of data
* a bunch of code to iterate over the map at print/free time
* at least one caller in order to avoid an error about having an
unused function (which we provide in the form of implementing
modify/delete conflict handling).
At this stage, it is probably not clear why I am opting for delayed
output processing. There are multiple reasons:
1. Merges are supposed to abort if they would overwrite dirty changes
in the working tree. We cannot correctly determine whether changes
would be overwritten until both rename detection has occurred and
full processing of entries with the renames has finalized.
Warning/conflict/notice messages come up at intermediate codepaths
along the way, so unless we want spurious conflict/warning messages
being printed when the merge will be aborted anyway, we need to
save these messages and only print them when relevant.
2. There can be multiple messages for a single path, and we want all
messages for a given path to appear together instead of having them
grouped by conflict/warning type. This was a problem already with
merge-recursive.c but became even more important due to the
splitting apart of conflict types as discussed in the commit
message for 1f3c9ba707 ("t6425: be more flexible with rename/delete
conflict messages", 2020-08-10)
3. Some callers might want to avoid showing the output in certain
cases, such as if the end result is a clean merge. Rebases have
typically done this.
4. Some callers might not want the output to go to stdout or even
stderr, but might want to do something else with it entirely.
For example, a --remerge-diff option to `git show` or `git log
-p` that remerges on the fly and diffs merge commits against the
remerged version would benefit from stdout/stderr not being
written to in the standard form.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-03 15:59:46 +00:00
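A compact sketch of the delayed, per-path message queue idea described above (self-contained and simplified; merge-ort's real path_msg() stores strbufs in a strmap inside opt->priv and takes the merge_options plus a conflict type, as seen in the code below):

#include <stdio.h>
#include <stdarg.h>
#include <string.h>

/* Toy path -> accumulated-messages map; a stand-in for the real strmap. */
struct path_msgs { const char *path; char buf[256]; };
static struct path_msgs queued[32];
static int queued_nr;

/* Queue a message for a path instead of printing it immediately. */
static void queue_path_msg(const char *path, const char *fmt, ...)
{
	struct path_msgs *m = NULL;
	va_list ap;
	int i;

	for (i = 0; i < queued_nr; i++)
		if (!strcmp(queued[i].path, path))
			m = &queued[i];
	if (!m) {
		m = &queued[queued_nr++];
		m->path = path;
		m->buf[0] = '\0';
	}
	va_start(ap, fmt);
	vsnprintf(m->buf + strlen(m->buf), sizeof(m->buf) - strlen(m->buf), fmt, ap);
	va_end(ap);
	strncat(m->buf, "\n", sizeof(m->buf) - strlen(m->buf) - 1);
}

/* Print everything at the end, grouped by path, and only if desired. */
static void flush_path_msgs(int show_output)
{
	int i;

	if (!show_output)
		return;
	for (i = 0; i < queued_nr; i++)
		fputs(queued[i].buf, stdout);
}

int main(void)
{
	queue_path_msg("foo.c", "CONFLICT (modify/delete): %s", "foo.c");
	queue_path_msg("foo.c", "CONFLICT (rename/delete): %s", "foo.c");
	flush_path_msgs(1);
	return 0;
}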
|
|
|
const char *modify_branch, *delete_branch;
|
|
|
|
int side = (ci->filemask == 5) ? 2 : 1;
|
|
|
|
int index = opt->priv->call_depth ? 0 : side;
|
|
|
|
|
|
|
|
ci->merged.result.mode = ci->stages[index].mode;
|
|
|
|
oidcpy(&ci->merged.result.oid, &ci->stages[index].oid);
|
|
|
|
ci->merged.clean = 0;
|
|
|
|
|
|
|
|
modify_branch = (side == 1) ? opt->branch1 : opt->branch2;
|
|
|
|
delete_branch = (side == 1) ? opt->branch2 : opt->branch1;
|
|
|
|
|
2021-03-20 00:03:47 +00:00
|
|
|
if (opt->renormalize &&
|
|
|
|
blob_unchanged(opt, &ci->stages[0], &ci->stages[side],
|
|
|
|
path)) {
|
2021-12-28 00:20:46 +00:00
|
|
|
if (!ci->path_conflict) {
|
|
|
|
/*
|
|
|
|
* Blob unchanged after renormalization, so
|
|
|
|
* there's no modify/delete conflict after all;
|
|
|
|
* we can just remove the file.
|
|
|
|
*/
|
|
|
|
ci->merged.is_null = 1;
|
|
|
|
ci->merged.clean = 1;
|
|
|
|
/*
|
|
|
|
* file goes away => even if there was a
|
|
|
|
* directory/file conflict there isn't one now.
|
|
|
|
*/
|
|
|
|
ci->df_conflict = 0;
|
|
|
|
} else {
|
|
|
|
/* rename/delete, so conflict remains */
|
|
|
|
}
|
2021-03-20 00:03:47 +00:00
|
|
|
} else if (ci->path_conflict &&
|
|
|
|
oideq(&ci->stages[0].oid, &ci->stages[side].oid)) {
|
merge-ort: add implementation of rename/delete conflicts
Implement rename/delete conflicts, i.e. one side renames a file and the
other deletes the file. This code replaces the following from
merge-recursive.c:
* the code relevant to RENAME_DELETE in process_renames()
* the RENAME_DELETE case of process_entry()
* handle_rename_delete()
Also, there is some shared code from merge-recursive.c for multiple
different rename cases which we will no longer need for this case (or
other rename cases):
* handle_change_delete()
* setup_rename_conflict_info()
The consolidation of five separate codepaths into one is made possible
by a change in design: process_renames() tweaks the conflict_info
entries within opt->priv->paths such that process_entry() can then
handle all the non-rename conflict types (directory/file, modify/delete,
etc.) orthogonally. This means we're much less likely to miss special
implementation of some kind of combination of conflict types (see
commits brought in by 66c62eaec6 ("Merge branch 'en/merge-tests'",
2020-11-18), especially commit ef52778708 ("merge tests: expect improved
directory/file conflict handling in ort", 2020-10-26) for more details).
That, together with letting worktree/index updating be handled
orthogonally in the merge_switch_to_result() function, dramatically
simplifies the code for various special rename cases.
To be fair, there is a _slight_ tweak to process_entry() here, because
rename/delete cases will also trigger the modify/delete codepath.
However, we only want a modify/delete message to be printed for a
rename/delete conflict if there is a content change in the renamed file
in addition to the rename. So process_renames() and process_entry()
aren't quite fully orthogonal, but they are pretty close.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-15 18:28:03 +00:00
|
|
|
/*
|
|
|
|
* This came from a rename/delete; no action to take,
|
|
|
|
* but avoid printing "modify/delete" conflict notice
|
|
|
|
* since the contents were not modified.
|
|
|
|
*/
|
|
|
|
} else {
|
2022-06-18 00:20:56 +00:00
|
|
|
path_msg(opt, CONFLICT_MODIFY_DELETE, 0,
|
|
|
|
path, NULL, NULL, NULL,
|
2020-12-15 18:28:03 +00:00
|
|
|
_("CONFLICT (modify/delete): %s deleted in %s "
|
|
|
|
"and modified in %s. Version %s of %s left "
|
|
|
|
"in tree."),
|
|
|
|
path, delete_branch, modify_branch,
|
|
|
|
modify_branch, path);
|
|
|
|
}
|
2020-12-13 08:04:18 +00:00
|
|
|
} else if (ci->filemask == 2 || ci->filemask == 4) {
|
|
|
|
/* Added on one side */
|
|
|
|
int side = (ci->filemask == 4) ? 2 : 1;
|
|
|
|
ci->merged.result.mode = ci->stages[side].mode;
|
|
|
|
oidcpy(&ci->merged.result.oid, &ci->stages[side].oid);
|
merge-ort: add implementation of both sides renaming differently
Implement rename/rename(1to2) handling, i.e. both sides of history
renaming a file and renaming it differently. This code replaces the
following from merge-recursive.c:
* all the 1to2 code in process_renames()
* the RENAME_ONE_FILE_TO_TWO case of process_entry()
* handle_rename_rename_1to2()
Also, there is some shared code from merge-recursive.c for multiple
different rename cases which we will no longer need for this case (or
other rename cases):
* handle_file_collision()
* setup_rename_conflict_info()
The consolidation of five separate codepaths into one is made possible
by a change in design: process_renames() tweaks the conflict_info
entries within opt->priv->paths such that process_entry() can then
handle all the non-rename conflict types (directory/file, modify/delete,
etc.) orthogonally. This means we're much less likely to miss special
implementation of some kind of combination of conflict types (see
commits brought in by 66c62eaec6 ("Merge branch 'en/merge-tests'",
2020-11-18), especially commit ef52778708 ("merge tests: expect improved
directory/file conflict handling in ort", 2020-10-26) for more details).
That, together with letting worktree/index updating be handled
orthogonally in the merge_switch_to_result() function, dramatically
simplifies the code for various special rename cases.
To be fair, there is a _slight_ tweak to process_entry() here to make
sure that the two different paths aren't marked as clean but are left in
a conflicted state. So process_renames() and process_entry() aren't
quite entirely orthogonal, but they are pretty close.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-15 18:28:02 +00:00
|
|
|
ci->merged.clean = !ci->df_conflict && !ci->path_conflict;
|
2020-12-13 08:04:18 +00:00
|
|
|
} else if (ci->filemask == 1) {
|
|
|
|
/* Deleted on both sides */
|
|
|
|
ci->merged.is_null = 1;
|
|
|
|
ci->merged.result.mode = 0;
|
2021-04-26 01:02:56 +00:00
|
|
|
oidcpy(&ci->merged.result.oid, null_oid());
|
2021-06-30 17:29:59 +00:00
|
|
|
assert(!ci->df_conflict);
|
2020-12-15 18:28:02 +00:00
|
|
|
ci->merged.clean = !ci->path_conflict;
|
2020-12-13 08:04:18 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If still conflicted, record it separately. This allows us to later
|
|
|
|
* iterate over just conflicted entries when updating the index instead
|
|
|
|
* of iterating over all entries.
|
|
|
|
*/
|
|
|
|
if (!ci->merged.clean)
|
|
|
|
strmap_put(&opt->priv->conflicted, path, ci);
|
2021-06-08 16:11:42 +00:00
|
|
|
|
|
|
|
/* Record metadata for ci->merged in dir_metadata */
|
2020-12-13 08:04:20 +00:00
|
|
|
record_entry_for_tree(dir_metadata, path, &ci->merged);
|
2022-09-28 07:29:22 +00:00
|
|
|
return 0;
|
2020-12-13 08:04:18 +00:00
|
|
|
}
|
|
|
|
|
merge-ort: add prefetching for content merges
Commit 7fbbcb21b1 ("diff: batch fetching of missing blobs", 2019-04-05)
introduced batching of fetching missing blobs, so that the diff
machinery would have one fetch subprocess grab N blobs instead of N
processes each grabbing 1.
However, the diff machinery is not the only thing in a merge that needs
to work on blobs. The 3-way content merges need them as well. Rather
than download all the blobs 1 at a time, prefetch all the blobs needed
for regular content merges.
This does not cover all possible paths in merge-ort that might need to
download blobs. Others include:
- The blob_unchanged() calls to avoid modify/delete conflicts (when
blob renormalization results in an "unchanged" file)
- Preliminary content merges needed for rename/add and
rename/rename(2to1) style conflicts. (Both of these types of
conflicts can result in nested conflict markers from the need to do
two levels of content merging; the first happens before our new
prefetch_for_content_merges() function.)
The first of these wouldn't be an extreme amount of work to support, and
even the second could be theoretically supported in batching, but all of
these cases seem unusual to me, and this is a minor performance
optimization anyway; in the worst case we only get some of the fetches
batched and have a few additional one-off fetches. So for now, just
handle the regular 3-way content merges in our prefetching.
For the testcase from the previous commit, the number of downloaded
objects remains at 63, but this drops the number of fetches needed from
32 down to 20, a sizeable reduction.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-06-22 08:04:41 +00:00
|
|
|
static void prefetch_for_content_merges(struct merge_options *opt,
|
|
|
|
struct string_list *plist)
|
|
|
|
{
|
|
|
|
struct string_list_item *e;
|
|
|
|
struct oid_array to_fetch = OID_ARRAY_INIT;
|
|
|
|
|
2023-03-28 13:58:53 +00:00
|
|
|
if (opt->repo != the_repository || !repo_has_promisor_remote(the_repository))
|
2021-06-22 08:04:41 +00:00
|
|
|
return;
|
|
|
|
|
|
|
|
for (e = &plist->items[plist->nr-1]; e >= plist->items; --e) {
|
|
|
|
/* char *path = e->string; */
|
|
|
|
struct conflict_info *ci = e->util;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
/* Ignore clean entries */
|
|
|
|
if (ci->merged.clean)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* Ignore entries that don't need a content merge */
|
|
|
|
if (ci->match_mask || ci->filemask < 6 ||
|
|
|
|
!S_ISREG(ci->stages[1].mode) ||
|
|
|
|
!S_ISREG(ci->stages[2].mode) ||
|
|
|
|
oideq(&ci->stages[1].oid, &ci->stages[2].oid))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
/* Also don't need content merge if base matches either side */
|
|
|
|
if (ci->filemask == 7 &&
|
|
|
|
S_ISREG(ci->stages[0].mode) &&
|
|
|
|
(oideq(&ci->stages[0].oid, &ci->stages[1].oid) ||
|
|
|
|
oideq(&ci->stages[0].oid, &ci->stages[2].oid)))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
for (i = 0; i < 3; i++) {
|
|
|
|
unsigned side_mask = (1 << i);
|
|
|
|
struct version_info *vi = &ci->stages[i];
|
|
|
|
|
|
|
|
if ((ci->filemask & side_mask) &&
|
|
|
|
S_ISREG(vi->mode) &&
|
|
|
|
oid_object_info_extended(opt->repo, &vi->oid, NULL,
|
|
|
|
OBJECT_INFO_FOR_PREFETCH))
|
|
|
|
oid_array_append(&to_fetch, &vi->oid);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
promisor_remote_get_direct(opt->repo, to_fetch.oid, to_fetch.nr);
|
|
|
|
oid_array_clear(&to_fetch);
|
|
|
|
}
|
|
|
|
|
2022-09-28 07:29:21 +00:00
|
|
|
static int process_entries(struct merge_options *opt,
|
|
|
|
struct object_id *result_oid)
|
2020-12-13 08:04:09 +00:00
|
|
|
{
|
2020-12-13 08:04:18 +00:00
|
|
|
struct hashmap_iter iter;
|
|
|
|
struct strmap_entry *e;
|
2020-12-13 08:04:19 +00:00
|
|
|
struct string_list plist = STRING_LIST_INIT_NODUP;
|
|
|
|
struct string_list_item *entry;
|
2020-12-13 08:04:22 +00:00
|
|
|
struct directory_versions dir_metadata = { STRING_LIST_INIT_NODUP,
|
|
|
|
STRING_LIST_INIT_NODUP,
|
|
|
|
NULL, 0 };
|
2022-09-28 07:29:21 +00:00
|
|
|
int ret = 0;
|
2020-12-13 08:04:18 +00:00
|
|
|
|
merge-ort: begin performance work; instrument with trace2_region_* calls
Add some timing instrumentation for both merge-ort and diffcore-rename;
I used these to measure and optimize performance in both, and several
future patch series will build on these to reduce the timings of some
select testcases.
=== Setup ===
The primary testcase I used involved rebasing a random topic in the
linux kernel (consisting of 35 patches) against an older version. I
added two variants, one where I rename a toplevel directory, and another
where I only rebase one patch instead of the whole topic. The setup is
as follows:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ git branch hwmon-updates fd8bdb23b91876ac1e624337bb88dc1dcc21d67e
$ git branch hwmon-just-one fd8bdb23b91876ac1e624337bb88dc1dcc21d67e~34
$ git branch base 4703d9119972bf586d2cca76ec6438f819ffa30e
$ git switch -c 5.4-renames v5.4
$ git mv drivers pilots # Introduce over 26,000 renames
$ git commit -m "Rename drivers/ to pilots/"
$ git config merge.renameLimit 30000
$ git config merge.directoryRenames true
=== Testcases ===
Now with REBASE standing for either "git rebase [--merge]" (using
merge-recursive) or "test-tool fast-rebase" (using merge-ort), the
testcases are:
Testcase #1: no-renames
$ git checkout v5.4^0
$ REBASE --onto HEAD base hwmon-updates
Note: technically the name is misleading; there are some renames, but
very few. Rename detection only takes about half the overall time.
Testcase #2: mega-renames
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-updates
Testcase #3: just-one-mega
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-just-one
=== Timing results ===
Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
10 runs for the other two cases):
merge-recursive merge-ort
no-renames: 18.912 s ± 0.174 s 14.263 s ± 0.053 s
mega-renames: 5964.031 s ± 10.459 s 5504.231 s ± 5.150 s
just-one-mega: 149.583 s ± 0.751 s 158.534 s ± 0.498 s
A single re-run of each with some breakdowns:
--- no-renames ---
merge-recursive merge-ort
overall runtime: 19.302 s 14.257 s
inexact rename detection: 7.603 s 7.906 s
everything else: 11.699 s 6.351 s
--- mega-renames ---
merge-recursive merge-ort
overall runtime: 5950.195 s 5499.672 s
inexact rename detection: 5746.309 s 5487.120 s
everything else: 203.886 s 17.552 s
--- just-one-mega ---
merge-recursive merge-ort
overall runtime: 151.001 s 158.582 s
inexact rename detection: 143.448 s 157.835 s
everything else: 7.553 s 0.747 s
=== Timing observations ===
0) Maximum speedup
The "everything else" row represents the maximum speedup we could
achieve if we were to somehow infinitely parallelize inexact rename
detection, but leave everything else alone. The fact that this is so
much smaller than the real runtime (even in the case with virtually no
renames) makes it clear just how overwhelmingly large the time spent on
rename detection can be.
1) no-renames
1a) merge-ort is faster than merge-recursive, which is nice. However,
this still should not be considered good enough. Although the "merge"
backend to rebase (merge-recursive) is sometimes faster than the "apply"
backend, this is one of those cases where it is not. In fact, even
merge-ort is slower. The "apply" backend can complete this testcase in
6.940 s ± 0.485 s
which is about 2x faster than merge-ort and 3x faster than
merge-recursive. One goal of the merge-ort performance work will be to
make it faster than git-am on this (and similar) testcases.
2) mega-renames
2a) Obviously rename detection is a huge cost; it's where most of the time
is spent. We need to cut that down. If we could somehow infinitely
parallelize it and drive its time to 0, the merge-recursive time would
drop to about 204s, and the merge-ort time would drop to about 17s. I
think this particular stat shows I've subtly baked a couple performance
improvements into merge-ort and into fast-rebase already.
3) just-one-mega
3a) not much to say here, it just gives some flavor for how rebasing
only one patch compares to rebasing 35.
=== Goals ===
This patch is obviously just the beginning. Here are some of my goals
that this measurement will help us achieve:
* Drive the cost of rename detection down considerably for merges
* After the above has been achieved, see if there are other slowness
factors (which would have previously been overshadowed by rename
detection costs) which we can then focus on and also optimize.
* Ensure our rebase testcase that requires little rename detection
is noticeably faster with merge-ort than with apply-based rebase.
Signed-off-by: Elijah Newren <newren@gmail.com>
Acked-by: Taylor Blau <ttaylorr@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-24 06:01:12 +00:00
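For orientation, each timed phase brackets its work with a matching enter/leave pair; a minimal sketch of the pattern as it appears inside git's own tree (the category and label strings below are just examples, and the region timings become visible when running with GIT_TRACE2_PERF enabled):

#include "git-compat-util.h"
#include "trace2.h"

static void timed_phase(struct repository *repo)
{
	trace2_region_enter("merge", "process_entries setup", repo);
	/* ... the work being measured goes here ... */
	trace2_region_leave("merge", "process_entries setup", repo);
}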
|
|
|
trace2_region_enter("merge", "process_entries setup", opt->repo);
|
2020-12-13 08:04:18 +00:00
|
|
|
if (strmap_empty(&opt->priv->paths)) {
|
|
|
|
oidcpy(result_oid, opt->repo->hash_algo->empty_tree);
|
2022-09-28 07:29:21 +00:00
|
|
|
return 0;
|
2020-12-13 08:04:18 +00:00
|
|
|
}
|
|
|
|
|
2020-12-13 08:04:19 +00:00
|
|
|
/* Hack to pre-allocate plist to the desired size */
|
merge-ort: begin performance work; instrument with trace2_region_* calls
Add some timing instrumentation for both merge-ort and diffcore-rename;
I used these to measure and optimize performance in both, and several
future patch series will build on these to reduce the timings of some
select testcases.
=== Setup ===
The primary testcase I used involved rebasing a random topic in the
linux kernel (consisting of 35 patches) against an older version. I
added two variants, one where I rename a toplevel directory, and another
where I only rebase one patch instead of the whole topic. The setup is
as follows:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ git branch hwmon-updates fd8bdb23b91876ac1e624337bb88dc1dcc21d67e
$ git branch hwmon-just-one fd8bdb23b91876ac1e624337bb88dc1dcc21d67e~34
$ git branch base 4703d9119972bf586d2cca76ec6438f819ffa30e
$ git switch -c 5.4-renames v5.4
$ git mv drivers pilots # Introduce over 26,000 renames
$ git commit -m "Rename drivers/ to pilots/"
$ git config merge.renameLimit 30000
$ git config merge.directoryRenames true
=== Testcases ===
Now with REBASE standing for either "git rebase [--merge]" (using
merge-recursive) or "test-tool fast-rebase" (using merge-ort), the
testcases are:
Testcase #1: no-renames
$ git checkout v5.4^0
$ REBASE --onto HEAD base hwmon-updates
Note: technically the name is misleading; there are some renames, but
very few. Rename detection only takes about half the overall time.
Testcase #2: mega-renames
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-updates
Testcase #3: just-one-mega
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-just-one
=== Timing results ===
Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
10 runs for the other two cases):
merge-recursive merge-ort
no-renames: 18.912 s ± 0.174 s 14.263 s ± 0.053 s
mega-renames: 5964.031 s ± 10.459 s 5504.231 s ± 5.150 s
just-one-mega: 149.583 s ± 0.751 s 158.534 s ± 0.498 s
A single re-run of each with some breakdowns:
--- no-renames ---
merge-recursive merge-ort
overall runtime: 19.302 s 14.257 s
inexact rename detection: 7.603 s 7.906 s
everything else: 11.699 s 6.351 s
--- mega-renames ---
merge-recursive merge-ort
overall runtime: 5950.195 s 5499.672 s
inexact rename detection: 5746.309 s 5487.120 s
everything else: 203.886 s 17.552 s
--- just-one-mega ---
merge-recursive merge-ort
overall runtime: 151.001 s 158.582 s
inexact rename detection: 143.448 s 157.835 s
everything else: 7.553 s 0.747 s
=== Timing observations ===
0) Maximum speedup
The "everything else" row represents the maximum speedup we could
achieve if we were to somehow infinitely parallelize inexact rename
detection, but leave everything else alone. The fact that this is so
much smaller than the real runtime (even in the case with virtually no
renames) makes it clear just how overwhelmingly large the time spent on
rename detection can be.
1) no-renames
1a) merge-ort is faster than merge-recursive, which is nice. However,
this still should not be considered good enough. Although the "merge"
backend to rebase (merge-recursive) is sometimes faster than the "apply"
backend, this is one of those cases where it is not. In fact, even
merge-ort is slower. The "apply" backend can complete this testcase in
6.940 s ± 0.485 s
which is about 2x faster than merge-ort and 3x faster than
merge-recursive. One goal of the merge-ort performance work will be to
make it faster than git-am on this (and similar) testcases.
2) mega-renames
2a) Obviously rename detection is a huge cost; it's where most the time
is spent. We need to cut that down. If we could somehow infinitely
parallelize it and drive its time to 0, the merge-recursive time would
drop to about 204s, and the merge-ort time would drop to about 17s. I
think this particular stat shows I've subtly baked a couple performance
improvements into merge-ort and into fast-rebase already.
3) just-one-mega
3a) not much to say here, it just gives some flavor for how rebasing
only one patch compares to rebasing 35.
=== Goals ===
This patch is obviously just the beginning. Here are some of my goals
that this measurement will help us achieve:
* Drive the cost of rename detection down considerably for merges
* After the above has been achieved, see if there are other slowness
factors (which would have previously been overshadowed by rename
detection costs) which we can then focus on and also optimize.
* Ensure our rebase testcase that requires little rename detection
is noticeably faster with merge-ort than with apply-based rebase.
Signed-off-by: Elijah Newren <newren@gmail.com>
Acked-by: Taylor Blau <ttaylorr@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
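For readers not familiar with the trace2 API used for this instrumentation:
each timed span is just a matched enter/leave pair, and the elapsed time per
region shows up in git's perf-format trace output (for example when running
with GIT_TRACE2_PERF pointed at a file).  A minimal sketch of adding one more
sub-region, using a made-up region name, would look like:

        trace2_region_enter("merge", "some subregion", opt->repo);
        /* ... the work whose wall-clock time we want to attribute ... */
        trace2_region_leave("merge", "some subregion", opt->repo);

The enter/leave calls in the code below follow this pattern for the
individual phases being measured.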
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "plist grow", opt->repo);
|
2020-12-13 08:04:19 +00:00
|
|
|
ALLOC_GROW(plist.items, strmap_get_size(&opt->priv->paths), plist.alloc);
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "plist grow", opt->repo);
|
2020-12-13 08:04:19 +00:00
|
|
|
|
|
|
|
/* Put every entry from paths into plist, then sort */
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "plist copy", opt->repo);
|
2020-12-13 08:04:18 +00:00
|
|
|
strmap_for_each_entry(&opt->priv->paths, &iter, e) {
|
2020-12-13 08:04:19 +00:00
|
|
|
string_list_append(&plist, e->key)->util = e->value;
|
|
|
|
}
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "plist copy", opt->repo);
|
|
|
|
|
|
|
|
trace2_region_enter("merge", "plist special sort", opt->repo);
|
merge-ort: replace string_list_df_name_compare with faster alternative
Gathering accumulated times from trace2 output on the mega-renames
testcase, I saw the following timings (where I'm only showing a few
lines to highlight the portions of interest):
10.120 : label:incore_nonrecursive
4.462 : ..label:process_entries
3.143 : ....label:process_entries setup
2.988 : ......label:plist special sort
1.305 : ....label:processing
2.604 : ..label:collect_merge_info
2.018 : ..label:merge_start
1.018 : ..label:renames
In the above output, note that the 4.462 seconds for process_entries was
split as 3.143 seconds for "process_entries setup" and 1.305 seconds for
"processing" (and a little time for other stuff removed from the
highlight). Most of the "process_entries setup" time was spent on
"plist special sort" which corresponds to the following code:
trace2_region_enter("merge", "plist special sort", opt->repo);
plist.cmp = string_list_df_name_compare;
string_list_sort(&plist);
trace2_region_leave("merge", "plist special sort", opt->repo);
In other words, in a merge strategy that would be invoked by passing
"-sort" to either rebase or merge, sorting an array takes more time than
anything else. Serves me right for naming my merge strategy this way.
Rewrite the comparison function in a way that does not require finding
out the lengths of the strings when comparing them. While at it, tweak
the code for our specific case -- no need to handle a variety of modes,
for example. The combination of these changes reduced the time spent in
"plist special sort" by ~25% in the mega-renames case.
For the testcases mentioned in commit 557ac0350d ("merge-ort: begin
performance work; instrument with trace2_region_* calls", 2020-10-28),
this change improves the performance as follows:
                 Before                 After
no-renames:      5.622 s ± 0.059 s      5.235 s ± 0.042 s
mega-renames:    10.127 s ± 0.073 s     9.419 s ± 0.107 s
just-one-mega:   500.3 ms ± 3.8 ms      480.1 ms ± 3.9 ms
Signed-off-by: Elijah Newren <newren@gmail.com>
Reviewed-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
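To make the idea behind the faster comparison concrete, here is a small
self-contained sketch -- an illustration of the approach, not the exact code
of sort_dirs_next_to_their_children() in merge-ort.c -- of a comparator that
makes a directory sort immediately before its own entries by pretending every
path ends with '/':

#include <stdio.h>
#include <stdlib.h>

/*
 * Compare two paths as if each ended with '/', so that a directory
 * ("src/moduleB") sorts immediately before its own contents
 * ("src/moduleB/baz.c") instead of being separated from them by entries
 * like "src/moduleB.txt".  No string lengths are needed; we only scan
 * until the first differing byte.
 */
static int dirs_before_their_children(const char *a, const char *b)
{
        unsigned char c1, c2;

        while (*a && *a == *b) {
                a++;
                b++;
        }
        c1 = *a ? (unsigned char)*a : '/';
        c2 = *b ? (unsigned char)*b : '/';
        if (c1 != c2)
                return c1 < c2 ? -1 : 1;
        if (*a == *b)
                return 0;       /* identical paths */
        return *a ? 1 : -1;     /* one is a leading directory of the other */
}

static int cmp_paths(const void *va, const void *vb)
{
        return dirs_before_their_children(*(const char *const *)va,
                                          *(const char *const *)vb);
}

int main(void)
{
        const char *paths[] = {
                "src/moduleB/baz.c", "src/moduleB.txt", "tokens.txt",
                "src/moduleB", "Makefile", "src",
        };
        size_t i, n = sizeof(paths) / sizeof(paths[0]);

        qsort(paths, n, sizeof(*paths), cmp_paths);
        for (i = 0; i < n; i++)
                printf("%s\n", paths[i]);
        return 0;
}

Compiled standalone, this prints Makefile, src, src/moduleB.txt, src/moduleB,
src/moduleB/baz.c, tokens.txt -- each directory lands directly above the
entries underneath it, which is the ordering process_entries() relies on when
it later walks the list in reverse.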
2021-06-08 16:11:39 +00:00
|
|
|
plist.cmp = sort_dirs_next_to_their_children;
|
2020-12-13 08:04:19 +00:00
|
|
|
string_list_sort(&plist);
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "plist special sort", opt->repo);
|
|
|
|
|
|
|
|
trace2_region_leave("merge", "process_entries setup", opt->repo);
|
2020-12-13 08:04:19 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Iterate over the items in reverse order, so we can handle paths
|
|
|
|
* below a directory before needing to handle the directory itself.
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
Our order for processing of entries means that if we have a tree of
files that looks like
Makefile
src/moduleA/foo.c
src/moduleA/bar.c
src/moduleB/baz.c
src/moduleB/umm.c
tokens.txt
Then we will process paths in the order of the leftmost column below. I
have added two additional columns that help explain the algorithm that
follows; the 2nd column is there to remind us we have oid & mode info we
are tracking for each of these paths (which differs between the paths
which I'm not representing well here), and the third column annotates
the parent directory of the entry:
tokens.txt <version_info> ""
src/moduleB/umm.c <version_info> src/moduleB
src/moduleB/baz.c <version_info> src/moduleB
src/moduleB <version_info> src
src/moduleA/foo.c <version_info> src/moduleA
src/moduleA/bar.c <version_info> src/moduleA
src/moduleA <version_info> src
src <version_info> ""
Makefile <version_info> ""
When the parent directory changes, if it's a subdirectory of the previous
parent directory (e.g. "" -> src/moduleB) then we can just keep appending.
If the parent directory differs from the previous parent directory and is
not a subdirectory, then we should process that directory.
So, for example, when we get to this point:
tokens.txt <version_info> ""
src/moduleB/umm.c <version_info> src/moduleB
src/moduleB/baz.c <version_info> src/moduleB
and note that the next entry (src/moduleB) has a parent ("src") that
differs from the previous entries' parent and is not a subdirectory of
it, we should write out a tree for src/moduleB:
100644 blob <HASH> umm.c
100644 blob <HASH> baz.c
then pop all the entries under that directory while recording the new
hash for that directory, leaving us with
tokens.txt <version_info> ""
src/moduleB <new version_info> src
This process repeats until at the end we get to
tokens.txt <version_info> ""
src <new version_info> ""
Makefile <version_info> ""
and then we can write out the toplevel tree. Since we potentially have
entries in our string_list corresponding to multiple different toplevel
directories, e.g. a slightly different repository might have:
whizbang.txt <version_info> ""
tokens.txt <version_info> ""
src/moduleD <new version_info> src
src/moduleC <new version_info> src
src/moduleB <new version_info> src
src/moduleA/foo.c <version_info> src/moduleA
src/moduleA/bar.c <version_info> src/moduleA
When src/moduleA is popped off, we need to know that the "last
directory" reverts back to src, and how many entries in our string_list
are associated with that parent directory. So I use an auxiliary
offsets string_list which would have (parent_directory,offset)
information of the form
"" 0
src 2
src/moduleA 5
Whenever I write out a tree for a subdirectory, I set versions.nr to
the final offset value and then decrement offsets.nr...and then add
an entry to versions with a hash for the new directory.
The idea is relatively simple; there's just a lot of accounting needed
to implement it.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
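To make the versions/offsets accounting above concrete, here is a small
self-contained toy model -- made-up structs and placeholder hashes, not git's
actual string_list-based code -- of the single step in which src/moduleB is
finished and folded back into the list under its parent:

#include <stdio.h>

struct version { const char *name; const char *hash; };
struct offset  { const char *dir;  int start; };

static struct version versions[16];
static struct offset  offsets[16];
static int versions_nr, offsets_nr;

/*
 * The most recently opened directory is complete: emit a tree from its
 * entries, truncate versions[] back to where those entries began, close
 * the directory, and record the new tree in its place.
 */
static void pop_finished_directory(void)
{
        struct offset top = offsets[--offsets_nr];
        int i;

        printf("writing tree for %s:\n", top.dir);
        for (i = top.start; i < versions_nr; i++)
                printf("  %s %s\n", versions[i].hash, versions[i].name);

        versions_nr = top.start;
        versions[versions_nr].name = top.dir;
        versions[versions_nr].hash = "<new version_info>";
        versions_nr++;
}

int main(void)
{
        int i;

        /* state just before src/moduleB is finished (see walk-through above) */
        versions[versions_nr++] = (struct version){ "tokens.txt", "<version_info>" };
        versions[versions_nr++] = (struct version){ "umm.c", "<version_info>" };
        versions[versions_nr++] = (struct version){ "baz.c", "<version_info>" };
        offsets[offsets_nr++] = (struct offset){ "", 0 };
        offsets[offsets_nr++] = (struct offset){ "src/moduleB", 1 };

        pop_finished_directory();

        printf("remaining entries:\n");
        for (i = 0; i < versions_nr; i++)
                printf("  %s %s\n", versions[i].hash, versions[i].name);
        return 0;
}

The real code additionally records which parent directory is now the open one
(pushing a new offsets entry as needed), tracks file modes, and uses basenames
relative to the parent, but the versions_nr/offsets_nr dance above is the
accounting the message describes.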
2020-12-13 08:04:22 +00:00
|
|
|
*
|
|
|
|
* This allows us to write subtrees before we need to write trees,
|
|
|
|
* and it also enables sane handling of directory/file conflicts
|
|
|
|
* (because it allows us to know whether the directory is still in
|
|
|
|
* the way when it is time to process the file at the same path).
|
2020-12-13 08:04:19 +00:00
|
|
|
*/
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "processing", opt->repo);
|
merge-ort: add prefetching for content merges
Commit 7fbbcb21b1 ("diff: batch fetching of missing blobs", 2019-04-05)
introduced batching of fetching missing blobs, so that the diff
machinery would have one fetch subprocess grab N blobs instead of N
processes each grabbing 1.
However, the diff machinery is not the only thing in a merge that needs
to work on blobs. The 3-way content merges need them as well. Rather
than download all the blobs 1 at a time, prefetch all the blobs needed
for regular content merges.
This does not cover all possible paths in merge-ort that might need to
download blobs. Others include:
- The blob_unchanged() calls to avoid modify/delete conflicts (when
blob renormalization results in an "unchanged" file)
- Preliminary content merges needed for rename/add and
rename/rename(2to1) style conflicts. (Both of these types of
conflicts can result in nested conflict markers from the need to do
two levels of content merging; the first happens before our new
prefetch_for_content_merges() function.)
The first of these wouldn't be an extreme amount of work to support, and
even the second could be theoretically supported in batching, but all of
these cases seem unusual to me, and this is a minor performance
optimization anyway; in the worst case we only get some of the fetches
batched and have a few additional one-off fetches. So for now, just
handle the regular 3-way content merges in our prefetching.
For the testcase from the previous commit, the number of downloaded
objects remains at 63, but this drops the number of fetches needed from
32 down to 20, a sizeable reduction.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
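The shape of this optimization does not depend on git's transport details:
before the per-entry loop runs, one pass collects the ids of the blobs that
the upcoming three-way content merges will read, and a single batched request
downloads the ones that are missing.  A self-contained sketch of that
collect-then-batch pattern, with a made-up fetch_missing_batch() standing in
for the real batched fetch that prefetch_for_content_merges() performs:

#include <stdio.h>

#define MAX_BATCH 64

struct oid { char hex[41]; };   /* toy object id, not git's struct object_id */

/* stand-in: would ask the object store whether the blob is already local */
static int have_object_locally(const struct oid *oid)
{
        (void)oid;
        return 0;               /* pretend every needed blob is missing */
}

/* stand-in: one request that downloads many objects at once */
static void fetch_missing_batch(const struct oid *oids, int nr)
{
        (void)oids;
        printf("fetching %d blobs in a single batched request\n", nr);
}

static void queue_if_missing(struct oid *batch, int *nr, const struct oid *oid)
{
        if (*nr < MAX_BATCH && !have_object_locally(oid))
                batch[(*nr)++] = *oid;
}

int main(void)
{
        struct oid batch[MAX_BATCH];
        struct oid needed[6];   /* base/ours/theirs blobs of two content merges */
        int i, nr = 0;

        for (i = 0; i < 6; i++)
                snprintf(needed[i].hex, sizeof(needed[i].hex), "%040d", i);

        /* pass 1: walk the entries and note every blob a merge will read */
        for (i = 0; i < 6; i++)
                queue_if_missing(batch, &nr, &needed[i]);

        /* pass 2: one fetch instead of one download per missing blob */
        if (nr)
                fetch_missing_batch(batch, nr);
        return 0;
}

In the worst case a few merges still fall back to one-off downloads (the
rename/add and rename/rename(2to1) cases mentioned above), but the common
3-way content merges all hit the single batched request.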
2021-06-22 08:04:41 +00:00
|
|
|
prefetch_for_content_merges(opt, &plist);
|
2020-12-13 08:04:19 +00:00
|
|
|
for (entry = &plist.items[plist.nr-1]; entry >= plist.items; --entry) {
|
|
|
|
char *path = entry->string;
|
2020-12-13 08:04:18 +00:00
|
|
|
/*
|
|
|
|
* NOTE: mi may actually be a pointer to a conflict_info, but
|
|
|
|
* we have to check mi->clean first to see if it's safe to
|
|
|
|
* reassign to such a pointer type.
|
|
|
|
*/
|
2020-12-13 08:04:19 +00:00
|
|
|
struct merged_info *mi = entry->util;
|
2020-12-13 08:04:18 +00:00
|
|
|
|
2022-09-28 07:29:21 +00:00
|
|
|
if (write_completed_directory(opt, mi->directory_name,
|
|
|
|
&dir_metadata) < 0) {
|
|
|
|
ret = -1;
|
|
|
|
goto cleanup;
|
|
|
|
}
|
2020-12-13 08:04:20 +00:00
|
|
|
if (mi->clean)
|
|
|
|
record_entry_for_tree(&dir_metadata, path, mi);
|
|
|
|
else {
|
2020-12-13 08:04:19 +00:00
|
|
|
struct conflict_info *ci = (struct conflict_info *)mi;
|
merge-ort: return early when failing to write a blob
In the previous commit, we fixed a segmentation fault when a tree object
could not be written.
However, before the tree object is written, `merge-ort` wants to write
out a blob object (except in cases where the merge results in a blob
that already exists in the database). And this can fail, too, but we
ignore that write failure so far.
Let's pay close attention and error out early if the blob could not be
written. This reduces the error output of t4301.25 ("merge-ort fails
gracefully in a read-only repository") from:
error: insufficient permission for adding an object to repository database ./objects
error: error: Unable to add numbers to database
error: insufficient permission for adding an object to repository database ./objects
error: error: Unable to add greeting to database
error: insufficient permission for adding an object to repository database ./objects
fatal: failure to merge
to:
error: insufficient permission for adding an object to repository database ./objects
error: error: Unable to add numbers to database
fatal: failure to merge
This is _not_ just a cosmetic change: one might assume that the
operation would have failed anyway at the point when the new tree
object is written (and the corresponding tree object _will_ be new if it
contains a blob that is new), but that is not so. As pointed out by
Elijah Newren, when Git has previously been allowed to add loose objects
via `sudo` calls, it is very possible that the blob object cannot be
written (because the corresponding `.git/objects/??/` directory may be
owned by `root`) but the tree object can be written (because the
corresponding objects directory is owned by the current user). This
would result in a corrupt repository because it is missing the blob
object, and with this here patch we prevent that.
Note: This patch adjusts two variable declarations from `unsigned` to
`int` because their purpose is to hold the return value of
`handle_content_merge()`, which is of type `int`. The existing users of
those variables are only interested whether that variable is zero or
non-zero, therefore this type change does not affect the existing code.
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-09-28 07:29:22 +00:00
|
|
|
if (process_entry(opt, path, ci, &dir_metadata) < 0) {
|
|
|
|
ret = -1;
|
|
|
|
goto cleanup;
|
|
|
|
}
|
2020-12-13 08:04:19 +00:00
|
|
|
}
|
2020-12-13 08:04:18 +00:00
|
|
|
}
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "processing", opt->repo);
|
2020-12-13 08:04:18 +00:00
|
|
|
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "process_entries cleanup", opt->repo);
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
2020-12-13 08:04:22 +00:00
|
|
|
if (dir_metadata.offsets.nr != 1 ||
|
|
|
|
(uintptr_t)dir_metadata.offsets.items[0].util != 0) {
|
2022-03-07 15:27:08 +00:00
|
|
|
printf("dir_metadata.offsets.nr = %"PRIuMAX" (should be 1)\n",
|
|
|
|
(uintmax_t)dir_metadata.offsets.nr);
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
2020-12-13 08:04:22 +00:00
|
|
|
printf("dir_metadata.offsets.items[0].util = %u (should be 0)\n",
|
|
|
|
(unsigned)(uintptr_t)dir_metadata.offsets.items[0].util);
|
|
|
|
fflush(stdout);
|
|
|
|
BUG("dir_metadata accounting completely off; shouldn't happen");
|
|
|
|
}
|
2022-09-28 07:29:21 +00:00
|
|
|
if (write_tree(result_oid, &dir_metadata.versions, 0,
|
|
|
|
opt->repo->hash_algo->rawsz) < 0)
|
|
|
|
ret = -1;
|
|
|
|
cleanup:
|
2020-12-13 08:04:19 +00:00
|
|
|
string_list_clear(&plist, 0);
|
2020-12-13 08:04:20 +00:00
|
|
|
string_list_clear(&dir_metadata.versions, 0);
|
merge-ort: step 3 of tree writing -- handling subdirectories as we go
2020-12-13 08:04:22 +00:00
|
|
|
string_list_clear(&dir_metadata.offsets, 0);
|
merge-ort: begin performance work; instrument with trace2_region_* calls
Add some timing instrumentation for both merge-ort and diffcore-rename;
I used these to measure and optimize performance in both, and several
future patch series will build on these to reduce the timings of some
select testcases.
=== Setup ===
The primary testcase I used involved rebasing a random topic in the
linux kernel (consisting of 35 patches) against an older version. I
added two variants, one where I rename a toplevel directory, and another
where I only rebase one patch instead of the whole topic. The setup is
as follows:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ git branch hwmon-updates fd8bdb23b91876ac1e624337bb88dc1dcc21d67e
$ git branch hwmon-just-one fd8bdb23b91876ac1e624337bb88dc1dcc21d67e~34
$ git branch base 4703d9119972bf586d2cca76ec6438f819ffa30e
$ git switch -c 5.4-renames v5.4
$ git mv drivers pilots # Introduce over 26,000 renames
$ git commit -m "Rename drivers/ to pilots/"
$ git config merge.renameLimit 30000
$ git config merge.directoryRenames true
=== Testcases ===
Now with REBASE standing for either "git rebase [--merge]" (using
merge-recursive) or "test-tool fast-rebase" (using merge-ort), the
testcases are:
Testcase #1: no-renames
$ git checkout v5.4^0
$ REBASE --onto HEAD base hwmon-updates
Note: technically the name is misleading; there are some renames, but
very few. Rename detection only takes about half the overall time.
Testcase #2: mega-renames
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-updates
Testcase #3: just-one-mega
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-just-one
=== Timing results ===
Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
10 runs for the other two cases):
merge-recursive merge-ort
no-renames: 18.912 s ± 0.174 s 14.263 s ± 0.053 s
mega-renames: 5964.031 s ± 10.459 s 5504.231 s ± 5.150 s
just-one-mega: 149.583 s ± 0.751 s 158.534 s ± 0.498 s
A single re-run of each with some breakdowns:
--- no-renames ---
merge-recursive merge-ort
overall runtime: 19.302 s 14.257 s
inexact rename detection: 7.603 s 7.906 s
everything else: 11.699 s 6.351 s
--- mega-renames ---
merge-recursive merge-ort
overall runtime: 5950.195 s 5499.672 s
inexact rename detection: 5746.309 s 5487.120 s
everything else: 203.886 s 17.552 s
--- just-one-mega ---
merge-recursive merge-ort
overall runtime: 151.001 s 158.582 s
inexact rename detection: 143.448 s 157.835 s
everything else: 7.553 s 0.747 s
=== Timing observations ===
0) Maximum speedup
The "everything else" row represents the maximum speedup we could
achieve if we were to somehow infinitely parallelize inexact rename
detection, but leave everything else alone. The fact that this is so
much smaller than the real runtime (even in the case with virtually no
renames) makes it clear just how overwhelmingly large the time spent on
rename detection can be.
1) no-renames
1a) merge-ort is faster than merge-recursive, which is nice. However,
this still should not be considered good enough. Although the "merge"
backend to rebase (merge-recursive) is sometimes faster than the "apply"
backend, this is one of those cases where it is not. In fact, even
merge-ort is slower. The "apply" backend can complete this testcase in
6.940 s ± 0.485 s
which is about 2x faster than merge-ort and 3x faster than
merge-recursive. One goal of the merge-ort performance work will be to
make it faster than git-am on this (and similar) testcases.
2) mega-renames
2a) Obviously rename detection is a huge cost; it's where most of the
time is spent. We need to cut that down. If we could somehow infinitely
parallelize it and drive its time to 0, the merge-recursive time would
drop to about 204s, and the merge-ort time would drop to about 17s. I
think this particular stat shows I've subtly baked a couple of
performance improvements into merge-ort and into fast-rebase already.
3) just-one-mega
3a) Not much to say here; it just gives some flavor for how rebasing
only one patch compares to rebasing 35.
=== Goals ===
This patch is obviously just the beginning. Here are some of my goals
that this measurement will help us achieve:
* Drive the cost of rename detection down considerably for merges
* After the above has been achieved, see if there are other slowness
factors (which would have previously been overshadowed by rename
detection costs) which we can then focus on and also optimize.
* Ensure our rebase testcase that requires little rename detection
is noticeably faster with merge-ort than with apply-based rebase.
Signed-off-by: Elijah Newren <newren@gmail.com>
Acked-by: Taylor Blau <ttaylorr@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-24 06:01:12 +00:00
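The instrumentation itself is just paired trace2 region calls bracketing
each phase; the surrounding code shows the real call sites, but as a
minimal sketch (the label below is chosen purely for illustration) each
region looks like:

	trace2_region_enter("merge", "some phase", opt->repo);
	/* ... do the work for this phase ... */
	trace2_region_leave("merge", "some phase", opt->repo);

Per-region timings can then be collected by running the command with
GIT_TRACE2_PERF set to 1 (stderr) or to a path for a perf trace file.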
	trace2_region_leave("merge", "process_entries cleanup", opt->repo);
	return ret;
}

/*** Function Grouping: functions related to merge_switch_to_result() ***/

static int checkout(struct merge_options *opt,
		    struct tree *prev,
		    struct tree *next)
{
	/* Switch the index/working copy from old to new */
	int ret;
	struct tree_desc trees[2];
	struct unpack_trees_options unpack_opts;

	memset(&unpack_opts, 0, sizeof(unpack_opts));
	unpack_opts.head_idx = -1;
	unpack_opts.src_index = opt->repo->index;
	unpack_opts.dst_index = opt->repo->index;

	setup_unpack_trees_porcelain(&unpack_opts, "merge");

	/*
	 * NOTE: if this were just "git checkout" code, we would probably
	 * read or refresh the cache and check for a conflicted index, but
	 * builtin/merge.c or sequencer.c really needs to read the index
	 * and check for conflicted entries before starting merging for a
	 * good user experience (no sense waiting for merges/rebases before
	 * erroring out), so there's no reason to duplicate that work here.
	 */

	/* 2-way merge to the new branch */
	unpack_opts.update = 1;
	unpack_opts.merge = 1;
	unpack_opts.quiet = 0; /* FIXME: sequencer might want quiet? */
	unpack_opts.verbose_update = (opt->verbosity > 2);
	unpack_opts.fn = twoway_merge;
	unpack_opts.preserve_ignored = 0; /* FIXME: !opts->overwrite_ignore */
	parse_tree(prev);
	init_tree_desc(&trees[0], prev->buffer, prev->size);
	parse_tree(next);
	init_tree_desc(&trees[1], next->buffer, next->size);

	ret = unpack_trees(2, trees, &unpack_opts);
	clear_unpack_trees_porcelain(&unpack_opts);
	return ret;
}

static int record_conflicted_index_entries(struct merge_options *opt)
{
	struct hashmap_iter iter;
	struct strmap_entry *e;
	struct index_state *index = opt->repo->index;
	struct checkout state = CHECKOUT_INIT;
	int errs = 0;
	int original_cache_nr;

	if (strmap_empty(&opt->priv->conflicted))
		return 0;

	/*
	 * We are in a conflicted state. These conflicts might be inside
	 * sparse-directory entries, so check if any entries are outside
	 * of the sparse-checkout cone preemptively.
	 *
	 * We set original_cache_nr below, but that might change if
	 * index_name_pos() calls ask for paths within sparse directories.
	 */
	strmap_for_each_entry(&opt->priv->conflicted, &iter, e) {
		if (!path_in_sparse_checkout(e->key, index)) {
			ensure_full_index(index);
			break;
		}
	}

	/* If any entries have skip_worktree set, we'll have to check 'em out */
	state.force = 1;
	state.quiet = 1;
	state.refresh_cache = 1;
	state.istate = index;
	original_cache_nr = index->cache_nr;

	/* Append every entry from conflicted into index, then sort */
	strmap_for_each_entry(&opt->priv->conflicted, &iter, e) {
		const char *path = e->key;
		struct conflict_info *ci = e->value;
		int pos;
		struct cache_entry *ce;
		int i;

		VERIFY_CI(ci);

		/*
		 * The index will already have a stage=0 entry for this path,
		 * because we created an as-merged-as-possible version of the
		 * file and checkout() moved the working copy and index over
		 * to that version.
		 *
		 * However, previous iterations through this loop will have
		 * added unstaged entries to the end of the cache which
		 * ignore the standard alphabetical ordering of cache
		 * entries and break invariants needed for index_name_pos()
		 * to work.  However, we know the entry we want is before
		 * those appended cache entries, so do a temporary swap on
		 * cache_nr to only look through entries of interest.
		 */
		SWAP(index->cache_nr, original_cache_nr);
		pos = index_name_pos(index, path, strlen(path));
		SWAP(index->cache_nr, original_cache_nr);
		if (pos < 0) {
			if (ci->filemask != 1)
				BUG("Conflicted %s but nothing in basic working tree or index; this shouldn't happen", path);
			cache_tree_invalidate_path(index, path);
		} else {
			ce = index->cache[pos];

			/*
			 * Clean paths with CE_SKIP_WORKTREE set will not be
			 * written to the working tree by the unpack_trees()
			 * call in checkout().  Our conflicted entries would
			 * have appeared clean to that code since we ignored
			 * the higher order stages.  Thus, we need to override
			 * the CE_SKIP_WORKTREE bit and manually write those
			 * files to the working disk here.
			 */
			if (ce_skip_worktree(ce))
				errs |= checkout_entry(ce, &state, NULL, NULL);

			/*
			 * Mark this cache entry for removal and instead add
			 * new stage>0 entries corresponding to the
			 * conflicts.  If there are many conflicted entries, we
			 * want to avoid memmove'ing O(NM) entries by
			 * inserting the new entries one at a time.  So,
			 * instead, we just add the new cache entries to the
			 * end (ignoring normal index requirements on sort
			 * order) and sort the index once we're all done.
			 */
			ce->ce_flags |= CE_REMOVE;
		}

		for (i = MERGE_BASE; i <= MERGE_SIDE2; i++) {
			struct version_info *vi;
			if (!(ci->filemask & (1ul << i)))
				continue;
			vi = &ci->stages[i];
			ce = make_cache_entry(index, vi->mode, &vi->oid,
					      path, i+1, 0);
			add_index_entry(index, ce, ADD_CACHE_JUST_APPEND);
		}
	}

	/*
	 * Remove the unused cache entries (and invalidate the relevant
	 * cache-trees), then sort the index entries to get the conflicted
	 * entries we added to the end into their right locations.
	 */
	remove_marked_cache_entries(index, 1);
	/*
	 * No need for STABLE_QSORT -- cmp_cache_name_compare sorts primarily
	 * on filename and secondarily on stage, and (name, stage #) are a
	 * unique tuple.
	 */
	QSORT(index->cache, index->cache_nr, cmp_cache_name_compare);

	return errs;
}
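For reference, after this function runs the conflicted paths are
represented in the index by the usual higher-order stages (1 = merge
base, 2 = ours, 3 = theirs), i.e. the same data "git ls-files -u"
reports.  A minimal illustrative loop over such entries (not code from
this file; "istate" is assumed to be the index_state of interest) could
look like:

	for (i = 0; i < istate->cache_nr; i++) {
		const struct cache_entry *ce = istate->cache[i];
		if (!ce_stage(ce))
			continue; /* stage 0 entries merged cleanly */
		printf("%06o %s %d\t%s\n", ce->ce_mode,
		       oid_to_hex(&ce->oid), ce_stage(ce), ce->name);
	}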

static void print_submodule_conflict_suggestion(struct string_list *csub) {
	struct string_list_item *item;
	struct strbuf msg = STRBUF_INIT;
	struct strbuf tmp = STRBUF_INIT;
	struct strbuf subs = STRBUF_INIT;

	if (!csub->nr)
		return;

	strbuf_add_separated_string_list(&subs, " ", csub);
	for_each_string_list_item(item, csub) {
		struct conflicted_submodule_item *util = item->util;

		/*
		 * NEEDSWORK: The steps to resolve these errors deserve a more
		 * detailed explanation than what is currently printed below.
		 */
		if (util->flag == CONFLICT_SUBMODULE_NOT_INITIALIZED ||
		    util->flag == CONFLICT_SUBMODULE_HISTORY_NOT_AVAILABLE)
			continue;

		/*
		 * TRANSLATORS: This is a line of advice to resolve a merge
		 * conflict in a submodule. The first argument is the submodule
		 * name, and the second argument is the abbreviated id of the
		 * commit that needs to be merged.  For example:
		 *  - go to submodule (mysubmodule), and either merge commit abc1234"
		 */
		strbuf_addf(&tmp, _(" - go to submodule (%s), and either merge commit %s\n"
				    " or update to an existing commit which has merged those changes\n"),
			    item->string, util->abbrev);
	}

	/*
	 * TRANSLATORS: This is a detailed message for resolving submodule
	 * conflicts.  The first argument is string containing one step per
	 * submodule.  The second is a space-separated list of submodule names.
	 */
	strbuf_addf(&msg,
		    _("Recursive merging with submodules currently only supports trivial cases.\n"
		      "Please manually handle the merging of each conflicted submodule.\n"
		      "This can be accomplished with the following steps:\n"
		      "%s"
		      " - come back to superproject and run:\n\n"
		      " git add %s\n\n"
		      " to record the above merge or update\n"
		      " - resolve any other conflicts in the superproject\n"
		      " - commit the resulting index in the superproject\n"),
		    tmp.buf, subs.buf);

	printf("%s", msg.buf);

	strbuf_release(&subs);
	strbuf_release(&tmp);
	strbuf_release(&msg);
}

void merge_display_update_messages(struct merge_options *opt,
				   int detailed,
				   struct merge_result *result)
{
	struct merge_options_internal *opti = result->priv;
	struct hashmap_iter iter;
	struct strmap_entry *e;
	struct string_list olist = STRING_LIST_INIT_NODUP;

	if (opt->record_conflict_msgs_as_headers)
		BUG("Either display conflict messages or record them as headers, not both");

	trace2_region_enter("merge", "display messages", opt->repo);

	/* Hack to pre-allocate olist to the desired size */
	ALLOC_GROW(olist.items, strmap_get_size(&opti->conflicts),
		   olist.alloc);

	/* Put every entry from output into olist, then sort */
	strmap_for_each_entry(&opti->conflicts, &iter, e) {
		string_list_append(&olist, e->key)->util = e->value;
	}
	string_list_sort(&olist);

	/* Iterate over the items, printing them */
	for (int path_nr = 0; path_nr < olist.nr; ++path_nr) {
		struct string_list *conflicts = olist.items[path_nr].util;
		for (int i = 0; i < conflicts->nr; i++) {
			struct logical_conflict_info *info =
				conflicts->items[i].util;

			if (detailed) {
				printf("%lu", (unsigned long)info->paths.nr);
				putchar('\0');
				for (int n = 0; n < info->paths.nr; n++) {
					fputs(info->paths.v[n], stdout);
					putchar('\0');
				}
				fputs(type_short_descriptions[info->type],
				      stdout);
				putchar('\0');
			}
			puts(conflicts->items[i].string);
			if (detailed)
				putchar('\0');
		}
	}
	string_list_clear(&olist, 0);

	print_submodule_conflict_suggestion(&opti->conflicted_submodules);

	/* Also include needed rename limit adjustment now */
	diff_warn_rename_limit("merge.renamelimit",
			       opti->renames.needed_limit, 0);

	trace2_region_leave("merge", "display messages", opt->repo);
}

void merge_get_conflicted_files(struct merge_result *result,
				struct string_list *conflicted_files)
{
	struct hashmap_iter iter;
	struct strmap_entry *e;
	struct merge_options_internal *opti = result->priv;

	strmap_for_each_entry(&opti->conflicted, &iter, e) {
		const char *path = e->key;
		struct conflict_info *ci = e->value;
		int i;

		VERIFY_CI(ci);

		for (i = MERGE_BASE; i <= MERGE_SIDE2; i++) {
			struct stage_info *si;

			if (!(ci->filemask & (1ul << i)))
				continue;

			si = xmalloc(sizeof(*si));
			si->stage = i+1;
			si->mode = ci->stages[i].mode;
			oidcpy(&si->oid, &ci->stages[i].oid);
			string_list_append(conflicted_files, path)->util = si;
		}
	}
	/* string_list_sort() uses a stable sort, so we're good */
	string_list_sort(conflicted_files);
}
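A caller that wants the per-stage (mode, oid, stage) data for each
conflicted path from an in-core merge can consume the list along these
lines; this is an illustrative sketch which assumes "result" holds the
output of a prior merge_incore_*() call, with error handling omitted:

	struct string_list conflicted = STRING_LIST_INIT_NODUP;
	struct string_list_item *item;

	merge_get_conflicted_files(&result, &conflicted);
	for_each_string_list_item(item, &conflicted) {
		struct stage_info *si = item->util;
		printf("%06o %s %d\t%s\n", (unsigned int)si->mode,
		       oid_to_hex(&si->oid), si->stage, item->string);
	}
	string_list_clear(&conflicted, 1); /* also frees each stage_info */

Note that a path appears once per stage present, exactly as the loop
above appends it.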

void merge_switch_to_result(struct merge_options *opt,
			    struct tree *head,
			    struct merge_result *result,
			    int update_worktree_and_index,
			    int display_update_msgs)
{
	assert(opt->priv == NULL);
	if (result->clean >= 0 && update_worktree_and_index) {
		const char *filename;
		FILE *fp;

		trace2_region_enter("merge", "checkout", opt->repo);
		if (checkout(opt, head, result->tree)) {
			/* failure to function */
			result->clean = -1;
			merge_finalize(opt, result);
			trace2_region_leave("merge", "checkout", opt->repo);
			return;
		}
		trace2_region_leave("merge", "checkout", opt->repo);

		trace2_region_enter("merge", "record_conflicted", opt->repo);
		opt->priv = result->priv;
		if (record_conflicted_index_entries(opt)) {
			/* failure to function */
			opt->priv = NULL;
			result->clean = -1;
			merge_finalize(opt, result);
			trace2_region_leave("merge", "record_conflicted",
					    opt->repo);
			return;
		}
		opt->priv = NULL;
		trace2_region_leave("merge", "record_conflicted", opt->repo);

		trace2_region_enter("merge", "write_auto_merge", opt->repo);
		filename = git_path_auto_merge(opt->repo);
		fp = xfopen(filename, "w");
		fprintf(fp, "%s\n", oid_to_hex(&result->tree->object.oid));
		fclose(fp);
		trace2_region_leave("merge", "write_auto_merge", opt->repo);
	}
	if (display_update_msgs)
		merge_display_update_messages(opt, /* detailed */ 0, result);

	merge_finalize(opt, result);
}

void merge_finalize(struct merge_options *opt,
		    struct merge_result *result)
{
	if (opt->renormalize)
		git_attr_set_direction(GIT_ATTR_CHECKIN);
	assert(opt->priv == NULL);

merge-ort: fix calling merge_finalize() with no intermediate merge
If some code sets up the data structures for a merge, but then never
actually performs one before calling merge_finalize(), then
merge_finalize() wouldn't notice that result->priv was NULL and
return early, resulting in following that NULL pointer and getting
a segfault. There is currently no code in the git codebase that does
this, but this issue was found during testing of some proposed patches
that had the following structure:
struct merge_options merge_opt;
struct merge_result result;
init_merge_options(&merge_opt, the_repository);
memset(&result, 0, sizeof(result));
<do N merges, for some value of N>
merge_finalize(&merge_opt, &result);
where some flags could cause the code to have N=0, i.e. doing no merges.
Add a check for result->priv being NULL and return early to avoid a
segfault in these kinds of cases.
While at it, ensure the FREE_AND_NULL() in the function does something
useful with the nulling aspect, namely sets result->priv to NULL rather
than a mere temporary.
Reported-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-04-22 20:22:10 +00:00
	if (result->priv) {
		clear_or_reinit_internal_opts(result->priv, 0);
		FREE_AND_NULL(result->priv);
	}
}
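Putting this function grouping together, the expected calling sequence
for the in-core API is roughly the sketch below.  It is an illustration
rather than code from any particular caller (see builtin/merge.c and
sequencer.c for real ones), and it assumes merge_bases, side1, side2,
and head_tree have already been set up by the caller:

	struct merge_options opt;
	struct merge_result result;

	init_merge_options(&opt, the_repository);
	opt.branch1 = "HEAD";
	opt.branch2 = "topic";
	memset(&result, 0, sizeof(result)); /* required; see merge_start() */

	merge_incore_recursive(&opt, merge_bases, side1, side2, &result);

	/*
	 * result.clean is positive for a clean merge, 0 when there were
	 * conflicts, and negative on error.  merge_switch_to_result()
	 * updates the worktree and index, records conflicted stages,
	 * prints the conflict messages, and calls merge_finalize() itself.
	 */
	merge_switch_to_result(&opt, head_tree, &result, 1, 1);

A caller that only needs the resulting tree (result.tree), e.g. for a
server-side merge, can skip merge_switch_to_result() and call
merge_finalize() directly once it is done with the result.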

/*** Function Grouping: helper functions for merge_incore_*() ***/

static struct tree *shift_tree_object(struct repository *repo,
				      struct tree *one, struct tree *two,
				      const char *subtree_shift)
{
	struct object_id shifted;

	if (!*subtree_shift) {
		shift_tree(repo, &one->object.oid, &two->object.oid, &shifted, 0);
	} else {
		shift_tree_by(repo, &one->object.oid, &two->object.oid, &shifted,
			      subtree_shift);
	}
	if (oideq(&two->object.oid, &shifted))
		return two;
	return lookup_tree(repo, &shifted);
}

static inline void set_commit_tree(struct commit *c, struct tree *t)
{
	c->maybe_tree = t;
}

static struct commit *make_virtual_commit(struct repository *repo,
					  struct tree *tree,
					  const char *comment)
{
	struct commit *commit = alloc_commit_node(repo);

	set_merge_remote_desc(commit, comment, (struct object *)commit);
	set_commit_tree(commit, tree);
	commit->object.parsed = 1;
	return commit;
}

static void merge_start(struct merge_options *opt, struct merge_result *result)
{
	struct rename_info *renames;
	int i;
	struct mem_pool *pool = NULL;

	/* Sanity checks on opt */
	trace2_region_enter("merge", "sanity checks", opt->repo);
	assert(opt->repo);

	assert(opt->branch1 && opt->branch2);

	assert(opt->detect_directory_renames >= MERGE_DIRECTORY_RENAMES_NONE &&
	       opt->detect_directory_renames <= MERGE_DIRECTORY_RENAMES_TRUE);
	assert(opt->rename_limit >= -1);
	assert(opt->rename_score >= 0 && opt->rename_score <= MAX_SCORE);
	assert(opt->show_rename_progress >= 0 && opt->show_rename_progress <= 1);

	assert(opt->xdl_opts >= 0);
	assert(opt->recursive_variant >= MERGE_VARIANT_NORMAL &&
	       opt->recursive_variant <= MERGE_VARIANT_THEIRS);

	if (opt->msg_header_prefix)
		assert(opt->record_conflict_msgs_as_headers);

	/*
	 * detect_renames, verbosity, buffer_output, and obuf are ignored
	 * fields that were used by "recursive" rather than "ort" -- but
	 * sanity check them anyway.
	 */
	assert(opt->detect_renames >= -1 &&
	       opt->detect_renames <= DIFF_DETECT_COPY);
	assert(opt->verbosity >= 0 && opt->verbosity <= 5);
	assert(opt->buffer_output <= 2);
	assert(opt->obuf.len == 0);

	assert(opt->priv == NULL);
merge-ort: avoid accidental API mis-use
Previously, callers of the merge-ort API could have passed an
uninitialized value for struct merge_result *result. However, we want
to check result to see if it has cached renames from a previous merge
that we can reuse; such values would be found behind result->priv.
However, if result->priv is uninitialized, attempting to access behind
it will give a segfault. So, we need result->priv to be NULL (which
will be the case if the caller does a memset(&result, 0)), or be written
by a previous call to the merge-ort machinery. Documenting this
requirement may help, but despite being the person who introduced this
requirement, I still missed it once and it did not fail in a very clear
way and led to a long debugging session.
Add a _properly_initialized field to merge_result; that value will be
0 if the caller zero'ed the merge_result, it will be set to a very
specific value by a previous run by the merge-ort machinery, and if it's
uninitialized it will most likely either be 0 or some value that does
not match the specific one we'd expect allowing us to throw a much more
meaningful error.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-20 06:09:37 +00:00
	if (result->_properly_initialized != 0 &&
	    result->_properly_initialized != RESULT_INITIALIZED)
		BUG("struct merge_result passed to merge_incore_*recursive() must be zeroed or filled with values from a previous run");
	assert(!!result->priv == !!result->_properly_initialized);
	if (result->priv) {
		opt->priv = result->priv;
		result->priv = NULL;
		/*
		 * opt->priv non-NULL means we had results from a previous
		 * run; do a few sanity checks that user didn't mess with
		 * it in an obvious fashion.
		 */
		assert(opt->priv->call_depth == 0);
		assert(!opt->priv->toplevel_dir ||
		       0 == strlen(opt->priv->toplevel_dir));
	}
merge-ort: begin performance work; instrument with trace2_region_* calls
Add some timing instrumentation for both merge-ort and diffcore-rename;
I used these to measure and optimize performance in both, and several
future patch series will build on these to reduce the timings of some
select testcases.
=== Setup ===
The primary testcase I used involved rebasing a random topic in the
linux kernel (consisting of 35 patches) against an older version. I
added two variants, one where I rename a toplevel directory, and another
where I only rebase one patch instead of the whole topic. The setup is
as follows:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ git branch hwmon-updates fd8bdb23b91876ac1e624337bb88dc1dcc21d67e
$ git branch hwmon-just-one fd8bdb23b91876ac1e624337bb88dc1dcc21d67e~34
$ git branch base 4703d9119972bf586d2cca76ec6438f819ffa30e
$ git switch -c 5.4-renames v5.4
$ git mv drivers pilots # Introduce over 26,000 renames
$ git commit -m "Rename drivers/ to pilots/"
$ git config merge.renameLimit 30000
$ git config merge.directoryRenames true
=== Testcases ===
Now with REBASE standing for either "git rebase [--merge]" (using
merge-recursive) or "test-tool fast-rebase" (using merge-ort), the
testcases are:
Testcase #1: no-renames
$ git checkout v5.4^0
$ REBASE --onto HEAD base hwmon-updates
Note: technically the name is misleading; there are some renames, but
very few. Rename detection only takes about half the overall time.
Testcase #2: mega-renames
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-updates
Testcase #3: just-one-mega
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-just-one
=== Timing results ===
Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
10 runs for the other two cases):
merge-recursive merge-ort
no-renames: 18.912 s ± 0.174 s 14.263 s ± 0.053 s
mega-renames: 5964.031 s ± 10.459 s 5504.231 s ± 5.150 s
just-one-mega: 149.583 s ± 0.751 s 158.534 s ± 0.498 s
A single re-run of each with some breakdowns:
--- no-renames ---
merge-recursive merge-ort
overall runtime: 19.302 s 14.257 s
inexact rename detection: 7.603 s 7.906 s
everything else: 11.699 s 6.351 s
--- mega-renames ---
merge-recursive merge-ort
overall runtime: 5950.195 s 5499.672 s
inexact rename detection: 5746.309 s 5487.120 s
everything else: 203.886 s 17.552 s
--- just-one-mega ---
merge-recursive merge-ort
overall runtime: 151.001 s 158.582 s
inexact rename detection: 143.448 s 157.835 s
everything else: 7.553 s 0.747 s
=== Timing observations ===
0) Maximum speedup
The "everything else" row represents the maximum speedup we could
achieve if we were to somehow infinitely parallelize inexact rename
detection, but leave everything else alone. The fact that this is so
much smaller than the real runtime (even in the case with virtually no
renames) makes it clear just how overwhelmingly large the time spent on
rename detection can be.
1) no-renames
1a) merge-ort is faster than merge-recursive, which is nice. However,
this still should not be considered good enough. Although the "merge"
backend to rebase (merge-recursive) is sometimes faster than the "apply"
backend, this is one of those cases where it is not. In fact, even
merge-ort is slower. The "apply" backend can complete this testcase in
6.940 s ± 0.485 s
which is about 2x faster than merge-ort and 3x faster than
merge-recursive. One goal of the merge-ort performance work will be to
make it faster than git-am on this (and similar) testcases.
2) mega-renames
2a) Obviously rename detection is a huge cost; it's where most of the time
is spent. We need to cut that down. If we could somehow infinitely
parallelize it and drive its time to 0, the merge-recursive time would
drop to about 204s, and the merge-ort time would drop to about 17s. I
think this particular stat shows I've subtly baked a couple performance
improvements into merge-ort and into fast-rebase already.
3) just-one-mega
3a) not much to say here, it just gives some flavor for how rebasing
only one patch compares to rebasing 35.
=== Goals ===
This patch is obviously just the beginning. Here are some of my goals
that this measurement will help us achieve:
* Drive the cost of rename detection down considerably for merges
* After the above has been achieved, see if there are other slowness
factors (which would have previously been overshadowed by rename
detection costs) which we can then focus on and also optimize.
* Ensure our rebase testcase that requires little rename detection
is noticeably faster with merge-ort than with apply-based rebase.
Signed-off-by: Elijah Newren <newren@gmail.com>
Acked-by: Taylor Blau <ttaylorr@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-24 06:01:12 +00:00
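The instrumentation pattern itself is just paired region calls; a minimal
sketch follows (the "some_phase" label is illustrative, and the real calls
appear throughout merge_start() and merge_ort_nonrecursive_internal() below):

/* Sketch of the trace2 region pattern used for the timings above. */
static void timed_phase(struct merge_options *opt)
{
        trace2_region_enter("merge", "some_phase", opt->repo);
        /* ... do the work for this phase ... */
        trace2_region_leave("merge", "some_phase", opt->repo);
}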
        trace2_region_leave("merge", "sanity checks", opt->repo);

2020-12-13 08:04:10 +00:00
merge-ort: use histogram diff
In my cursory investigation, histogram diffs are about 2% slower than
Myers diffs. Others have probably done more detailed benchmarks. But,
in short, histogram diffs have been around for years and in a number of
cases provide obviously better looking diffs where Myers diffs are
unintelligible but the performance hit has kept them from becoming the
default.
However, there are real merge bugs we know about that have triggered on
git.git and linux.git, which I don't have a clue how to address without
the additional information that I believe is provided by histogram
diffs. See the following:
https://lore.kernel.org/git/20190816184051.GB13894@sigill.intra.peff.net/
https://lore.kernel.org/git/CABPp-BHvJHpSJT7sdFwfNcPn_sOXwJi3=o14qjZS3M8Rzcxe2A@mail.gmail.com/
https://lore.kernel.org/git/CABPp-BGtez4qjbtFT1hQoREfcJPmk9MzjhY5eEq1QhXT23tFOw@mail.gmail.com/
I don't like mismerges. I really don't like silent mismerges. While I
am sometimes willing to make a performance-versus-correctness tradeoff, I'm
much more interested in correctness in general. I want to fix the above
bugs. I have not yet started doing so, but I believe histogram diff at
least gives me an angle. Unfortunately, I can't rely on using the
information from histogram diff unless it's in use. And it hasn't been
used because of a few-percent performance hit.
In testcases I have looked at, merge-ort is _much_ faster than
merge-recursive for non-trivial merges/rebases/cherry-picks. As such,
this is a golden opportunity to switch out the underlying diff algorithm
(at least the one used by the merge machinery; git-diff and git-log are
separate questions); doing so will allow me to get additional data and
improved diffs, and I believe it will help me fix the above bugs at some
point in the future.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-13 08:04:11 +00:00
        /* Default to histogram diff. Actually, just hardcode it...for now. */
        opt->xdl_opts = DIFF_WITH_ALG(opt, HISTOGRAM_DIFF);

2021-03-20 00:03:45 +00:00
        /* Handle attr direction stuff for renormalization */
        if (opt->renormalize)
                git_attr_set_direction(GIT_ATTR_CHECKOUT);

2020-12-13 08:04:10 +00:00
        /* Initialization of opt->priv, our internal merge data */
2021-01-24 06:01:12 +00:00
        trace2_region_enter("merge", "allocate/init", opt->repo);
2021-01-24 06:01:10 +00:00
        if (opt->priv) {
                clear_or_reinit_internal_opts(opt->priv, 1);
2022-08-04 19:51:05 +00:00
                string_list_init_nodup(&opt->priv->conflicted_submodules);
2021-01-24 06:01:10 +00:00
                trace2_region_leave("merge", "allocate/init", opt->repo);
                return;
        }
2020-12-13 08:04:10 +00:00
        opt->priv = xcalloc(1, sizeof(*opt->priv));

2021-01-07 21:35:50 +00:00
        /* Initialization of various renames fields */
        renames = &opt->priv->renames;
2021-07-31 17:27:38 +00:00
        mem_pool_init(&opt->priv->pool, 0);
        pool = &opt->priv->pool;
2021-01-07 21:35:50 +00:00
        for (i = MERGE_SIDE1; i <= MERGE_SIDE2; i++) {
2021-03-13 22:22:02 +00:00
                strintmap_init_with_options(&renames->dirs_removed[i],
2021-07-30 11:47:40 +00:00
                                            NOT_RELEVANT, pool, 0);
2021-01-07 21:35:50 +00:00
                strmap_init_with_options(&renames->dir_rename_count[i],
                                         NULL, 1);
                strmap_init_with_options(&renames->dir_renames[i],
                                         NULL, 0);
2021-05-20 06:09:35 +00:00
                /*
                 * relevant_sources uses -1 for the default, because we need
                 * to be able to distinguish not-in-strintmap from valid
                 * relevant_source values from enum file_rename_relevance.
                 * In particular, possibly_cache_new_pair() expects a negative
                 * value for not-found entries.
                 */
2021-03-13 22:22:02 +00:00
                strintmap_init_with_options(&renames->relevant_sources[i],
2021-05-20 06:09:35 +00:00
                                            -1 /* explicitly invalid */,
2021-07-30 11:47:40 +00:00
                                            pool, 0);
2021-05-20 06:09:34 +00:00
                strmap_init_with_options(&renames->cached_pairs[i],
                                         NULL, 1);
                strset_init_with_options(&renames->cached_irrelevant[i],
                                         NULL, 1);
                strset_init_with_options(&renames->cached_target_names[i],
                                         NULL, 0);
2021-01-07 21:35:50 +00:00
        }
merge-ort: add data structures for allowable trivial directory resolves
As noted a few commits ago, we can resolve individual files early if all
three sides of the merge have a file at the path and two of the three
sides match. We would really like to do the same thing with
directories, because being able to do a trivial directory resolve means
we don't have to recurse into the directory, potentially saving us a
huge amount of time in both collect_merge_info() and process_entries().
Unfortunately, resolving directories early would mean missing any
renames whose source or destination is underneath that directory.
If we somehow knew there weren't any renames under the directory in
question, then we could resolve it early. Sadly, it is impossible to
determine whether there are renames under the directory in question
without recursing into it, and this has traditionally kept us from ever
implementing such an optimization.
In commit f89b4f2bee ("merge-ort: skip rename detection entirely if
possible", 2021-03-11), we added an additional reason that rename
detection could be skipped entirely -- namely, if no *relevant* sources
were present. Without completing collect_merge_info_callback(), we do
not yet know if there are no relevant sources. However, we do know that
if the current directory on one side matches the merge base, then every
source file within that directory will not be RELEVANT_CONTENT, and a
few simple checks can often let us rule out RELEVANT_LOCATION as well.
This suggests we can just defer recursing into such directories until
the end of collect_merge_info.
Since the deferred directories are known to not add any relevant sources
due to the above properties, if there are no relevant sources after
we've traversed all paths other than the deferred ones, then we know
there are not any relevant sources. Under those conditions, rename
detection is unnecessary, and that means we can resolve the deferred
directories without recursing into them.
Note that the logic for skipping rename detection was also modified
further in commit 76e253793c ("merge-ort, diffcore-rename: employ cached
renames when possible", 2021-01-30); in particular rename detection can
be skipped if we already have cached renames for each relevant source.
We can take advantage of this information as well with our deferral of
recursing into directories where one side matches the merge base.
Add some data structures that we will use to do these deferrals, with
some lengthy comments explaining their purpose.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-16 05:22:33 +00:00
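For orientation, the per-side fields initialized in the loop below amount to
a small structure along these lines (a sketch inferred from the initializers;
the struct name and the field comments are approximations, not the actual
declaration):

/* Sketch of the per-side deferral data (name and comments approximate). */
struct deferred_dir_data {
        /* directories whose traversal we postponed until the end */
        struct strintmap possible_trivial_merges;
        /* directories consulted when ruling out RELEVANT_LOCATION (approximate) */
        struct strset target_dirs;
        /* starts at 1 == "maybe"; cleared if deferral turns out to be unsafe */
        unsigned trivial_merges_okay;
};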
        for (i = MERGE_SIDE1; i <= MERGE_SIDE2; i++) {
                strintmap_init_with_options(&renames->deferred[i].possible_trivial_merges,
2021-07-30 11:47:40 +00:00
                                            0, pool, 0);
2021-07-16 05:22:33 +00:00
                strset_init_with_options(&renames->deferred[i].target_dirs,
2021-07-30 11:47:40 +00:00
                                         pool, 1);
2021-07-16 05:22:33 +00:00
                renames->deferred[i].trivial_merges_okay = 1; /* 1 == maybe */
        }
2021-01-07 21:35:50 +00:00

2020-12-13 08:04:10 +00:00
        /*
         * Although we initialize opt->priv->paths with strdup_strings=0,
         * that's just to avoid making yet another copy of an allocated
         * string. Putting the entry into paths means we are taking
2021-07-31 17:27:38 +00:00
         * ownership, so we will later free it.
2020-12-13 08:04:10 +00:00
         *
         * In contrast, conflicted just has a subset of keys from paths, so
         * we don't want to free those (it'd be a duplicate free).
         */
2021-07-30 11:47:40 +00:00
        strmap_init_with_options(&opt->priv->paths, pool, 0);
        strmap_init_with_options(&opt->priv->conflicted, pool, 0);

merge-ort: add modify/delete handling and delayed output processing
The focus here is on adding a path_msg() which will queue up
warning/conflict/notice messages about the merge for later processing,
storing these in a pathname -> strbuf map. It might seem like a big
change, but it really just is:
* declaration of necessary map with some comments
* initialization and recording of data
* a bunch of code to iterate over the map at print/free time
* at least one caller in order to avoid an error about having an
unused function (which we provide in the form of implementing
modify/delete conflict handling).
At this stage, it is probably not clear why I am opting for delayed
output processing. There are multiple reasons:
1. Merges are supposed to abort if they would overwrite dirty changes
in the working tree. We cannot correctly determine whether changes
would be overwritten until both rename detection has occurred and
full processing of entries with the renames has finalized.
Warning/conflict/notice messages come up at intermediate codepaths
along the way, so unless we want spurious conflict/warning messages
being printed when the merge will be aborted anyway, we need to
save these messages and only print them when relevant.
2. There can be multiple messages for a single path, and we want all
messages for a given path to appear together instead of having them
grouped by conflict/warning type. This was a problem already with
merge-recursive.c but became even more important due to the
splitting apart of conflict types as discussed in the commit
message for 1f3c9ba707 ("t6425: be more flexible with rename/delete
conflict messages", 2020-08-10)
3. Some callers might want to avoid showing the output in certain
cases, such as if the end result is a clean merge. Rebases have
typically done this.
4. Some callers might not want the output to go to stdout or even
stderr, but might want to do something else with it entirely.
For example, a --remerge-diff option to `git show` or `git log
-p` that remerges on the fly and diffs merge commits against the
remerged version would benefit from stdout/stderr not being
written to in the standard form.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-03 15:59:46 +00:00
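As a rough illustration of the "pathname -> strbuf map" idea described above
(a simplified, hypothetical helper, not the actual path_msg() implementation):

/*
 * Hypothetical sketch of delayed output: append a message to a strbuf
 * stored per path, so everything for one path can be printed together
 * (or discarded) later.
 */
static void queue_path_message(struct strmap *output, const char *path,
                               const char *msg)
{
        struct strbuf *sb = strmap_get(output, path);

        if (!sb) {
                sb = xcalloc(1, sizeof(*sb));
                strbuf_init(sb, 0);
                strmap_put(output, path, sb);
        }
        strbuf_addstr(sb, msg);
        strbuf_addch(sb, '\n');
}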

        /*
2022-06-18 00:20:54 +00:00
         * keys & string_lists in conflicts will sometimes need to outlive
         * "paths", so it will have a copy of relevant keys. It's probably
         * a small subset of the overall paths that have special output.
2020-12-03 15:59:46 +00:00
         */
2022-06-18 00:20:54 +00:00
        strmap_init(&opt->priv->conflicts);
2021-01-24 06:01:12 +00:00

        trace2_region_leave("merge", "allocate/init", opt->repo);
2020-12-13 08:04:09 +00:00
}

merge-ort: add code to check for whether cached renames can be reused
We need to know when renames detected in a previous merge operation can
be reused in a later merge operation. Consider the following setup
(from the git-rebase manpage):
          A---B---C topic
         /
    D---E---F---G master
After rebasing, this will appear as:
          A'--B'--C' topic
         /
    D---E---F---G master
Further, let's say that 'oldfile' was renamed to 'newfile' between E
and G. The rebase or cherry-pick of A onto G will involve a three-way
merge between E (as the merge base) and G and A. After detecting the
rename between E:oldfile and G:newfile, there will be a three-way
content merge of the following:
E:oldfile
G:newfile
A:oldfile
and produce a new result:
A':newfile
Now, when we want to pick B onto A', we will need to do a three-way
merge between A (as the merge-base) and A' and B. This will involve
a three-way content merge of
A:oldfile
A':newfile
B:oldfile
but only if we can detect that A:oldfile is similar enough to A':newfile
to be used together in a three-way content merge, i.e. only if we can
detect that A:oldfile and A':newfile are a rename. But we already know
that A:oldfile and A':newfile are similar enough to be used in a
three-way content merge, because that is precisely where A':newfile came
from in the previous merge.
Note that A & A' both appear in both merges. That gives us the
condition under which we can reuse renames.
There are a couple important points about this optimization:
- If the rebase or cherry-pick halts for user conflicts, these caches
are NOT saved anywhere. Thus, resuming a halted rebase or
cherry-pick will result in no reused renames for the next commit.
This is intentional, as user resolution can change files
significantly and in ways that violate the similarity assumptions
here.
- Technically, in a *very* narrow case this might give slightly
different results for rename detection. Using the example above,
if:
* E:oldfile had 20 lines
* G:newfile added 10 new lines at the beginning of the file
* A:oldfile deleted all but the first three lines of the file
then
=> A':newfile would have 13 lines, 3 of which match those
in A:oldfile.
Consider the two cases:
* Without this optimization:
- the next step of the rebase operation (moving B to B')
would not detect the rename between A:oldfile and A':newfile
- we'd thus get a modify/delete conflict with the rebase
operation halting for the user to resolve, and have both
A':newfile and B:oldfile sitting in the working tree.
* With this optimization:
- the rename between A:oldfile and A':newfile would be detected
via the cache of renames
- a three-way merge between A:oldfile, A':newfile, and B:oldfile
would commence and be written to A':newfile
Now, is the difference in behavior a bug...or a bugfix? I can't
tell. Given that A:oldfile and A':newfile are not very similar,
when we three-way merge with B:oldfile it seems likely we'll hit a
conflict for the user to resolve. And it shouldn't be too hard for
users to see why we did that three-way merge; oldfile and newfile
*were* renames somewhere in the sequence. So, most of these corner
cases will still behave similarly -- namely, a conflict given to the
user to resolve. Also, consider the interesting case when commit B
is a clean revert of commit A. Without this optimization, a rebase
could not both apply a weird patch like A and then immediately
revert it; users would be forced to resolve merge conflicts. With
this optimization, it would successfully apply the clean revert.
So, there is certainly at least one case that behaves better. Even
if it's considered a "difference in behavior", I think both behaviors
are reasonable, and the time savings provided by this optimization
justify using the slightly altered rename heuristics.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-20 06:09:36 +00:00
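A hypothetical driver loop, modeled on what a sequential cherry-pick style
caller could do, shows where the reuse comes from: the same merge_result (and
thus result->priv with its cached renames) is passed to each successive merge.
Names other than the merge-ort entry points from merge-ort.h are illustrative:

static void replay_commits(struct merge_options *opt, struct tree *onto,
                           struct tree **bases, struct tree **picks, int nr)
{
        struct merge_result result = { 0 };
        int i;

        for (i = 0; i < nr; i++) {
                /* result.priv from the previous pick is reused here */
                merge_incore_nonrecursive(opt, bases[i], onto, picks[i], &result);
                if (result.clean <= 0)
                        break; /* conflicts or error; caches are intentionally dropped */
                onto = result.tree; /* next pick merges onto what we just built */
        }
        merge_finalize(opt, &result);
}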
static void merge_check_renames_reusable(struct merge_options *opt,
                                         struct merge_result *result,
                                         struct tree *merge_base,
                                         struct tree *side1,
                                         struct tree *side2)
{
        struct rename_info *renames;
        struct tree **merge_trees;
        struct merge_options_internal *opti = result->priv;

        if (!opti)
                return;

        renames = &opti->renames;
        merge_trees = renames->merge_trees;
2021-05-20 06:09:40 +00:00

        /*
         * Handle case where previous merge operation did not want cache to
         * take effect, e.g. because rename/rename(1to1) makes it invalid.
         */
        if (!merge_trees[0]) {
                assert(!merge_trees[0] && !merge_trees[1] && !merge_trees[2]);
                renames->cached_pairs_valid_side = 0; /* neither side valid */
                return;
        }

        /*
         * Handle other cases; note that merge_trees[0..2] will only
         * be NULL if opti is, or if all three were manually set to
         * NULL by e.g. rename/rename(1to1) handling.
         */
2021-05-20 06:09:36 +00:00
        assert(merge_trees[0] && merge_trees[1] && merge_trees[2]);

        /* Check if we meet a condition for re-using cached_pairs */
        if (oideq(&merge_base->object.oid, &merge_trees[2]->object.oid) &&
            oideq(&side1->object.oid, &result->tree->object.oid))
                renames->cached_pairs_valid_side = MERGE_SIDE1;
        else if (oideq(&merge_base->object.oid, &merge_trees[1]->object.oid) &&
                 oideq(&side2->object.oid, &result->tree->object.oid))
                renames->cached_pairs_valid_side = MERGE_SIDE2;
        else
                renames->cached_pairs_valid_side = 0; /* neither side valid */
}

2020-12-03 15:59:44 +00:00
/*** Function Grouping: merge_incore_*() and their internal variants ***/

2020-12-13 08:04:09 +00:00
/*
 * Originally from merge_trees_internal(); heavily adapted, though.
 */
static void merge_ort_nonrecursive_internal(struct merge_options *opt,
                                            struct tree *merge_base,
                                            struct tree *side1,
                                            struct tree *side2,
                                            struct merge_result *result)
{
        struct object_id working_tree_oid;

2021-03-20 00:03:48 +00:00
        if (opt->subtree_shift) {
                side2 = shift_tree_object(opt->repo, side1, side2,
                                          opt->subtree_shift);
                merge_base = shift_tree_object(opt->repo, side1, merge_base,
                                               opt->subtree_shift);
        }

merge-ort: restart merge with cached renames to reduce process entry cost
The merge algorithm mostly consists of the following three functions:
collect_merge_info()
detect_and_process_renames()
process_entries()
Prior to the trivial directory resolution optimization of the last half
dozen commits, process_entries() was consistently the slowest, followed
by collect_merge_info(), then detect_and_process_renames(). When the
trivial directory resolution applies, it often dramatically decreases
the amount of time spent in the two slower functions.
Looking at the performance results in the previous commit, the trivial
directory resolution optimization helps amazingly well when there are no
relevant renames. It also helps really well when reapplying a long
series of linear commits (such as in a rebase or cherry-pick), since the
relevant renames may well be cached from the first reapplied commit.
But when there are any relevant renames that are not cached (represented
by the just-one-mega testcase), then the optimization does not help at
all.
Often, I noticed that when the optimization does not apply, it is
because there are a handful of relevant sources -- maybe even only one.
It felt frustrating to need to recurse into potentially hundreds or even
thousands of directories just for a single rename, but it was needed for
correctness.
However, staring at this list of functions and noticing that
process_entries() is the most expensive and knowing I could avoid it if
I had cached renames suggested a simple idea: change
collect_merge_info()
detect_and_process_renames()
process_entries()
into
collect_merge_info()
detect_and_process_renames()
<cache all the renames, and restart>
collect_merge_info()
detect_and_process_renames()
process_entries()
This may seem odd and look like more work. However, note that although
we run collect_merge_info() twice, the second time we get to employ
trivial directory resolves, which makes it much faster, so the increased
time in collect_merge_info() is small. While we run
detect_and_process_renames() again, all renames are cached so it's
nearly a no-op (we don't call into diffcore_rename_extended() but we do
have a little bit of data structure checking and fixing up). And the
big payoff comes from the fact that process_entries() will be much
faster due to having far fewer entries to process.
This restarting only makes sense if we can save recursing into enough
directories to make it worth our while. Introduce a simple heuristic to
guide this. Note that this heuristic uses a "wanted_factor" that I have
virtually no actual real world data for, just some back-of-the-envelope
quasi-scientific calculations that I included in some comments and then
plucked a simple round number out of thin air. It could be that
tweaking this number to make it either higher or lower improves the
optimization. (There's slightly more here; when I first introduced this
optimization, I used a factor of 10, because I was completely confident
it was big enough to not cause slowdowns in special cases. I was
certain it was higher than needed. Several months later, I added the
rough calculations which make me think the optimal number is close to 2;
but instead of pushing to the limit, I just bumped it to 3 to reduce the
risk that there are special cases where this optimization can result in
slowing down the code a little. If the ratio of path counts is below 3,
we probably will only see minor performance improvements at best
anyway.)
Also, note that while the diffstat looks kind of long (nearly 100
lines), more than half of it is in two comments explaining how things
work.
For the testcases mentioned in commit 557ac0350d ("merge-ort: begin
performance work; instrument with trace2_region_* calls", 2020-10-28),
this change improves the performance as follows:
Before After
no-renames: 205.1 ms ± 3.8 ms 204.2 ms ± 3.0 ms
mega-renames: 1.564 s ± 0.010 s 1.076 s ± 0.015 s
just-one-mega: 479.5 ms ± 3.9 ms 364.1 ms ± 7.0 ms
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-16 05:22:37 +00:00
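The restart described above can be outlined roughly as follows (a hand-wavy
sketch: restart_is_worthwhile(), cache_detected_renames() and
reset_maps_but_keep_caches() are hypothetical stand-ins for the heuristic and
bookkeeping that live near the 'redo:' label below):

static void three_pass_outline(struct merge_options *opt,
                               struct tree *merge_base,
                               struct tree *side1, struct tree *side2,
                               struct object_id *working_tree_oid)
{
        int restarted = 0;

redo:
        if (collect_merge_info(opt, merge_base, side1, side2) != 0)
                return; /* error handling elided in this sketch */
        detect_and_process_renames(opt, merge_base, side1, side2);
        if (!restarted && restart_is_worthwhile(opt)) {
                cache_detected_renames(opt);     /* hypothetical helper */
                reset_maps_but_keep_caches(opt); /* hypothetical helper */
                restarted = 1;
                goto redo; /* 2nd pass trivially resolves unchanged directories */
        }
        process_entries(opt, working_tree_oid);
}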
redo:
merge-ort: begin performance work; instrument with trace2_region_* calls
Add some timing instrumentation for both merge-ort and diffcore-rename;
I used these to measure and optimize performance in both, and several
future patch series will build on these to reduce the timings of some
select testcases.
=== Setup ===
The primary testcase I used involved rebasing a random topic in the
linux kernel (consisting of 35 patches) against an older version. I
added two variants, one where I rename a toplevel directory, and another
where I only rebase one patch instead of the whole topic. The setup is
as follows:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ git branch hwmon-updates fd8bdb23b91876ac1e624337bb88dc1dcc21d67e
$ git branch hwmon-just-one fd8bdb23b91876ac1e624337bb88dc1dcc21d67e~34
$ git branch base 4703d9119972bf586d2cca76ec6438f819ffa30e
$ git switch -c 5.4-renames v5.4
$ git mv drivers pilots # Introduce over 26,000 renames
$ git commit -m "Rename drivers/ to pilots/"
$ git config merge.renameLimit 30000
$ git config merge.directoryRenames true
=== Testcases ===
Now with REBASE standing for either "git rebase [--merge]" (using
merge-recursive) or "test-tool fast-rebase" (using merge-ort), the
testcases are:
Testcase #1: no-renames
$ git checkout v5.4^0
$ REBASE --onto HEAD base hwmon-updates
Note: technically the name is misleading; there are some renames, but
very few. Rename detection only takes about half the overall time.
Testcase #2: mega-renames
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-updates
Testcase #3: just-one-mega
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-just-one
=== Timing results ===
Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
10 runs for the other two cases):
merge-recursive merge-ort
no-renames: 18.912 s ± 0.174 s 14.263 s ± 0.053 s
mega-renames: 5964.031 s ± 10.459 s 5504.231 s ± 5.150 s
just-one-mega: 149.583 s ± 0.751 s 158.534 s ± 0.498 s
A single re-run of each with some breakdowns:
--- no-renames ---
merge-recursive merge-ort
overall runtime: 19.302 s 14.257 s
inexact rename detection: 7.603 s 7.906 s
everything else: 11.699 s 6.351 s
--- mega-renames ---
merge-recursive merge-ort
overall runtime: 5950.195 s 5499.672 s
inexact rename detection: 5746.309 s 5487.120 s
everything else: 203.886 s 17.552 s
--- just-one-mega ---
merge-recursive merge-ort
overall runtime: 151.001 s 158.582 s
inexact rename detection: 143.448 s 157.835 s
everything else: 7.553 s 0.747 s
=== Timing observations ===
0) Maximum speedup
The "everything else" row represents the maximum speedup we could
achieve if we were to somehow infinitely parallelize inexact rename
detection, but leave everything else alone. The fact that this is so
much smaller than the real runtime (even in the case with virtually no
renames) makes it clear just how overwhelmingly large the time spent on
rename detection can be.
1) no-renames
1a) merge-ort is faster than merge-recursive, which is nice. However,
this still should not be considered good enough. Although the "merge"
backend to rebase (merge-recursive) is sometimes faster than the "apply"
backend, this is one of those cases where it is not. In fact, even
merge-ort is slower. The "apply" backend can complete this testcase in
6.940 s ± 0.485 s
which is about 2x faster than merge-ort and 3x faster than
merge-recursive. One goal of the merge-ort performance work will be to
make it faster than git-am on this (and similar) testcases.
2) mega-renames
2a) Obviously rename detection is a huge cost; it's where most of the time
is spent. We need to cut that down. If we could somehow infinitely
parallelize it and drive its time to 0, the merge-recursive time would
drop to about 204s, and the merge-ort time would drop to about 17s. I
think this particular stat shows I've subtly baked a couple performance
improvements into merge-ort and into fast-rebase already.
3) just-one-mega
3a) not much to say here, it just gives some flavor for how rebasing
only one patch compares to rebasing 35.
=== Goals ===
This patch is obviously just the beginning. Here are some of my goals
that this measurement will help us achieve:
* Drive the cost of rename detection down considerably for merges
* After the above has been achieved, see if there are other slowness
factors (which would have previously been overshadowed by rename
detection costs) which we can then focus on and also optimize.
* Ensure our rebase testcase that requires little rename detection
is noticeably faster with merge-ort than with apply-based rebase.
Signed-off-by: Elijah Newren <newren@gmail.com>
Acked-by: Taylor Blau <ttaylorr@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-24 06:01:12 +00:00
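The instrumentation this commit message describes pairs a trace2_region_enter() call with a matching trace2_region_leave() around each phase of the merge, as the code below does for collect_merge_info(); a minimal sketch of the pattern (do_some_phase() is a hypothetical stand-in) is:

trace2_region_enter("merge", "some_phase", opt->repo);
do_some_phase(opt);	/* hypothetical phase being timed */
trace2_region_leave("merge", "some_phase", opt->repo);

The per-region timings emitted this way are the kind of data behind the phase breakdowns shown above.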
|
|
|
trace2_region_enter("merge", "collect_merge_info", opt->repo);
|
2020-12-13 08:04:12 +00:00
|
|
|
if (collect_merge_info(opt, merge_base, side1, side2) != 0) {
|
|
|
|
/*
|
|
|
|
* TRANSLATORS: The %s arguments are: 1) tree hash of a merge
|
|
|
|
* base, and 2-3) the trees for the two trees we're merging.
|
|
|
|
*/
|
|
|
|
err(opt, _("collecting merge info failed for trees %s, %s, %s"),
|
|
|
|
oid_to_hex(&merge_base->object.oid),
|
|
|
|
oid_to_hex(&side1->object.oid),
|
|
|
|
oid_to_hex(&side2->object.oid));
|
|
|
|
result->clean = -1;
|
|
|
|
return;
|
|
|
|
}
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "collect_merge_info", opt->repo);
|
2020-12-13 08:04:12 +00:00
|
|
|
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "renames", opt->repo);
|
2020-12-13 08:04:09 +00:00
|
|
|
result->clean = detect_and_process_renames(opt, merge_base,
|
|
|
|
side1, side2);
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "renames", opt->repo);
|
2021-07-16 05:22:37 +00:00
|
|
|
if (opt->priv->renames.redo_after_renames == 2) {
|
|
|
|
trace2_region_enter("merge", "reset_maps", opt->repo);
|
|
|
|
clear_or_reinit_internal_opts(opt->priv, 1);
|
|
|
|
trace2_region_leave("merge", "reset_maps", opt->repo);
|
|
|
|
goto redo;
|
|
|
|
}
|
2021-01-24 06:01:12 +00:00
|
|
|
|
|
|
|
trace2_region_enter("merge", "process_entries", opt->repo);
|
2022-09-28 07:29:21 +00:00
|
|
|
if (process_entries(opt, &working_tree_oid) < 0)
|
|
|
|
result->clean = -1;
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "process_entries", opt->repo);
|
2020-12-13 08:04:09 +00:00
|
|
|
|
|
|
|
/* Set return values */
|
2022-06-18 00:20:55 +00:00
|
|
|
result->path_messages = &opt->priv->conflicts;
|
2022-06-18 00:20:54 +00:00
|
|
|
|
2022-09-28 07:29:21 +00:00
|
|
|
if (result->clean >= 0) {
|
|
|
|
result->tree = parse_tree_indirect(&working_tree_oid);
|
|
|
|
/* existence of conflicted entries implies unclean */
|
|
|
|
result->clean &= strmap_empty(&opt->priv->conflicted);
|
|
|
|
}
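Reading the code above, result->clean ends up tri-state: negative when the merge was aborted, zero when conflicted entries exist, and positive for a clean merge. A hedged caller-side sketch (report_conflicts() and use_merged_tree() are hypothetical helpers, not git API):

if (result.clean < 0)
	die(_("merge failed"));		/* aborted, e.g. collect_merge_info() error */
else if (!result.clean)
	report_conflicts(&result);	/* hypothetical: surface the conflicted paths */
else
	use_merged_tree(result.tree);	/* hypothetical: consume the clean result tree */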
|
2020-12-13 08:04:09 +00:00
|
|
|
if (!opt->priv->call_depth) {
|
|
|
|
result->priv = opt->priv;
|
merge-ort: avoid accidental API mis-use
Previously, callers of the merge-ort API could have passed an
uninitialized value for struct merge_result *result. However, we want
to check result to see if it has cached renames from a previous merge
that we can reuse; such values would be found behind result->priv.
However, if result->priv is uninitialized, attempting to access behind
it will give a segfault. So, we need result->priv to be NULL (which
will be the case if the caller does a memset(&result, 0)), or be written
by a previous call to the merge-ort machinery. Documenting this
requirement may help, but despite being the person who introduced this
requirement, I still missed it once and it did not fail in a very clear
way and led to a long debugging session.
Add a _properly_initialized field to merge_result; that value will be
0 if the caller zero'ed the merge_result, it will be set to a very
specific value by a previous run of the merge-ort machinery, and if it's
uninitialized it will most likely either be 0 or some value that does
not match the specific one we'd expect, allowing us to throw a much more
meaningful error.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-20 06:09:37 +00:00
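A minimal caller-side sketch of the requirement described above (illustrative only, with placeholder tree variables; not taken from git's builtins) is to zero the merge_result before the first merge so that both priv and _properly_initialized start out as 0:

struct merge_result result = { 0 };	/* or: memset(&result, 0, sizeof(result)); */

merge_incore_nonrecursive(opt, merge_base_tree, side1_tree, side2_tree, &result);
/*
 * Later merges in the same cherry-pick/rebase sequence may pass the same
 * 'result' back in, letting merge-ort find cached renames behind result.priv.
 */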
|
|
|
result->_properly_initialized = RESULT_INITIALIZED;
|
2020-12-13 08:04:09 +00:00
|
|
|
opt->priv = NULL;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-12-16 22:28:02 +00:00
|
|
|
/*
|
|
|
|
* Originally from merge_recursive_internal(); somewhat adapted, though.
|
|
|
|
*/
|
|
|
|
static void merge_ort_internal(struct merge_options *opt,
|
|
|
|
struct commit_list *merge_bases,
|
|
|
|
struct commit *h1,
|
|
|
|
struct commit *h2,
|
|
|
|
struct merge_result *result)
|
|
|
|
{
|
merge-ort: fix memory leak in merge_ort_internal()
The documentation for merge_incore_recursive(), modelled after
merge_recursive(), notes that
merge_bases will be consumed (emptied) so make a copy if you need it
However, in merge_ort_internal() (which merge_incore_recursive() calls),
it runs
merged_merge_bases = pop_commit(&merge_bases);
...
for (iter = merge_bases; iter; iter = iter->next) {
...
}
In other words, it only consumes the *first* entry of merge_bases, and
the rest it iterates through. If it iterated through all of them, the
caller could be responsible for free'ing the memory. If it consumed all
of them, the current documentation would be correct and the callers
would need to do nothing. The current middle ground makes it impossible
for callers to avoid memory leaks, since any attempt to use the
merge_bases it passes in would result in a use-after-free.
It turns out this part of the code was copied from merge-recursive.c,
which has had the same bug for 15.5 years. However, since we are trying
to keep merge-recursive.c stable as we sunset it, let's just fix the
leak in merge_ort_internal() by having it actually consume all the
elements of the merge_bases commit_list.
Testing this commit against t6404 (the first testcase specifically
about recursive merges) under valgrind shows that this patch fixes
the following leak:
32 (16 direct, 16 indirect) bytes in 1 blocks are definitely lost \
in loss record 49 of 126
at 0x484086F: malloc (vg_replace_malloc.c:380)
by 0x69FFEB: do_xmalloc (wrapper.c:41)
by 0x6A0073: xmalloc (wrapper.c:62)
by 0x52A72D: commit_list_insert (commit.c:556)
by 0x47EC86: try_merge_strategy (merge.c:751)
by 0x48143B: cmd_merge (merge.c:1679)
by 0x40686E: run_builtin (git.c:464)
by 0x406C51: handle_builtin (git.c:716)
by 0x406E96: run_argv (git.c:783)
by 0x40730A: cmd_main (git.c:914)
by 0x4E7DFA: main (common-main.c:56)
Reported-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-01-20 07:47:14 +00:00
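To make the fix concrete (a hedged sketch; use_base() is a placeholder for the per-base work): pop_commit() detaches the first element and frees its commit_list node, so looping with it until it returns NULL consumes the whole list and leaves nothing for the caller to free:

struct commit *c;
while ((c = pop_commit(&merge_bases)))
	use_base(c);	/* placeholder for the per-base work */
/* merge_bases is now NULL; no commit_list nodes remain to leak */

This is exactly the shape of the for loop used further down in this function.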
|
|
|
struct commit *next;
|
2020-12-16 22:28:02 +00:00
|
|
|
struct commit *merged_merge_bases;
|
|
|
|
const char *ancestor_name;
|
|
|
|
struct strbuf merge_base_abbrev = STRBUF_INIT;
|
|
|
|
|
|
|
|
if (!merge_bases) {
|
2023-03-28 13:58:47 +00:00
|
|
|
merge_bases = repo_get_merge_bases(the_repository, h1, h2);
|
2020-12-16 22:28:02 +00:00
|
|
|
/* See merge-ort.h:merge_incore_recursive() declaration NOTE */
|
|
|
|
merge_bases = reverse_commit_list(merge_bases);
|
|
|
|
}
|
|
|
|
|
|
|
|
merged_merge_bases = pop_commit(&merge_bases);
|
2022-05-02 16:50:37 +00:00
|
|
|
if (!merged_merge_bases) {
|
2020-12-16 22:28:02 +00:00
|
|
|
/* if there is no common ancestor, use an empty tree */
|
|
|
|
struct tree *tree;
|
|
|
|
|
|
|
|
tree = lookup_tree(opt->repo, opt->repo->hash_algo->empty_tree);
|
|
|
|
merged_merge_bases = make_virtual_commit(opt->repo, tree,
|
|
|
|
"ancestor");
|
|
|
|
ancestor_name = "empty tree";
|
|
|
|
} else if (merge_bases) {
|
|
|
|
ancestor_name = "merged common ancestors";
|
|
|
|
} else {
|
|
|
|
strbuf_add_unique_abbrev(&merge_base_abbrev,
|
|
|
|
&merged_merge_bases->object.oid,
|
|
|
|
DEFAULT_ABBREV);
|
|
|
|
ancestor_name = merge_base_abbrev.buf;
|
|
|
|
}
|
|
|
|
|
2022-01-20 07:47:14 +00:00
|
|
|
for (next = pop_commit(&merge_bases); next;
|
|
|
|
next = pop_commit(&merge_bases)) {
|
2020-12-16 22:28:02 +00:00
|
|
|
const char *saved_b1, *saved_b2;
|
|
|
|
struct commit *prev = merged_merge_bases;
|
|
|
|
|
|
|
|
opt->priv->call_depth++;
|
|
|
|
/*
|
|
|
|
* When the merge fails, the result contains files
|
|
|
|
* with conflict markers. The cleanness flag is
|
|
|
|
* ignored (unless indicating an error), it was never
|
|
|
|
* actually used, as result of merge_trees has always
|
|
|
|
* overwritten it: the committed "conflicts" were
|
|
|
|
* already resolved.
|
|
|
|
*/
|
|
|
|
saved_b1 = opt->branch1;
|
|
|
|
saved_b2 = opt->branch2;
|
|
|
|
opt->branch1 = "Temporary merge branch 1";
|
|
|
|
opt->branch2 = "Temporary merge branch 2";
|
2022-01-20 07:47:14 +00:00
|
|
|
merge_ort_internal(opt, NULL, prev, next, result);
|
2020-12-16 22:28:02 +00:00
|
|
|
if (result->clean < 0)
|
|
|
|
return;
|
|
|
|
opt->branch1 = saved_b1;
|
|
|
|
opt->branch2 = saved_b2;
|
|
|
|
opt->priv->call_depth--;
|
|
|
|
|
|
|
|
merged_merge_bases = make_virtual_commit(opt->repo,
|
|
|
|
result->tree,
|
|
|
|
"merged tree");
|
|
|
|
commit_list_insert(prev, &merged_merge_bases->parents);
|
2022-01-20 07:47:14 +00:00
|
|
|
commit_list_insert(next, &merged_merge_bases->parents->next);
|
2020-12-16 22:28:02 +00:00
|
|
|
|
|
|
|
clear_or_reinit_internal_opts(opt->priv, 1);
|
|
|
|
}
|
|
|
|
|
|
|
|
opt->ancestor = ancestor_name;
|
|
|
|
merge_ort_nonrecursive_internal(opt,
|
|
|
|
repo_get_commit_tree(opt->repo,
|
|
|
|
merged_merge_bases),
|
|
|
|
repo_get_commit_tree(opt->repo, h1),
|
|
|
|
repo_get_commit_tree(opt->repo, h2),
|
|
|
|
result);
|
|
|
|
strbuf_release(&merge_base_abbrev);
|
|
|
|
opt->ancestor = NULL; /* avoid accidental re-use of opt->ancestor */
|
|
|
|
}
|
|
|
|
|
2020-10-27 02:08:07 +00:00
|
|
|
void merge_incore_nonrecursive(struct merge_options *opt,
|
|
|
|
struct tree *merge_base,
|
|
|
|
struct tree *side1,
|
|
|
|
struct tree *side2,
|
|
|
|
struct merge_result *result)
|
|
|
|
{
|
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "incore_nonrecursive", opt->repo);
|
|
|
|
|
|
|
|
trace2_region_enter("merge", "merge_start", opt->repo);
|
2020-12-13 08:04:09 +00:00
|
|
|
assert(opt->ancestor != NULL);
|
merge-ort: add code to check for whether cached renames can be reused
We need to know when renames detected in a previous merge operation can
be reused in a later merge operation. Consider the following setup
(from the git-rebase manpage):
A---B---C topic
/
D---E---F---G master
After rebasing, this will appear as:
A'--B'--C' topic
/
D---E---F---G master
Further, let's say that 'oldfile' was renamed to 'newfile' between E
and G. The rebase or cherry-pick of A onto G will involve a three-way
merge between E (as the merge base) and G and A. After detecting the
rename between E:oldfile and G:newfile, there will be a three-way
content merge of the following:
E:oldfile
G:newfile
A:oldfile
and produce a new result:
A':newfile
Now, when we want to pick B onto A', we will need to do a three-way
merge between A (as the merge-base) and A' and B. This will involve
a three-way content merge of
A:oldfile
A':newfile
B:oldfile
but only if we can detect that A:oldfile is similar enough to A':newfile
to be used together in a three-way content merge, i.e. only if we can
detect that A:oldfile and A':newfile are a rename. But we already know
that A:oldfile and A':newfile are similar enough to be used in a
three-way content merge, because that is precisely where A':newfile came
from in the previous merge.
Note that A & A' both appear in both merges. That gives us the
condition under which we can reuse renames.
There are a couple important points about this optimization:
- If the rebase or cherry-pick halts for user conflicts, these caches
are NOT saved anywhere. Thus, resuming a halted rebase or
cherry-pick will result in no reused renames for the next commit.
This is intentional, as user resolution can change files
significantly and in ways that violate the similarity assumptions
here.
- Technically, in a *very* narrow case this might give slightly
different results for rename detection. Using the example above,
if:
* E:oldfile had 20 lines
* G:newfile added 10 new lines at the beginning of the file
* A:oldfile deleted all but the first three lines of the file
then
=> A':newfile would have 13 lines, 3 of which match those
in A:oldfile.
Consider the two cases:
* Without this optimization:
- the next step of the rebase operation (moving B to B')
would not detect the rename between A:oldfile and A':newfile
- we'd thus get a modify/delete conflict with the rebase
operation halting for the user to resolve, and have both
A':newfile and B:oldfile sitting in the working tree.
* With this optimization:
- the rename between A:oldfile and A':newfile would be detected
via the cache of renames
- a three-way merge between A:oldfile, A':newfile, and B:oldfile
would commence and be written to A':newfile
Now, is the difference in behavior a bug...or a bugfix? I can't
tell. Given that A:oldfile and A':newfile are not very similar,
when we three-way merge with B:oldfile it seems likely we'll hit a
conflict for the user to resolve. And it shouldn't be too hard for
users to see why we did that three-way merge; oldfile and newfile
*were* renames somewhere in the sequence. So, most of these corner
cases will still behave similarly -- namely, a conflict given to the
user to resolve. Also, consider the interesting case when commit B
is a clean revert of commit A. Without this optimization, a rebase
could not both apply a weird patch like A and then immediately
revert it; users would be forced to resolve merge conflicts. With
this optimization, it would successfully apply the clean revert.
So, there is certainly at least one case that behaves better. Even
if it's considered a "difference in behavior", I think both behaviors
are reasonable, and the time savings provided by this optimization
justify using the slightly altered rename heuristics.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-20 06:09:36 +00:00
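In terms of the merge_trees[] recorded further down (index 0 holds the previous merge base, 1 and 2 the previous side1 and side2), the reuse condition the message derives, namely that A and A' appear in both merges, could be sketched as follows. This is a simplification for illustration, not the actual merge_check_renames_reusable() logic:

/*
 * Hedged sketch: renames cached from the previous merge are reusable when
 * the new merge base is the previous side2 (commit A in the example above)
 * and the new side1 is the tree the previous merge produced (A').
 */
static int cached_renames_reusable(struct tree **prev_merge_trees,
                                   struct tree *prev_result_tree,
                                   struct tree *merge_base,
                                   struct tree *side1)
{
	return prev_merge_trees[2] &&
	       merge_base == prev_merge_trees[2] &&
	       side1 == prev_result_tree;
}

Pointer comparison suffices in a sketch like this because tree objects are interned by lookup_tree(), so the same object id yields the same struct tree.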
|
|
|
merge_check_renames_reusable(opt, result, merge_base, side1, side2);
|
2020-12-13 08:04:09 +00:00
|
|
|
merge_start(opt, result);
|
2021-05-20 06:09:36 +00:00
|
|
|
/*
|
|
|
|
* Record the trees used in this merge, so if there's a next merge in
|
|
|
|
* a cherry-pick or rebase sequence it might be able to take advantage
|
|
|
|
* of the cached_pairs in that next merge.
|
|
|
|
*/
|
|
|
|
opt->priv->renames.merge_trees[0] = merge_base;
|
|
|
|
opt->priv->renames.merge_trees[1] = side1;
|
|
|
|
opt->priv->renames.merge_trees[2] = side2;
|
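The three trees recorded just above are what makes the later check possible: when the next merge in the sequence starts, it can compare its own input trees against them. The real logic lives in merge_check_renames_reusable() (called a few hunks above, right before merge_start()); what follows is only a minimal sketch of the "A and A' appear in both merges" condition from the commit message, under the assumption that the previous merge's recorded trees and result tree are passed in explicitly. It is not the actual function body.
/*
 * Sketch only, NOT the body of merge_check_renames_reusable(): cached
 * renames from the previous merge are candidates for reuse when the new
 * merge base is one of the previous merge's side trees and the new side1
 * is the previous merge's result tree ("A and A' appear in both merges").
 */
static int renames_reusable_sketch(struct tree *prev_trees[3],
				   struct tree *prev_result_tree,
				   struct tree *merge_base,
				   struct tree *side1)
{
	if (!prev_trees[0] || !prev_trees[1] || !prev_trees[2])
		return 0;	/* previous merge recorded no trees */
	if (!oideq(&side1->object.oid, &prev_result_tree->object.oid))
		return 0;	/* new side1 is not the previous result (A') */
	return oideq(&merge_base->object.oid, &prev_trees[1]->object.oid) ||
	       oideq(&merge_base->object.oid, &prev_trees[2]->object.oid);
}
In the rebase example, prev_trees[] would hold {E, G, A} and prev_result_tree would be A'; the next pick (merge base A, side1 A', side2 B) satisfies the check, so the oldfile -> newfile rename can be taken from the cache instead of being re-detected.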
merge-ort: begin performance work; instrument with trace2_region_* calls
Add some timing instrumentation for both merge-ort and diffcore-rename;
I used these to measure and optimize performance in both, and several
future patch series will build on these to reduce the timings of some
select testcases.
=== Setup ===
The primary testcase I used involved rebasing a random topic in the
linux kernel (consisting of 35 patches) against an older version. I
added two variants, one where I rename a toplevel directory, and another
where I only rebase one patch instead of the whole topic. The setup is
as follows:
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ git branch hwmon-updates fd8bdb23b91876ac1e624337bb88dc1dcc21d67e
$ git branch hwmon-just-one fd8bdb23b91876ac1e624337bb88dc1dcc21d67e~34
$ git branch base 4703d9119972bf586d2cca76ec6438f819ffa30e
$ git switch -c 5.4-renames v5.4
$ git mv drivers pilots # Introduce over 26,000 renames
$ git commit -m "Rename drivers/ to pilots/"
$ git config merge.renameLimit 30000
$ git config merge.directoryRenames true
=== Testcases ===
Now with REBASE standing for either "git rebase [--merge]" (using
merge-recursive) or "test-tool fast-rebase" (using merge-ort), the
testcases are:
Testcase #1: no-renames
$ git checkout v5.4^0
$ REBASE --onto HEAD base hwmon-updates
Note: technically the name is misleading; there are some renames, but
very few. Rename detection only takes about half the overall time.
Testcase #2: mega-renames
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-updates
Testcase #3: just-one-mega
$ git checkout 5.4-renames^0
$ REBASE --onto HEAD base hwmon-just-one
=== Timing results ===
Overall timings, using hyperfine (1 warmup run, 3 runs for mega-renames,
10 runs for the other two cases):
merge-recursive merge-ort
no-renames: 18.912 s ± 0.174 s 14.263 s ± 0.053 s
mega-renames: 5964.031 s ± 10.459 s 5504.231 s ± 5.150 s
just-one-mega: 149.583 s ± 0.751 s 158.534 s ± 0.498 s
A single re-run of each with some breakdowns:
--- no-renames ---
merge-recursive merge-ort
overall runtime: 19.302 s 14.257 s
inexact rename detection: 7.603 s 7.906 s
everything else: 11.699 s 6.351 s
--- mega-renames ---
merge-recursive merge-ort
overall runtime: 5950.195 s 5499.672 s
inexact rename detection: 5746.309 s 5487.120 s
everything else: 203.886 s 17.552 s
--- just-one-mega ---
merge-recursive merge-ort
overall runtime: 151.001 s 158.582 s
inexact rename detection: 143.448 s 157.835 s
everything else: 7.553 s 0.747 s
=== Timing observations ===
0) Maximum speedup
The "everything else" row represents the maximum speedup we could
achieve if we were to somehow infinitely parallelize inexact rename
detection, but leave everything else alone. The fact that this is so
much smaller than the real runtime (even in the case with virtually no
renames) makes it clear just how overwhelmingly large the time spent on
rename detection can be.
1) no-renames
1a) merge-ort is faster than merge-recursive, which is nice. However,
this still should not be considered good enough. Although the "merge"
backend to rebase (merge-recursive) is sometimes faster than the "apply"
backend, this is one of those cases where it is not. In fact, even
merge-ort is slower. The "apply" backend can complete this testcase in
6.940 s ± 0.485 s
which is about 2x faster than merge-ort and 3x faster than
merge-recursive. One goal of the merge-ort performance work will be to
make it faster than git-am on this (and similar) testcases.
2) mega-renames
2a) Obviously rename detection is a huge cost; it's where most of the time
is spent. We need to cut that down. If we could somehow infinitely
parallelize it and drive its time to 0, the merge-recursive time would
drop to about 204s, and the merge-ort time would drop to about 17s. I
think this particular stat shows I've subtly baked a couple of performance
improvements into merge-ort and into fast-rebase already.
3) just-one-mega
3a) Not much to say here; it just gives some flavor for how rebasing
only one patch compares to rebasing 35.
=== Goals ===
This patch is obviously just the beginning. Here are some of my goals
that this measurement will help us achieve:
* Drive the cost of rename detection down considerably for merges
* After the above has been achieved, see if there are other slowness
factors (which would have previously been overshadowed by rename
detection costs) which we can then focus on and also optimize.
* Ensure our rebase testcase that requires little rename detection
is noticeably faster with merge-ort than with apply-based rebase.
Signed-off-by: Elijah Newren <newren@gmail.com>
Acked-by: Taylor Blau <ttaylorr@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-24 06:01:12 +00:00
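As a concrete reference for the timings quoted above, the instrumentation is simply the existing trace2 region API bracketing each phase; the labels used in this file include "merge_start", "incore_nonrecursive" and "incore_recursive". A minimal sketch of the pattern, with an illustrative phase name, looks like this:
/*
 * Minimal sketch of the trace2 region pattern behind the timing
 * breakdowns above; "example_phase" is a placeholder label.
 */
static void timed_phase_sketch(struct merge_options *opt)
{
	trace2_region_enter("merge", "example_phase", opt->repo);
	/* ... the work being measured ... */
	trace2_region_leave("merge", "example_phase", opt->repo);
}
Pointing GIT_TRACE2_PERF at a file (for example, GIT_TRACE2_PERF=/tmp/merge.perf git rebase ...) records an enter/leave event pair per region, and per-phase breakdowns like the "inexact rename detection" rows above can be computed from the elapsed time between those events.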
|
|
|
trace2_region_leave("merge", "merge_start", opt->repo);
|
|
|
|
|
2020-12-13 08:04:09 +00:00
|
|
|
merge_ort_nonrecursive_internal(opt, merge_base, side1, side2, result);
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "incore_nonrecursive", opt->repo);
|
2020-10-27 02:08:07 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
void merge_incore_recursive(struct merge_options *opt,
|
|
|
|
struct commit_list *merge_bases,
|
|
|
|
struct commit *side1,
|
|
|
|
struct commit *side2,
|
|
|
|
struct merge_result *result)
|
|
|
|
{
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "incore_recursive", opt->repo);
|
|
|
|
|
2020-12-16 22:28:02 +00:00
|
|
|
/* We set the ancestor label based on the merge_bases */
|
|
|
|
assert(opt->ancestor == NULL);
|
|
|
|
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_enter("merge", "merge_start", opt->repo);
|
2020-12-16 22:28:02 +00:00
|
|
|
merge_start(opt, result);
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "merge_start", opt->repo);
|
|
|
|
|
2020-12-16 22:28:02 +00:00
|
|
|
merge_ort_internal(opt, merge_bases, side1, side2, result);
|
merge-ort: begin performance work; instrument with trace2_region_* calls
2021-01-24 06:01:12 +00:00
|
|
|
trace2_region_leave("merge", "incore_recursive", opt->repo);
|
2020-10-27 02:08:07 +00:00
|
|
|
}
|
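For orientation, a minimal caller sketch of the recursive entry point defined above follows. It is illustrative only: the branch names are placeholders, and it assumes the surrounding declarations (init_merge_options() from merge-recursive.h, lookup_commit_reference_by_name() and get_commit_tree() from commit.h, get_merge_bases() from commit-reach.h, merge_switch_to_result() from merge-ort.h) as they stand at the time of these commits.
/*
 * Illustrative caller sketch (not code from this file): drive
 * merge_incore_recursive() for two branches and apply the result.
 */
static int merge_two_branches_sketch(const char *us, const char *them)
{
	struct merge_options opt;
	struct merge_result result = { 0 };
	struct commit *side1 = lookup_commit_reference_by_name(us);
	struct commit *side2 = lookup_commit_reference_by_name(them);
	struct commit_list *merge_bases;

	if (!side1 || !side2)
		return -1;

	init_merge_options(&opt, the_repository);
	opt.branch1 = us;
	opt.branch2 = them;
	/*
	 * opt.ancestor stays NULL; as asserted above,
	 * merge_incore_recursive() derives the ancestor label from the
	 * merge bases itself.
	 */

	merge_bases = get_merge_bases(side1, side2);
	merge_incore_recursive(&opt, merge_bases, side1, side2, &result);

	/* Update index/worktree from the result and print any conflicts. */
	merge_switch_to_result(&opt, get_commit_tree(side1), &result, 1, 1);
	return result.clean ? 0 : 1;	/* 0 = clean merge */
}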