Commit graph

2 commits

Jeff King
f66e0401ab pack-objects: avoid pointless oe_map_new_pack() calls
This patch fixes an extreme slowdown in pack-objects when you have more
than 1023 packs. See below for numbers.

Since 43fa44fa3b (pack-objects: move in_pack out of struct object_entry,
2018-04-14), we use a complicated system to save some per-object memory.

Each object_entry struct gets a 10-bit field to store the index of the
pack it's in. We map those indices into pointers using
packing_data->in_pack_by_idx, which we initialize at the start of the
program. If we have 2^10 or more packs, then we instead create an array
of pack pointers, one per object. This is packing_data->in_pack.
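
For reference, here is a trimmed sketch of the two representations. The
field and function names follow the description above, but this is a
simplification of the real pack-objects.h, not the verbatim source:

  struct object_entry {
      /* ... */
      unsigned in_pack_idx : 10;  /* index into in_pack_by_idx */
  };

  struct packing_data {
      struct object_entry *objects;
      uint32_t nr_objects, nr_alloc;

      /* at most one of these two mechanisms is active at a time */
      struct packed_git **in_pack_by_idx;  /* 2^10 pointers, by index */
      struct packed_git **in_pack;         /* one pointer per object  */
      /* ... */
  };

  static inline struct packed_git *oe_in_pack(const struct packing_data *pack,
                                              const struct object_entry *e)
  {
      if (pack->in_pack_by_idx)
          return pack->in_pack_by_idx[e->in_pack_idx];
      else
          return pack->in_pack[e - pack->objects];
  }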

So far so good. But there's one other tricky case: if a new pack arrives
after we've initialized in_pack_by_idx, it won't have an index yet. We
solve that by calling oe_map_new_pack(), which just switches on the fly
to the less-optimal in_pack mechanism, allocating the array and
back-filling it for already-seen objects.
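
Before this patch, the function looked roughly like this (again a
sketch, modulo details of the real pack-objects.c):

  void oe_map_new_pack(struct packing_data *pack)
  {
      uint32_t i;

      /* switch to (or grow) the per-object array... */
      REALLOC_ARRAY(pack->in_pack, pack->nr_alloc);

      /*
       * ...and back-fill it for every object seen so far. Note that
       * if in_pack_by_idx is already NULL, oe_in_pack() reads from
       * in_pack, so this loop just copies each entry onto itself.
       */
      for (i = 0; i < pack->nr_objects; i++)
          pack->in_pack[i] = oe_in_pack(pack, pack->objects + i);

      free(pack->in_pack_by_idx);
      pack->in_pack_by_idx = NULL;
  }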

But that logic kicks in even when we have already switched to the
in_pack mechanism (whether because we really did see a new pack, or
because we had too many packs in the first place). The result isn't
wrong, but it's very slow. What happens is this:

  - imagine you have a repo with 500k objects and 2000 packs that you
    want to repack.

  - before looking at any objects, we call prepare_in_pack_by_idx(). It
    starts allocating an index for each pack. On the 1024th pack, it
    sees there are too many, so it bails, leaving in_pack_by_idx as
    NULL.

  - while actually adding objects to the packing list, we call
    oe_set_in_pack(), which checks whether the pack already has an
    index. If it's one of the packs after the first 1023, then it
    doesn't have one, and we'll call oe_map_new_pack() (see the
    pre-fix sketch after this list).

    But there's no useful work for that function to do. We're already
    using in_pack, so it just uselessly walks over the complete list of
    objects, trying to back-fill in_pack.

    And we end up doing this for almost 1000 packs (and each pack may
    trigger the call more than once, via multiple objects). Each time it
    triggers, we may iterate over up to 500k objects, so in the absolute
    worst case this is quadratic in the number of objects.
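
Here is a sketch of the pre-fix oe_set_in_pack() (simplified; the real
function lives in pack-objects.h):

  static inline void oe_set_in_pack(struct packing_data *pack,
                                    struct object_entry *e,
                                    struct packed_git *p)
  {
      if (!p->index)
          oe_map_new_pack(pack);  /* fires even if in_pack_by_idx is NULL */
      if (pack->in_pack_by_idx)
          e->in_pack_idx = p->index;
      else
          pack->in_pack[e - pack->objects] = p;
  }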

The solution is simple: we don't need to bother checking whether the
pack has an index if we've already converted to using in_pack, since by
definition we're not going to use it. So we can just push the "does the
pack have a valid index" check down into that half of the conditional,
where we know we're going to use it.
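
In code, the reworked function might look like this (a sketch of the
shape of the fix, not necessarily the verbatim patch):

  static inline void oe_set_in_pack(struct packing_data *pack,
                                    struct object_entry *e,
                                    struct packed_git *p)
  {
      if (pack->in_pack_by_idx) {
          if (p->index) {
              e->in_pack_idx = p->index;
              return;
          }
          /* a pack that arrived after we built the index mapping */
          oe_map_new_pack(pack);
      }
      pack->in_pack[e - pack->objects] = p;
  }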

The current test in p5303 sadly doesn't notice this problem, since it
maxes out at 1000 packs. If we add a new test to it at 2000 packs, it
does show the improvement:

  Test                      HEAD^               HEAD
  ----------------------------------------------------------------------
  5303.12: repack (2000)    26.72(39.68+0.67)   15.70(28.70+0.66) -41.2%

However, these many-pack test cases are rather expensive to run, so
adding larger and larger numbers isn't appealing. Instead, we can show
it off more easily by using GIT_TEST_FULL_IN_PACK_ARRAY, which forces us
into the absolute worst case: no pack has an index, so we'll trigger
oe_map_new_pack() pointlessly for every single object, making it truly
quadratic.

Here are the numbers (on git.git) with the included change to p5303:

  Test                      HEAD^               HEAD
  ----------------------------------------------------------------------
  5303.3: rev-list (1)      2.05(1.98+0.06)     2.06(1.99+0.06) +0.5%
  5303.4: repack (1)        33.45(33.46+0.19)   2.75(2.73+0.22) -91.8%
  5303.6: rev-list (50)     2.07(2.01+0.06)     2.06(2.01+0.05) -0.5%
  5303.7: repack (50)       34.21(35.18+0.16)   3.49(4.50+0.12) -89.8%
  5303.9: rev-list (1000)   2.87(2.78+0.08)     2.88(2.80+0.07) +0.3%
  5303.10: repack (1000)    41.26(51.30+0.47)   10.75(20.75+0.44) -73.9%

Again, those improvements aren't realistic for the 1-pack case
(because in the real world, the full-array solution doesn't kick in),
but it's more useful to test the more-complicated code path.

While we're looking at this issue, we'll tweak one more thing: in
oe_map_new_pack(), we call REALLOC_ARRAY(pack->in_pack). But we'd never
expect to get here unless we're back-filling it for the first time, in
which case it would be NULL. So let's switch that to ALLOC_ARRAY() for
clarity, and add a BUG() to document the expectation. Unfortunately this
code isn't well-covered in the test suite because it's inherently racy
(it only kicks in if somebody else adds a new pack while we're in the
middle of repacking).
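
The reworked opening of oe_map_new_pack() might then read (a sketch,
with an assumed wording for the BUG message):

  if (pack->in_pack)
      BUG("packing_data has already gone through oe_map_new_pack");

  ALLOC_ARRAY(pack->in_pack, pack->nr_alloc);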

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-11-12 13:36:36 +09:00
Jeff King
77023ea3c3 t/perf: add tests for many-pack scenarios
Git's pack storage does efficient O(log n) lookups in a
single packfile's index, but if we have multiple packfiles,
we have to linearly search each for a given object.  This
patch introduces some timing tests for cases where we have a
large number of packs, so that we can measure any
improvements we make in the following patches.

The main thing we want to time is object lookup. To do this,
we measure "git rev-list --objects --all", which does a
fairly large number of object lookups (essentially one per
object in the repository).

However, we also measure the time to do a full repack, which
is interesting for two reasons. One is that in addition to
the usual pack lookup, it has its own linear iteration over
the list of packs. And two is that because it is the tool
one uses to go from an inefficient many-pack situation back
to a single pack, we care about its performance not only at
marginal numbers of packs, but at the extreme cases (e.g.,
if you somehow end up with 5,000 packs, it is the only way
to get back to 1 pack, so we need to make sure it performs
well).

We measure the performance of each command in three
scenarios: 1 pack, 50 packs, and 1,000 packs.

The 1-pack case is a baseline; any optimizations we do to
handle multiple packs cannot possibly perform better than
this.

The 50-pack case is as far as Git should generally allow
your repository to go, if you have auto-gc enabled with the
default settings. So this represents the maximum performance
improvement we would expect under normal circumstances.

The 1,000-pack case is hopefully rare, though I have seen it
in the wild where automatic maintenance was broken for some
time (and the repository continued to receive pushes). This
represents cases where we care less about general
performance, but want to make sure that a full repack
command does not take excessively long.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-07-29 11:05:06 -07:00