pack-objects: enforce --depth limit in reused deltas
Since 898b14c (pack-objects: rework check_delta_limit usage,
2007-04-16), we check the delta depth limit only when
figuring out whether we should make a new delta. We don't
consider it at all when reusing deltas, which means that if
you pack once with --depth=250 and then again with
--depth=50, the second pack may still contain chains longer
than 50.
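For example (illustrative invocations only):

    git repack -adf --depth=250
    git repack -ad --depth=50

After the second command, the output pack could still contain
reused delta chains longer than 50.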
This is generally considered a feature, as the results of
earlier high-depth repacks are carried forward, used for
serving fetches, etc. However, since we started using
cross-pack deltas in c9af708b1 (pack-objects: use mru list
when iterating over packs, 2016-08-11), we are no longer
bounded by the length of an existing delta chain in a single
pack.
Here's one particular pathological case: a sequence of N
packs, each with 2 objects, the base of which is stored as a
delta in a previous pack. If we chain all the deltas
together, we have a cycle of length N. We break the cycle,
but the tip delta is still at depth N-1.
This is less unlikely than it might sound. See the included
test for a reconstruction based on real-world actions. I
ran into such a case in the wild, where a client was rapidly
sending packs, and we had accumulated 10,000 before doing a
server-side repack. The pack that "git repack" tried to
generate had a very deep chain, which caused pack-objects to
run out of stack space in the recursive write_one().
This patch bounds the length of delta chains in the output
pack based on --depth, regardless of whether they are caused
by cross-pack deltas or existed in the input packs. This
fixes the problem, but does have two possible downsides:
1. High-depth aggressive repacks followed by "normal"
repacks will throw away the high-depth chains.
In the long run this is probably OK; investigation
showed that high-depth repacks aren't actually
beneficial, and we dropped the aggressive depth default
to match the normal case in 07e7dbf0d (gc: default
aggressive depth to 50, 2016-08-11).
2. If you really do want to store high-depth deltas on
disk, they may be discarded and a new delta computed when
serving a fetch, unless you set pack.depth to match
your high-depth size (see the example below).
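For example, to keep serving the deep chains from an earlier
aggressive repack, you could persist the higher limit (an
illustrative setup; 250 stands in for whatever depth you
actually used):

    git config pack.depth 250
    git repack -ad

pack-objects falls back to pack.depth whenever --depth is not
given on the command line.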
The implementation uses the existing search for delta
cycles. That lets us compute the depth of any node based on
the depth of its base, because we know the base is DFS_DONE
by the time we look at it (modulo any cycles in the graph,
but we know there cannot be any because we break them as we
see them).
There is some subtlety worth mentioning, though. We record
the depth of each object as we compute it. It might seem
like we could save the per-object storage space by just
keeping track of the depth of our traversal (i.e., have
break_delta_chains() report how deep it went). But we may
visit an object through multiple delta paths, and on
subsequent paths we want to know its depth immediately,
without having to walk back down to its final base (doing so
would make our graph walk quadratic rather than linear).
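To make that concrete, here is a minimal sketch of the
memoized computation (illustrative only, not the actual
break_delta_chains() code; assume every depth starts at -1
and that delta cycles have already been broken):

    #include <stddef.h>

    struct entry {
        struct entry *base; /* delta base, or NULL if stored whole */
        int depth;          /* memoized chain depth; -1 = not yet known */
    };

    static int chain_depth(struct entry *e, int max_depth)
    {
        if (e->depth >= 0)
            return e->depth;     /* already computed via another path */
        if (!e->base)
            return e->depth = 0; /* full object; chain length 0 */
        e->depth = chain_depth(e->base, max_depth) + 1;
        if (e->depth > max_depth) {
            /* too deep: drop the delta and store the object whole */
            e->base = NULL;
            e->depth = 0;
        }
        return e->depth;
    }

Because each entry caches its depth the first time it is
computed, revisiting it through another delta path costs O(1).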
Likewise, one could try to record the depth not from the
base, but from our starting point (i.e., start
recursion_depth at 0, and pass "recursion_depth + 1" to each
invocation of break_delta_chains()). And then when
recursion_depth gets too big, we know that we must cut the
delta chain. But that technique is wrong if we do not visit
the nodes in topological order. In a chain A->B->C, if
we visit "C", then "B", then "A", we will never recurse
deeper than 1 link (because we see at each node that we have
already visited it).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
#!/bin/sh
test_description='pack-objects breaks long cross-pack delta chains'
. ./test-lib.sh
# This mirrors a repeated push setup:
#
# 1. A client repeatedly modifies some files, makes a
#    commit, and pushes the result. It does this N times
#    before we get around to repacking.
#
# 2. Each push generates a thin pack with the new version of
#    various objects. Let's consider some file in the root tree
#    which is updated in each commit.
#
#    When generating push number X, we feed commit X-1 (and
#    thus blob X-1) as a preferred base. The resulting pack has
#    blob X as a thin delta against blob X-1.
#
#    On the receiving end, "index-pack --fix-thin" will
#    complete the pack with a base copy of blob X-1.
#
# 3. In older versions of git, if we used the delta from
#    pack X, then we'd always find blob X-1 as a base in the
#    same pack (and generate a fresh delta).
#
#    But with the pack mru, we jump from delta to delta
#    following the traversal order:
#
#      a. We grab blob X from pack X as a delta, putting it at
#         the tip of our mru list.
#
#      b. Eventually we move onto commit X-1. We need other
#         objects which are only in pack X-1 (in the test code
#         below, it's the containing tree). That puts pack X-1
#         at the tip of our mru list.
#
#      c. Eventually we look for blob X-1, and we find the
#         version in pack X-1 (because it's the mru tip).
#
# Now we have blob X as a delta against X-1, which is a delta
# against X-2, and so forth.
#
# In the real world, these small pushes would get exploded by
# unpack-objects rather than "index-pack --fix-thin", but the
# same principle applies to larger pushes (they only need one
# repeatedly-modified file to generate the delta chain).

test_expect_success 'create series of packs' '
	test-tool genrandom foo 4096 >content &&
	prev= &&
	for i in $(test_seq 1 10)
	do
		cat content >file &&
		echo $i >>file &&
		git add file &&
		git commit -m $i &&
		cur=$(git rev-parse HEAD^{tree}) &&
		{
			test -n "$prev" && echo "-$prev"
			echo $cur
			echo "$(git rev-parse :file) file"
		} | git pack-objects --stdout >tmp &&
		git index-pack --stdin --fix-thin <tmp || return 1
		prev=$cur
	done
'
# Print the longest delta chain length in the given pack, as
# reported by "git index-pack --verify-stat-only".
max_chain() {
	git index-pack --verify-stat-only "$1" >output &&
	perl -lne '
		/chain length = (\d+)/ and $len = $1;
		END { print $len }
	' output
}
# Note that this whole setup is pretty reliant on the current
# packing heuristics. We double-check that our test case
# actually produces a long chain. If it doesn't, it should be
# adjusted (or scrapped if the heuristics have become too
# unreliable).
test_expect_success 'packing produces a long delta' '
	# Use --window=0 to make sure we are seeing reused deltas,
	# not computing a new long chain.
	pack=$(git pack-objects --all --window=0 </dev/null pack) &&
	echo 9 >expect &&
	max_chain pack-$pack.pack >actual &&
	test_cmp expect actual
'
test_expect_success '--depth limits depth' '
	pack=$(git pack-objects --all --depth=5 </dev/null pack) &&
	echo 5 >expect &&
	max_chain pack-$pack.pack >actual &&
	test_cmp expect actual
'
test_done