#!/bin/sh

test_description='Testing the various Bloom filter computations in bloom.c'
. ./test-lib.sh
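
# The expected values below assume the default changed-path Bloom filter
# settings in bloom.c: 7 hash functions and 10 bits per entry, with the
# individual hash values derived from two seeded Murmur3 hashes.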
test_expect_success 'compute unseeded murmur3 hash for empty string' '
	cat >expect <<-\EOF &&
	Murmur3 Hash with seed=0:0x00000000
	EOF
	test-tool bloom get_murmur3 "" >actual &&
	test_cmp expect actual
'

test_expect_success 'compute unseeded murmur3 hash for test string 1' '
	cat >expect <<-\EOF &&
	Murmur3 Hash with seed=0:0x627b0c2c
	EOF
	test-tool bloom get_murmur3 "Hello world!" >actual &&
	test_cmp expect actual
'

test_expect_success 'compute unseeded murmur3 hash for test string 2' '
	cat >expect <<-\EOF &&
	Murmur3 Hash with seed=0:0x2e4ff723
	EOF
	test-tool bloom get_murmur3 "The quick brown fox jumps over the lazy dog" >actual &&
	test_cmp expect actual
'
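
# "test-tool bloom generate_filter" prints the seven hash values computed for
# the given string, the filter length in bytes, and the filter data in hex.
# The seven hashes form an arithmetic progression (double hashing of the two
# seeded Murmur3 values), and a single 10-bit entry rounds up to 2 bytes.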
test_expect_success 'compute bloom key for empty string' '
	cat >expect <<-\EOF &&
	Hashes:0x5615800c|0x5b966560|0x61174ab4|0x66983008|0x6c19155c|0x7199fab0|0x771ae004|
	Filter_Length:2
	Filter_Data:11|11|
	EOF
	test-tool bloom generate_filter "" >actual &&
	test_cmp expect actual
'

test_expect_success 'compute bloom key for whitespace' '
	cat >expect <<-\EOF &&
	Hashes:0xf178874c|0x5f3d6eb6|0xcd025620|0x3ac73d8a|0xa88c24f4|0x16510c5e|0x8415f3c8|
	Filter_Length:2
	Filter_Data:51|55|
	EOF
	test-tool bloom generate_filter " " >actual &&
	test_cmp expect actual
'

test_expect_success 'compute bloom key for test string 1' '
	cat >expect <<-\EOF &&
	Hashes:0xb270de9b|0x1bb6f26e|0x84fd0641|0xee431a14|0x57892de7|0xc0cf41ba|0x2a15558d|
	Filter_Length:2
	Filter_Data:92|6c|
	EOF
	test-tool bloom generate_filter "Hello world!" >actual &&
	test_cmp expect actual
'

test_expect_success 'compute bloom key for test string 2' '
	cat >expect <<-\EOF &&
	Hashes:0x20ab385b|0xf5237fe2|0xc99bc769|0x9e140ef0|0x728c5677|0x47049dfe|0x1b7ce585|
	Filter_Length:2
	Filter_Data:a5|4a|
	EOF
	test-tool bloom generate_filter "file.txt" >actual &&
	test_cmp expect actual
'
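
# A commit that changes no paths is stored as a single byte with no bits set,
# so every query against its filter answers "definitely not".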
test_expect_success 'get bloom filters for commit with no changes' '
	git init &&
	git commit --allow-empty -m "c0" &&
	cat >expect <<-\EOF &&
	Filter_Length:1
	Filter_Data:00|
	EOF
	test-tool bloom get_filter_for_commit "$(git rev-parse HEAD)" >actual &&
	test_cmp expect actual
'
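
# The ten files plus the leading "smallDir" directory make 11 changed paths;
# at 10 bits per path the filter rounds up to 14 bytes.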
test_expect_success 'get bloom filter for commit with 10 changes' '
	rm actual &&
	rm expect &&
	mkdir smallDir &&
	for i in $(test_seq 0 9)
	do
		echo $i >smallDir/$i
	done &&
	git add smallDir &&
	git commit -m "commit with 10 changes" &&
	cat >expect <<-\EOF &&
	Filter_Length:14
	Filter_Data:02|b3|c4|a0|34|e7|fe|eb|cb|47|fe|a0|e8|72|
	EOF
	test-tool bloom get_filter_for_commit "$(git rev-parse HEAD)" >actual &&
	test_cmp expect actual
'
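
# The 512 files plus the leading "bigDir" directory make 513 changed paths,
# exceeding the limit on filter size, so the filter is stored as a single
# byte with every bit set and all queries against it answer "maybe".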
test_expect_success EXPENSIVE 'get bloom filter for commit with 513 changes' '
	rm actual &&
	rm expect &&
	mkdir bigDir &&
	for i in $(test_seq 0 511)
	do
		echo $i >bigDir/$i
	done &&
	git add bigDir &&
	git commit -m "commit with 513 changes" &&
	cat >expect <<-\EOF &&
	Filter_Length:1
	Filter_Data:ff|
	EOF
	test-tool bloom get_filter_for_commit "$(git rev-parse HEAD)" >actual &&
	test_cmp expect actual
'
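
# For context: these are the changed-path filters that "git commit-graph
# write" can store (e.g. with --changed-paths) to speed up pathspec-limited
# "git log".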
test_done