gitformat-commit-graph(5)
=========================
NAME
----
gitformat-commit-graph - Git commit-graph format
SYNOPSIS
--------
[verse]
$GIT_DIR/objects/info/commit-graph
$GIT_DIR/objects/info/commit-graphs/*
DESCRIPTION
-----------
The Git commit-graph stores a list of commit OIDs and some associated
metadata, including:
- The generation number of the commit.
- The root tree OID.
- The commit date.
- The parents of the commit, stored using positional references within
the graph file.
- The Bloom filter of the commit carrying the paths that were changed between
the commit and its first parent, if requested.
These positional references are stored as unsigned 32-bit integers
corresponding to the array position within the list of commit OIDs. Due
to some special constants we use to track parents, we can store at most
(1 << 30) + (1 << 29) + (1 << 28) - 1 (around 1.8 billion) commits.
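For a quick check of the arithmetic behind that figure (a Python snippet
for illustration only, not part of Git):

    # Illustrative arithmetic only: the largest commit count addressable by
    # the 32-bit positional references once the special parent values are
    # reserved, as stated above.
    max_commits = (1 << 30) + (1 << 29) + (1 << 28) - 1
    print(max_commits)  # 1879048191, i.e. roughly 1.8 billion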
== Commit-graph files have the following format:
In order to allow extensions that add extra data to the graph, we organize
the body into "chunks" and provide a binary lookup table at the beginning
of the body. The header includes certain values, such as number of chunks
and hash type.
All multi-byte numbers are in network byte order.
=== HEADER:
4-byte signature:
The signature is: {'C', 'G', 'P', 'H'}
1-byte version number:
Currently, the only valid version is 1.
1-byte Hash Version
We infer the hash length (H) from this value:
1 => SHA-1
2 => SHA-256
If the hash type does not match the repository's hash algorithm, the
commit-graph file should be ignored with a warning presented to the
user.
1-byte number (C) of "chunks"
1-byte number (B) of base commit-graphs
We infer the length (H*B) of the Base Graphs chunk
from this value.
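Taken together, the header occupies eight bytes. The following Python
sketch (illustrative only, not Git's implementation) shows one way to
parse it, assuming the whole file has been read into a bytes object
`data`:

    import struct

    # Minimal header-parsing sketch (not Git's implementation).
    def parse_header(data):
        signature, version, hash_version, num_chunks, num_base_graphs = \
            struct.unpack_from(">4sBBBB", data, 0)
        if signature != b"CGPH":
            raise ValueError("not a commit-graph file")
        if version != 1:
            raise ValueError("unknown commit-graph version")
        hash_len = {1: 20, 2: 32}[hash_version]  # SHA-1 => 20, SHA-256 => 32
        return hash_len, num_chunks, num_base_graphs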
=== CHUNK LOOKUP:
(C + 1) * 12 bytes listing the table of contents for the chunks:
First 4 bytes describe the chunk id. Value 0 is a terminating label.
The other 8 bytes provide the byte offset in the current file at which
the chunk starts. (Chunks are ordered contiguously in the file, so you
can infer the length using the next chunk position if necessary.) Each
chunk ID appears at most once.
The CHUNK LOOKUP matches the table of contents from
the chunk-based file format, see linkgit:gitformat-chunk[5].
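Continuing the hypothetical Python parser from the header section
(illustrative only), the table of contents can be read as follows; each
chunk's length is inferred from the offset of the next entry:

    import struct

    # Illustrative only: read the (C + 1) chunk lookup entries, which start
    # immediately after the 8-byte header. The terminating entry (id 0)
    # supplies the offset at which the last chunk ends.
    def parse_chunk_lookup(data, num_chunks):
        entries = []
        for i in range(num_chunks + 1):
            chunk_id, offset = struct.unpack_from(">4sQ", data, 8 + 12 * i)
            entries.append((chunk_id, offset))
        chunks = {}
        for (chunk_id, start), (_, next_start) in zip(entries, entries[1:]):
            chunks[chunk_id] = (start, next_start - start)
        return chunks  # e.g. {b"OIDF": (offset, length), ...}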
The remaining data in the body is described one chunk at a time, and
these chunks may be given in any order. Chunks are required unless
otherwise specified.
=== CHUNK DATA:
==== OID Fanout (ID: {'O', 'I', 'D', 'F'}) (256 * 4 bytes)
The ith entry, F[i], stores the number of OIDs with first
byte at most i. Thus F[255] stores the total
number of commits (N).
==== OID Lookup (ID: {'O', 'I', 'D', 'L'}) (N * H bytes)
The OIDs for all commits in the graph, sorted in ascending order.
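Together, the fanout and lookup chunks support the usual two-step
search: the fanout narrows the candidate range by the first byte of the
OID, and a binary search over that range finds the exact entry. A rough
sketch (illustrative only), reusing the hypothetical `chunks` mapping
and `hash_len` from the sketches above:

    import struct

    # Illustrative only: return the graph position of `oid` (raw bytes of
    # length hash_len), or None if it is not present in this file.
    def lookup_oid(data, chunks, hash_len, oid):
        fanout_off, _ = chunks[b"OIDF"]
        lookup_off, _ = chunks[b"OIDL"]
        first = oid[0]
        lo = 0 if first == 0 else \
            struct.unpack_from(">I", data, fanout_off + 4 * (first - 1))[0]
        hi = struct.unpack_from(">I", data, fanout_off + 4 * first)[0]
        while lo < hi:
            mid = (lo + hi) // 2
            start = lookup_off + mid * hash_len
            entry = data[start:start + hash_len]
            if entry == oid:
                return mid
            elif entry < oid:
                lo = mid + 1
            else:
                hi = mid
        return None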
==== Commit Data (ID: {'C', 'D', 'A', 'T' }) (N * (H + 16) bytes)
* The first H bytes are for the OID of the root tree.
* The next 8 bytes are for the positions of the first two parents
of the ith commit. Stores value 0x70000000 if no parent in that
position. If there are more than two parents, the second value
has its most-significant bit on and the other bits store an array
position into the Extra Edge List chunk.
* The next 8 bytes store the topological level (generation number v1)
of the commit and the commit time in seconds since EPOCH. The
generation number uses the higher 30 bits of the first 4 bytes, while
the commit time uses the 32 bits of the second 4 bytes along with the
lowest 2 bits of the first 4 bytes, which store the 33rd and 34th bits
of the commit time.
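A sketch of decoding one Commit Data record under the layout above
(illustrative only); the record for graph position `i` starts
`i * (hash_len + 16)` bytes into the chunk, and resolving a
second-parent field with its most-significant bit on is covered with
the Extra Edge List chunk below:

    import struct

    GRAPH_PARENT_NONE = 0x70000000  # "no parent in this position"

    # Illustrative only: decode the fixed-size record for graph position i.
    # The two parent fields are returned raw; see the Extra Edge List chunk
    # for how an octopus second-parent value is resolved.
    def parse_commit_data(data, chunks, hash_len, i):
        cdat_off, _ = chunks[b"CDAT"]
        rec = cdat_off + i * (hash_len + 16)
        root_tree = data[rec:rec + hash_len]
        parent1, parent2, w1, w2 = struct.unpack_from(">IIII", data,
                                                      rec + hash_len)
        generation = w1 >> 2                   # topological level (30 bits)
        commit_time = ((w1 & 0x3) << 32) | w2  # 34-bit commit time
        return root_tree, parent1, parent2, generation, commit_time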
==== Generation Data (ID: {'G', 'D', 'A', '2' }) (N * 4 bytes) [Optional]
* This list of 4-byte values stores the corrected commit date offsets
for the commits, arranged in the same order as the Commit Data chunk.
* If the corrected commit date offset cannot be stored within 31 bits,
the value has its most-significant bit on and the other bits store the
position of the corrected commit date offset in the Generation Data
Overflow chunk.
* The Generation Data chunk is present only when the commit-graph file
is written by compatible versions of Git; in the case of split
commit-graph chains, the topmost layer also has a Generation Data
chunk.
==== Generation Data Overflow (ID: {'G', 'D', 'O', '2' }) [Optional]
* This list of 8-byte values stores the corrected commit date offsets
for commits whose corrected commit date offset cannot be stored within
31 bits.
* The Generation Data Overflow chunk is present only when the Generation
Data chunk is present and at least one corrected commit date offset
cannot be stored within 31 bits.
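A sketch of reading the corrected commit date for graph position `i`
(illustrative only); it assumes, as in Git's implementation, that the
stored offset is added to the commit time taken from the Commit Data
chunk:

    import struct

    # Illustrative only: compute the corrected commit date for graph
    # position i. The offset is assumed to be relative to the commit time
    # from the Commit Data chunk.
    def corrected_commit_date(data, chunks, commit_time, i):
        gdat_off, _ = chunks[b"GDA2"]
        value = struct.unpack_from(">I", data, gdat_off + 4 * i)[0]
        if value & 0x80000000:
            # Too large for 31 bits: the remaining bits index into the
            # 8-byte entries of the Generation Data Overflow chunk.
            gdov_off, _ = chunks[b"GDO2"]
            pos = value & 0x7fffffff
            offset = struct.unpack_from(">Q", data, gdov_off + 8 * pos)[0]
        else:
            offset = value
        return commit_time + offset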
==== Extra Edge List (ID: {'E', 'D', 'G', 'E'}) [Optional]
This list of 4-byte values stores the second through nth parents for
all octopus merges. The second parent value in the commit data stores
an array position within this list along with the most-significant bit
on. Starting at that array position, iterate through this list of commit
positions for the parents until reaching a value with the most-significant
bit on. The other bits correspond to the position of the last parent.
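Continuing the parsing sketches above, the full list of parent
positions for a commit can be collected as follows (illustrative only):

    import struct

    GRAPH_PARENT_NONE = 0x70000000

    # Illustrative only: expand the two raw parent fields from the Commit
    # Data chunk into a list of parent graph positions.
    def parent_positions(data, chunks, parent1, parent2):
        parents = []
        if parent1 != GRAPH_PARENT_NONE:
            parents.append(parent1)
        if parent2 == GRAPH_PARENT_NONE:
            return parents
        if not (parent2 & 0x80000000):
            parents.append(parent2)
            return parents
        # Octopus merge: walk the Extra Edge List starting at the stored
        # position until the entry whose most-significant bit is set, which
        # marks the last parent.
        edge_off, _ = chunks[b"EDGE"]
        pos = parent2 & 0x7fffffff
        while True:
            value = struct.unpack_from(">I", data, edge_off + 4 * pos)[0]
            parents.append(value & 0x7fffffff)
            if value & 0x80000000:
                break
            pos += 1
        return parents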
==== Bloom Filter Index (ID: {'B', 'I', 'D', 'X'}) (N * 4 bytes) [Optional]
* The ith entry, BIDX[i], stores the number of bytes in all Bloom filters
from commit 0 to commit i (inclusive) in lexicographic order. The Bloom
filter for the i-th commit spans from BIDX[i-1] to BIDX[i] (plus header
length), where BIDX[-1] is 0.
* The BIDX chunk is ignored if the BDAT chunk is not present.
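A sketch of locating the filter bytes for graph position `i`
(illustrative only); the 12 bytes skipped below account for the
three-integer header of the Bloom Filter Data chunk described next:

    import struct

    # Illustrative only: return the raw Bloom filter bytes for graph
    # position i, using the cumulative byte counts in the BIDX chunk.
    def bloom_filter_bytes(data, chunks, i):
        bidx_off, _ = chunks[b"BIDX"]
        bdat_off, _ = chunks[b"BDAT"]
        start = 0 if i == 0 else \
            struct.unpack_from(">I", data, bidx_off + 4 * (i - 1))[0]
        end = struct.unpack_from(">I", data, bidx_off + 4 * i)[0]
        filters = bdat_off + 12  # skip the three 32-bit BDAT header fields
        return data[filters + start:filters + end]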
==== Bloom Filter Data (ID: {'B', 'D', 'A', 'T'}) [Optional]
* It starts with a header consisting of three unsigned 32-bit integers:
- Version of the hash algorithm being used. We currently only support
value 1 which corresponds to the 32-bit version of the murmur3 hash
implemented exactly as described in
https://en.wikipedia.org/wiki/MurmurHash#Algorithm and the double
hashing technique using seed values 0x293ae76f and 0x7e646e2 as
described in https://doi.org/10.1007/978-3-540-30494-4_26 "Bloom Filters
in Probabilistic Verification"
- The number of times a path is hashed and hence the number of bit positions
that cumulatively determine whether a file is present in the commit.
- The minimum number of bits 'b' per entry in the Bloom filter. If the filter
contains 'n' entries, then the filter size is the minimum number of 64-bit
words that contain n*b bits.
* The rest of the chunk is the concatenation of all the computed Bloom
filters for the commits in lexicographic order.
* Note: Commits with no changes or more than 512 changes have Bloom filters
of length one, with either all bits set to zero or one respectively.
* The BDAT chunk is present if and only if BIDX is present.
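A sketch of querying a filter with the double-hashing scheme described
above (illustrative only). It assumes a helper `murmur3_32(data, seed)`
implementing the 32-bit murmur3 hash, and it assumes bits are addressed
least-significant first within each byte, which this document does not
spell out; `num_hashes` is the second header field of this chunk:

    # Illustrative only: probabilistic test of whether `path` was changed by
    # the commit owning `filter_bytes`. murmur3_32(data, seed) is an assumed
    # helper for the 32-bit murmur3 hash described above; the bit ordering
    # within each byte is likewise an assumption of this sketch.
    def may_contain(filter_bytes, path, num_hashes):
        if len(filter_bytes) == 0:
            return None  # no filter stored for this commit
        h1 = murmur3_32(path.encode(), 0x293ae76f)
        h2 = murmur3_32(path.encode(), 0x7e646e2)
        nbits = 8 * len(filter_bytes)
        for i in range(num_hashes):
            bit = ((h1 + i * h2) & 0xffffffff) % nbits
            if not (filter_bytes[bit // 8] >> (bit % 8)) & 1:
                return False  # definitely not changed
        return True           # possibly changed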
==== Base Graphs List (ID: {'B', 'A', 'S', 'E'}) [Optional]
This list of H-byte hashes describes a set of B commit-graph files that
form a commit-graph chain. The graph position for the ith commit in this
file's OID Lookup chunk is equal to i plus the number of commits in all
base graphs. If B is non-zero, this chunk must exist.
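For illustration, mapping a position local to this file's OID Lookup
chunk to a chain-wide graph position is then a simple shift (a
hypothetical helper, not Git's code):

    # Illustrative only: `commits_in_base_graphs` is the total number of
    # commits contained in all B base graphs of the chain.
    def chain_position(local_index, commits_in_base_graphs):
        return commits_in_base_graphs + local_index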
=== TRAILER:
H-byte HASH-checksum of all of the above.
== Historical Notes:
The Generation Data (GDA2) and Generation Data Overflow (GDO2) chunks have
the number '2' in their chunk IDs because a previous version of Git wrote
possibly erroneous data in these chunks with the IDs "GDAT" and "GDOV". By
changing the IDs, newer versions of Git will silently ignore those older
chunks and write the new information without trusting the incorrect data.
GIT
---
Part of the linkgit:git[1] suite