gitformat-pack(5)
=================
NAME
----
gitformat-pack - Git pack format
SYNOPSIS
--------
[verse]
$GIT_DIR/objects/pack/pack-*.{pack,idx}
$GIT_DIR/objects/pack/pack-*.rev
$GIT_DIR/objects/pack/pack-*.mtimes
$GIT_DIR/objects/pack/multi-pack-index
DESCRIPTION
-----------
The Git pack format is how Git stores most of its primary repository
data. Over the lifetime of a repository, loose objects (if any) and
smaller packs are consolidated into larger pack(s). See
linkgit:git-gc[1] and linkgit:git-pack-objects[1].
The pack format is also used over-the-wire, see
e.g. linkgit:gitprotocol-v2[5], as well as being a part of
other container formats in the case of linkgit:gitformat-bundle[5].
== Checksums and object IDs
In a repository using the traditional SHA-1, pack checksums, index checksums,
and object IDs (object names) mentioned below are all computed using SHA-1.
Similarly, in SHA-256 repositories, these values are computed using SHA-256.
== pack-*.pack files have the following format:
- A header appears at the beginning and consists of the following
  (a parsing sketch follows this list):

     4-byte signature:
         The signature is: {'P', 'A', 'C', 'K'}

     4-byte version number (network byte order):
         Git currently accepts version number 2 or 3 but
         generates version 2 only.

     4-byte number of objects contained in the pack (network byte order)

     Observation: we cannot have more than 4G versions ;-) and
     more than 4G objects in a pack.

- The header is followed by a number of object entries, each of
  which looks like this:

     (undeltified representation)
     n-byte type and length (3-bit type, (n-1)*7+4-bit length)
     compressed data

     (deltified representation)
     n-byte type and length (3-bit type, (n-1)*7+4-bit length)
     base object name if OBJ_REF_DELTA or a negative relative
         offset from the delta object's position in the pack if this
         is an OBJ_OFS_DELTA object
     compressed delta data

     Observation: length of each object is encoded in a variable
     length format and is not constrained to 32-bit or anything.

- The trailer records a pack checksum of all of the above.
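
For illustration only, the following sketch (in Python, not part of Git)
reads the fixed 12-byte header and verifies the trailing checksum of a
pack in a SHA-1 repository; the file path and function name are invented
for the example.

----
import hashlib
import struct

def read_pack_header_and_trailer(path, hash_len=20):
    # Sketch: parse the pack header and verify the trailing pack checksum.
    # hash_len=20 assumes SHA-1; a SHA-256 repository would use 32.
    with open(path, "rb") as f:
        data = f.read()

    signature, version, num_objects = struct.unpack(">4sII", data[:12])
    assert signature == b"PACK"
    assert version in (2, 3)            # Git generates version 2 only

    # The trailer is a checksum of everything that precedes it.
    assert hashlib.sha1(data[:-hash_len]).digest() == data[-hash_len:]
    return version, num_objects

# Hypothetical usage:
# version, count = read_pack_header_and_trailer(".git/objects/pack/pack-1234.pack")
----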
=== Object types
Valid object types are:
- OBJ_COMMIT (1)
- OBJ_TREE (2)
- OBJ_BLOB (3)
- OBJ_TAG (4)
- OBJ_OFS_DELTA (6)
- OBJ_REF_DELTA (7)
Type 5 is reserved for future expansion. Type 0 is invalid.
=== Size encoding
This document uses the following "size encoding" of non-negative
integers: From each byte, the seven least significant bits are
used to form the resulting integer. As long as the most significant
bit is 1, this process continues; the byte with MSB 0 provides the
last seven bits. The seven-bit chunks are concatenated. Later
values are more significant.
This size encoding should not be confused with the "offset encoding",
which is also used in this document.
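
For illustration, a decoder for this size encoding might look like the
sketch below (Python, with a function name of our own choosing); the
least significant seven bits arrive first.

----
def decode_size(buf, pos):
    # Sketch: decode the "size encoding" described above.
    # Seven bits per byte, least-significant chunk first; a clear MSB
    # marks the final byte. Returns (value, new_position).
    value = 0
    shift = 0
    while True:
        byte = buf[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            return value, pos

# Example: 0x91 0x02 encodes 0x11 + (0x02 << 7) = 273.
assert decode_size(bytes([0x91, 0x02]), 0) == (273, 2)
----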
=== Deltified representation
Conceptually there are only four object types: commit, tree, tag and
blob. However, to save space, an object can be stored as a "delta" of
another "base" object. These representations are assigned the new types
ofs-delta and ref-delta, which are only valid in a pack file.
Both ofs-delta and ref-delta store the "delta" to be applied to
another object (called 'base object') to reconstruct the object. The
difference between them is that ref-delta directly encodes the base
object name. If the base object is in the same pack, ofs-delta encodes
the offset of the base object in the pack instead.
The base object could also be deltified if it's in the same pack.
Ref-delta can also refer to an object outside the pack (i.e. the
so-called "thin pack"). When stored on disk however, the pack should
be self contained to avoid cyclic dependency.
The delta data starts with the size of the base object and the
size of the object to be reconstructed. These sizes are
encoded using the size encoding from above. The remainder of
the delta data is a sequence of instructions to reconstruct the object
from the base object. If the base object is deltified, it must be
converted to canonical form first. Each instruction appends more and
more data to the target object until it's complete. There are two
supported instructions so far: one for copying a byte range from the
source object and one for inserting new data embedded in the
instruction itself.
Each instruction has variable length. Instruction type is determined
by the seventh bit of the first octet. The following diagrams follow
the convention in RFC 1951 (Deflate compressed data format).
==== Instruction to copy from base object
+----------+---------+---------+---------+---------+-------+-------+-------+
| 1xxxxxxx | offset1 | offset2 | offset3 | offset4 | size1 | size2 | size3 |
+----------+---------+---------+---------+---------+-------+-------+-------+
This is the instruction format to copy a byte range from the source
object. It encodes the offset to copy from and the number of bytes to
copy. Offset and size are in little-endian order.
All offset and size bytes are optional. This is to reduce the
instruction size when encoding small offsets or sizes. The first seven
bits in the first octet determine which of the next seven octets are
present. If bit zero is set, offset1 is present. If bit one is set,
offset2 is present, and so on.
Note that a more compact instruction does not change offset and size
encoding. For example, if only offset2 is omitted like below, offset3
still contains bits 16-23. It does not become offset2 and contains
bits 8-15 even if it's right next to offset1.
+----------+---------+---------+
| 10000101 | offset1 | offset3 |
+----------+---------+---------+
In its most compact form, this instruction only takes up one byte
(0x80) with both offset and size omitted, which will have default
values zero. There is another exception: size zero is automatically
converted to 0x10000.
==== Instruction to add new data
+----------+============+
| 0xxxxxxx | data |
+----------+============+
This is the instruction to construct the target object without the base
object. The following data is appended to the target object. The first
seven bits of the first octet determine the size of data in
bytes. The size must be non-zero.
==== Reserved instruction
+----------+============
| 00000000 |
+----------+============
This is the instruction reserved for future expansion.
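
Putting the two instruction types together, applying an (already
inflated) delta to its base can be sketched as below. This reuses the
`decode_size` sketch from the size-encoding section; it is a simplified
reading of the format, not Git's implementation.

----
def apply_delta(base, delta):
    # Sketch: reconstruct an object from its base and its raw delta data.
    # decode_size() is the sketch from the size-encoding section above.
    base_size, pos = decode_size(delta, 0)      # size of the base object
    result_size, pos = decode_size(delta, pos)  # size of the result
    assert len(base) == base_size

    out = bytearray()
    while pos < len(delta):
        opcode = delta[pos]
        pos += 1
        if opcode & 0x80:
            # Copy from base: bits 0-3 select offset bytes, bits 4-6 size bytes.
            offset = size = 0
            for i in range(4):
                if opcode & (1 << i):
                    offset |= delta[pos] << (8 * i)
                    pos += 1
            for i in range(3):
                if opcode & (1 << (4 + i)):
                    size |= delta[pos] << (8 * i)
                    pos += 1
            if size == 0:
                size = 0x10000              # size zero means 0x10000
            out += base[offset:offset + size]
        elif opcode:
            # Insert: the next `opcode` bytes are literal data.
            out += delta[pos:pos + opcode]
            pos += opcode
        else:
            raise ValueError("instruction 0 is reserved")

    assert len(out) == result_size
    return bytes(out)
----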
== Original (version 1) pack-*.idx files have the following format:
- The header consists of 256 4-byte network byte order
integers. N-th entry of this table records the number of
objects in the corresponding pack, the first byte of whose
object name is less than or equal to N. This is called the
'first-level fan-out' table.
- The header is followed by sorted 24-byte entries, one entry
per object in the pack. Each entry is:
4-byte network byte order integer, recording where the
object is stored in the packfile as the offset from the
beginning.
one object name of the appropriate size.
- The file is concluded with a trailer:
A copy of the pack checksum at the end of the corresponding
packfile.
Index checksum of all of the above.
Pack Idx file:

        --  +--------------------------------+
fanout      | fanout[0] = 2 (for example)    |-.
table       +--------------------------------+ |
            | fanout[1]                      | |
            +--------------------------------+ |
            | fanout[2]                      | |
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
            | fanout[255] = total objects    |---.
        --  +--------------------------------+ | |
main        | offset                         | | |
index       | object name 00XXXXXXXXXXXXXXXX | | |
table       +--------------------------------+ | |
            | offset                         | | |
            | object name 00XXXXXXXXXXXXXXXX | | |
            +--------------------------------+<+ |
          .-| offset                         |   |
          | | object name 01XXXXXXXXXXXXXXXX |   |
          | +--------------------------------+   |
          | | offset                         |   |
          | | object name 01XXXXXXXXXXXXXXXX |   |
          | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~   |
          | | offset                         |   |
          | | object name FFXXXXXXXXXXXXXXXX |   |
        --| +--------------------------------+<--+
trailer   | | packfile checksum              |
          | +--------------------------------+
          | | idxfile checksum               |
          | +--------------------------------+
          .-------.
                  |
Pack file entry: <+

     packed object header:
        1-byte size extension bit (MSB)
               type (next 3 bit)
               size0 (lower 4-bit)
        n-byte sizeN (as long as MSB is set, each 7-bit)
               size0..sizeN form 4+7+7+..+7 bit integer, size0
               is the least significant part, and sizeN is the
               most significant part.

     packed object data:
        If it is not DELTA, then deflated bytes (the size above
               is the size before compression).
        If it is REF_DELTA, then
               base object name (the size above is the
               size of the delta data that follows).
               delta data, deflated.
        If it is OFS_DELTA, then
               n-byte offset (see below) interpreted as a negative
               offset from the type-byte of the header of the
               ofs-delta entry (the size above is the size of
               the delta data that follows).
               delta data, deflated.

     offset encoding:
        n bytes with MSB set in all but the last one.
        The offset is then the number constructed by
        concatenating the lower 7 bit of each byte, and
        for n >= 2 adding 2^7 + 2^14 + ... + 2^(7*(n-1))
        to the result.
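
The packed object header and the OFS_DELTA offset encoding described
above can be decoded roughly as in the following sketch (function names
ours, for illustration only).

----
def decode_object_header(buf, pos):
    # Sketch: decode the 3-bit type and the 4+7*n bit size of a packed object.
    byte = buf[pos]
    pos += 1
    obj_type = (byte >> 4) & 0x07       # 1=commit, 2=tree, 3=blob, 4=tag,
    size = byte & 0x0F                  # 6=ofs-delta, 7=ref-delta
    shift = 4
    while byte & 0x80:                  # size extension bit
        byte = buf[pos]
        pos += 1
        size |= (byte & 0x7F) << shift
        shift += 7
    return obj_type, size, pos

def decode_ofs_delta_offset(buf, pos):
    # Sketch: decode the negative relative offset of an OFS_DELTA entry.
    byte = buf[pos]
    pos += 1
    offset = byte & 0x7F
    while byte & 0x80:                  # MSB set on all bytes but the last
        byte = buf[pos]
        pos += 1
        # The "+ 1" applies the 2^7 + 2^14 + ... correction described above.
        offset = ((offset + 1) << 7) | (byte & 0x7F)
    return offset, pos
----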
== Version 2 pack-*.idx files support packs larger than 4 GiB, and
have some other reorganizations. They have the format:
- A 4-byte magic number '\377tOc' which is an unreasonable
fanout[0] value.
- A 4-byte version number (= 2)
- A 256-entry fan-out table just like v1.
- A table of sorted object names. These are packed together
without offset values to reduce the cache footprint of the
binary search for a specific object name.
- A table of 4-byte CRC32 values of the packed object data.
This is new in v2 so compressed data can be copied directly
from pack to pack during repacking without undetected
data corruption.
- A table of 4-byte offset values (in network byte order).
These are usually 31-bit pack file offsets, but large
offsets are encoded as an index into the next table with
the msbit set.
- A table of 8-byte offset entries (empty for pack files less
than 2 GiB). Pack files are organized with heavily used
objects toward the front, so most object references should
not need to refer to this table.
- The same trailer as a v1 pack file:
A copy of the pack checksum at the end of
corresponding packfile.
Index checksum of all of the above.
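
A simplified lookup in a v2 index might proceed as in the sketch below
(SHA-1 object names assumed; names are ours, and Git itself memory-maps
the file rather than slicing a byte string).

----
import struct

def idx_v2_offset(idx_data, oid, hash_len=20):
    # Sketch: find an object's pack offset in the bytes of a v2 .idx file.
    assert idx_data[:4] == b"\xfftOc"
    assert struct.unpack(">I", idx_data[4:8])[0] == 2

    fanout = struct.unpack(">256I", idx_data[8:8 + 1024])
    nr = fanout[255]
    names_at = 8 + 1024
    crc_at = names_at + nr * hash_len
    ofs_at = crc_at + nr * 4
    large_at = ofs_at + nr * 4

    # Binary search within the fanout bucket for the object's first byte.
    lo = fanout[oid[0] - 1] if oid[0] else 0
    hi = fanout[oid[0]]
    while lo < hi:
        mid = (lo + hi) // 2
        name = idx_data[names_at + mid * hash_len:names_at + (mid + 1) * hash_len]
        if name == oid:
            off, = struct.unpack(">I", idx_data[ofs_at + mid * 4:ofs_at + (mid + 1) * 4])
            if off & 0x80000000:
                # Large offset: the low 31 bits index the 8-byte offset table.
                i = off & 0x7FFFFFFF
                off, = struct.unpack(">Q", idx_data[large_at + i * 8:large_at + (i + 1) * 8])
            return off
        if name < oid:
            lo = mid + 1
        else:
            hi = mid
    return None
----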
== pack-*.rev files have the format:
- A 4-byte magic number '0x52494458' ('RIDX').
- A 4-byte version identifier (= 1).
- A 4-byte hash function identifier (= 1 for SHA-1, 2 for SHA-256).
- A table of index positions (one per packed object, num_objects in
total, each a 4-byte unsigned integer in network order), sorted by
their corresponding offsets in the packfile.
- A trailer, containing a:
checksum of the corresponding packfile, and
a checksum of all of the above.
All 4-byte numbers are in network order.
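
Reading the table back is straightforward; a sketch follows (SHA-1
assumed for the two trailing checksums, function name ours).

----
import struct

def read_rev_index(rev_data, hash_len=20):
    # Sketch: return the pack-position -> index-position table of a .rev file.
    magic, version, hash_id = struct.unpack(">III", rev_data[:12])
    assert magic == 0x52494458          # 'RIDX'
    assert version == 1
    assert hash_id in (1, 2)            # 1 = SHA-1, 2 = SHA-256

    table = rev_data[12:-2 * hash_len]  # strip header and two trailing checksums
    return struct.unpack(">%dI" % (len(table) // 4), table)

# table[pack_pos] is that object's position in the corresponding .idx file.
----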
== pack-*.mtimes files have the format:
All 4-byte numbers are in network byte order.
- A 4-byte magic number '0x4d544d45' ('MTME').
- A 4-byte version identifier (= 1).
- A 4-byte hash function identifier (= 1 for SHA-1, 2 for SHA-256).
- A table of 4-byte unsigned integers. The ith value is the
modification time (mtime) of the ith object in the corresponding
pack by lexicographic (index) order. The mtimes count standard
epoch seconds.
- A trailer, containing a checksum of the corresponding packfile,
and a checksum of all of the above (each having length according
to the specified hash function).
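
Reading the mtimes table is analogous to reading a `.rev` file; a sketch
(SHA-1 checksums assumed, function name ours):

----
import struct

def read_mtimes(mtimes_data, hash_len=20):
    # Sketch: return the per-object mtime table of a .mtimes file.
    # The ith entry is the mtime (epoch seconds) of the ith object in
    # index (lexicographic) order.
    magic, version, hash_id = struct.unpack(">III", mtimes_data[:12])
    assert magic == 0x4D544D45          # 'MTME'
    assert version == 1 and hash_id in (1, 2)

    table = mtimes_data[12:-2 * hash_len]
    return struct.unpack(">%dI" % (len(table) // 4), table)
----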
== multi-pack-index (MIDX) files have the following format:
The multi-pack-index files refer to multiple pack-files and loose objects.
In order to allow extensions that add extra data to the MIDX, we organize
the body into "chunks" and provide a lookup table at the beginning of the
body. The header includes certain length values, such as the number of packs,
the number of base MIDX files, hash lengths and types.
All 4-byte numbers are in network order.
HEADER:
4-byte signature:
The signature is: {'M', 'I', 'D', 'X'}
1-byte version number:
Git only writes or recognizes version 1.
1-byte Object Id Version
We infer the length of object IDs (OIDs) from this value:
1 => SHA-1
2 => SHA-256
If the hash type does not match the repository's hash algorithm,
the multi-pack-index file should be ignored with a warning
presented to the user.
1-byte number of "chunks"
1-byte number of base multi-pack-index files:
This value is currently always zero.
4-byte number of pack files
CHUNK LOOKUP:
(C + 1) * 12 bytes providing the chunk offsets:
First 4 bytes describe chunk id. Value 0 is a terminating label.
Other 8 bytes provide offset in current file for chunk to start.
(Chunks are provided in file-order, so you can infer the length
using the next chunk position if necessary.)
The CHUNK LOOKUP matches the table of contents from
the chunk-based file format, see linkgit:gitformat-chunk[5].
The remaining data in the body is described one chunk at a time, and
these chunks may be given in any order. Chunks are required unless
otherwise specified.
CHUNK DATA:
Packfile Names (ID: {'P', 'N', 'A', 'M'})
Stores the packfile names as concatenated, null-terminated strings.
Packfiles must be listed in lexicographic order for fast lookups by
name. This is the only chunk not guaranteed to be a multiple of four
bytes in length, so should be the last chunk for alignment reasons.
OID Fanout (ID: {'O', 'I', 'D', 'F'})
The ith entry, F[i], stores the number of OIDs with first
byte at most i. Thus F[255] stores the total
number of objects.
OID Lookup (ID: {'O', 'I', 'D', 'L'})
The OIDs for all objects in the MIDX are stored in lexicographic
order in this chunk.
Object Offsets (ID: {'O', 'O', 'F', 'F'})
Stores two 4-byte values for every object.
1: The pack-int-id for the pack storing this object.
2: The offset within the pack.
If all offsets are less than 2^32, then the large offset chunk
will not exist and offsets are stored as in IDX v1.
If there is at least one offset value larger than 2^32-1, then
the large offset chunk must exist, and offsets larger than
2^31-1 must be stored in it instead. If the large offset chunk
exists and the 31st bit is on, then removing that bit reveals
the row in the large offsets containing the 8-byte offset of
this object.
[Optional] Object Large Offsets (ID: {'L', 'O', 'F', 'F'})
8-byte offsets into large packfiles.
[Optional] Bitmap pack order (ID: {'R', 'I', 'D', 'X'})
A list of MIDX positions (one per object in the MIDX, num_objects in
total, each a 4-byte unsigned integer in network byte order), sorted
according to their relative bitmap/pseudo-pack positions.
TRAILER:
Index checksum of the above contents.
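
As a rough illustration, the header and the CHUNK LOOKUP table can be
parsed as in the sketch below (names ours; parsing of the individual
chunk payloads is omitted).

----
import struct

def parse_midx_toc(midx_data):
    # Sketch: parse the MIDX header and chunk lookup table, returning the
    # number of packs and the byte range covered by each chunk.
    assert midx_data[:4] == b"MIDX"
    version, oid_version, num_chunks, num_base = midx_data[4:8]
    num_packs, = struct.unpack(">I", midx_data[8:12])
    assert version == 1 and num_base == 0

    # (C + 1) rows of 12 bytes: a 4-byte chunk id plus an 8-byte offset;
    # the final row has id 0 and marks where the last chunk ends.
    rows = []
    for i in range(num_chunks + 1):
        at = 12 + i * 12
        chunk_id, offset = struct.unpack(">4sQ", midx_data[at:at + 12])
        rows.append((chunk_id, offset))

    chunks = {}
    for (chunk_id, start), (_, end) in zip(rows, rows[1:]):
        chunks[chunk_id] = (start, end)
    return num_packs, chunks

# e.g. chunks[b"OIDF"], chunks[b"OIDL"], chunks[b"OOFF"] per the chunk IDs above.
----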
== multi-pack-index reverse indexes
Similar to the pack-based reverse index, the multi-pack index can also
be used to generate a reverse index.
Instead of mapping between offset, pack-, and index position, this
reverse index maps between an object's position within the MIDX, and
that object's position within a pseudo-pack that the MIDX describes
(i.e., the ith entry of the multi-pack reverse index holds the MIDX
position of the ith object in pseudo-pack order).
To clarify the difference between these orderings, consider a multi-pack
reachability bitmap (which does not yet exist, but is what we are
building towards here). Each bit needs to correspond to an object in the
MIDX, and so we need an efficient mapping from bit position to MIDX
position.
One solution is to let bits occupy the same position in the oid-sorted
index stored by the MIDX. But because oids are effectively random, their
resulting reachability bitmaps would have no locality, and thus compress
poorly. (This is the reason that single-pack bitmaps use the pack
ordering, and not the .idx ordering, for the same purpose.)
So we'd like to define an ordering for the whole MIDX based around
pack ordering, which has far better locality (and thus compresses more
efficiently). We can think of a pseudo-pack created by the concatenation
of all of the packs in the MIDX. E.g., if we had a MIDX with three packs
(a, b, c), with 10, 15, and 20 objects respectively, we can imagine an
ordering of the objects like:
|a,0|a,1|...|a,9|b,0|b,1|...|b,14|c,0|c,1|...|c,19|
where the ordering of the packs is defined by the MIDX's pack list,
and then the ordering of objects within each pack is the same as the
order in the actual packfile.
Given the list of packs and their counts of objects, you can
naïvely reconstruct that pseudo-pack ordering (e.g., the object at
position 27 must be (c,1) because packs "a" and "b" consumed 25 of the
slots). But there's a catch. Objects may be duplicated between packs, in
which case the MIDX only stores one pointer to the object (and thus we'd
want only one slot in the bitmap).
Callers could handle duplicates themselves by reading objects in order
of their bit-position, but that's linear in the number of objects, and
much too expensive for ordinary bitmap lookups. Building a reverse index
solves this, since it is the logical inverse of the index, and that
index has already removed duplicates. But, building a reverse index on
the fly can be expensive. Since we already have an on-disk format for
pack-based reverse indexes, let's reuse it for the MIDX's pseudo-pack,
too.
Objects from the MIDX are ordered as follows to string together the
pseudo-pack. Let `pack(o)` return the pack from which `o` was selected
by the MIDX, and define an ordering of packs based on their numeric ID
(as stored by the MIDX). Let `offset(o)` return the object offset of `o`
within `pack(o)`. Then, compare `o1` and `o2` as follows:
- If one of `pack(o1)` and `pack(o2)` is preferred and the other
is not, then the preferred one sorts first.
+
(This is a detail that allows the MIDX bitmap to determine which
pack should be used by the pack-reuse mechanism, since it can ask
the MIDX for the pack containing the object at bit position 0).
- If `pack(o1) ≠ pack(o2)`, then sort the two objects in descending
order based on the pack ID.
- Otherwise, `pack(o1) = pack(o2)`, and the objects are sorted in
pack-order (i.e., `o1` sorts ahead of `o2` exactly when `offset(o1)
< offset(o2)`).
In short, a MIDX's pseudo-pack is the de-duplicated concatenation of
objects in packs stored by the MIDX, laid out in pack order, and the
packs arranged in MIDX order (with the preferred pack coming first).
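
For illustration, the comparison rules above can be written down
directly; the tuple layout below is our own, standing in for the
preferred-pack flag, pack ID, and pack offset that the MIDX knows for
each object.

----
from functools import cmp_to_key

def pseudo_pack_cmp(o1, o2):
    # Sketch: order two objects per the rules above. Each object is a
    # (preferred, pack_id, offset) tuple, where `preferred` is True for
    # objects selected from the preferred pack.
    pref1, id1, off1 = o1
    pref2, id2, off2 = o2
    if pref1 != pref2:
        return -1 if pref1 else 1         # the preferred pack sorts first
    if id1 != id2:
        return -1 if id1 > id2 else 1     # by pack ID, as described above
    return (off1 > off2) - (off1 < off2)  # otherwise, pack (offset) order

# objects.sort(key=cmp_to_key(pseudo_pack_cmp)) lays out the pseudo-pack order.
----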
The MIDX's reverse index is stored in the optional 'RIDX' chunk within
the MIDX itself.
== cruft packs
The cruft packs feature offers an alternative to Git's traditional mechanism of
removing unreachable objects. This document provides an overview of Git's
pruning mechanism, and how a cruft pack can be used instead to accomplish the
same.
=== Background
To remove unreachable objects from your repository, Git offers `git repack -Ad`
(see linkgit:git-repack[1]). Quoting from the documentation:
----
[...] unreachable objects in a previous pack become loose, unpacked objects,
instead of being left in the old pack. [...] loose unreachable objects will be
pruned according to normal expiry rules with the next 'git gc' invocation.
----
Unreachable objects aren't removed immediately, since doing so could race with
an incoming push which may reference an object which is about to be deleted.
Instead, those unreachable objects are stored as loose objects and stay that way
until they are older than the expiration window, at which point they are removed
by linkgit:git-prune[1].
Git must store these unreachable objects loose in order to keep track of their
per-object mtimes. If these unreachable objects were written into one big pack,
then either freshening that pack (because an object contained within it was
re-written) or creating a new pack of unreachable objects would cause the pack's
mtime to get updated, and the objects within it would never leave the expiration
window. Instead, objects are stored loose in order to keep track of the
individual object mtimes and avoid a situation where all cruft objects are
freshened at once.
This can lead to undesirable situations when a repository contains many
unreachable objects which have not yet left the grace period. Having large
directories in the shards of `.git/objects` can lead to decreased performance in
the repository. But given enough unreachable objects, this can lead to inode
starvation and degrade the performance of the whole system. Since we
can never pack those objects, these repositories often take up a large amount of
disk space, since we can only zlib compress them, but not store them in delta
chains.
=== Cruft packs
A cruft pack eliminates the need for storing unreachable objects in a loose
state by including the per-object mtimes in a separate file alongside a single
pack containing all loose objects.
A cruft pack is written by `git repack --cruft` when generating a new pack,
using linkgit:git-pack-objects[1]'s `--cruft` option. Note that
`git repack --cruft` is a classic all-into-one repack, meaning that everything
in the resulting pack is reachable, and everything else is unreachable. Once
written, the `--cruft` option instructs `git repack` to generate another pack
containing only objects not packed in the previous step (which equates to
packing all unreachable objects together). This progresses as follows:
1. Enumerate every object, marking any object which is (a) not contained in a
kept-pack, and (b) whose mtime is within the grace period as a traversal
tip.
2. Perform a reachability traversal based on the tips gathered in the previous
step, adding every object along the way to the pack.
3. Write the pack out, along with a `.mtimes` file that records the per-object
timestamps.
This mode is invoked internally by linkgit:git-repack[1] when instructed to
write a cruft pack. Crucially, the set of in-core kept packs is exactly the set
of packs which will not be deleted by the repack; in other words, they contain
all of the repository's reachable objects.
When a repository already has a cruft pack, `git repack --cruft` typically only
adds objects to it. An exception to this is when `git repack` is given the
`--cruft-expiration` option, which allows the generated cruft pack to omit
expired objects instead of waiting for linkgit:git-gc[1] to expire those objects
later on.
It is linkgit:git-gc[1] that is typically responsible for removing expired
unreachable objects.
=== Caution for mixed-version environments
Repositories that have cruft packs in them will continue to work with any older
version of Git. Note, however, that previous versions of Git which do not
understand the `.mtimes` file will use the cruft pack's mtime as the mtime for
all of the objects in it. In other words, do not expect older (pre-cruft pack)
versions of Git to interpret or even read the contents of the `.mtimes` file.
Note that having mixed versions of Git GC-ing the same repository can lead to
unreachable objects never being completely pruned. This can happen under the
following circumstances:
- An older version of Git running GC explodes the contents of an existing
cruft pack loose, using the cruft pack's mtime.
- A newer version running GC collects those loose objects into a cruft pack,
where the `.mtimes` file reflects the loose objects' actual mtimes, but the
cruft pack's mtime is "now".
Repeating this process will lead to unreachable objects not getting pruned as a
result of repeatedly resetting the objects' mtimes to the present time.
If you are GC-ing repositories in a mixed version environment, consider omitting
the `--cruft` option when using linkgit:git-repack[1] and linkgit:git-gc[1], and
setting the `gc.cruftPacks` configuration to "false" until all writers
understand cruft packs.
=== Alternatives
Notable alternatives to this design include:
- The location of the per-object mtime data, and
- Storing unreachable objects in multiple cruft packs.
On the location of mtime data, a new auxiliary file tied to the pack was chosen
to avoid complicating the `.idx` format. If the `.idx` format were ever to gain
support for optional chunks of data, it may make sense to consolidate the
`.mtimes` format into the `.idx` itself.
Storing unreachable objects among multiple cruft packs (e.g., creating a new
cruft pack during each repacking operation including only unreachable objects
which aren't already stored in an earlier cruft pack) is significantly more
complicated to construct, and so isn't pursued here. The obvious drawback to
the current implementation is that the entire cruft pack must be re-written from
scratch.
GIT
---
Part of the linkgit:git[1] suite