Commit graph

71464 commits

Kristoffer Haugsbakk b1688ea02d grep: die gracefully when outside repository
Die gracefully when `git grep --no-index` is run outside of a Git
repository and the path is outside the directory tree.

If you are not in a Git repository and say:

    git grep --no-index search ..

you trigger a `BUG`:

    BUG: environment.c:213: git environment hasn't been setup
    Aborted (core dumped)

This happens because `..` is a valid path, which is treated as a
pathspec. The pathspec machinery then figures out that it is not in the
current directory tree, and the `BUG` is triggered when it tries to
advise the user that the path is not in the current (non-existent)
repository.

Reported-by: ks1322 ks1322 <ks1322@gmail.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-20 11:06:45 -07:00
Matthew McClain 10c89a02b0 git-p4 shouldn't attempt to store symlinks in LFS
git-p4.py would attempt to put a symlink in LFS if its file extension
matched git-p4.largeFileExtensions.

Git LFS doesn't store symlinks because the smudge/clean filters don't
handle them: symlinks are never passed to the filter process or to the
smudge/clean filters, and that could not change without altering the
protocol or the command-line interface. Unless Git learned how to send
them to the filters, Git LFS would have a hard time doing anything
useful with them.

Git LFS's goal is to move large files out of the repository history, and
symlinks are functionally limited to 4 KiB or a similar size on most
systems.
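
For illustration (configuration values made up), a setup along these
lines is the kind that previously routed a symlink with a matching
name, e.g. "installer.iso", into LFS; with this change such symlinks
are stored as ordinary symlinks:

    git config git-p4.largeFileSystem GitLFS
    git config git-p4.largeFileExtensions iso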

Signed-off-by: Matthew McClain <mmcclain@noprivs.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-19 10:57:44 -07:00
Dorcas AnonoLitunya 5abb758118 t7601: use "test_path_is_file" etc. instead of "test -f"
Some tests in t7601 use "test -f" and "test ! -f" to see if a path
exists or is missing.

Use the test_path_is_file and test_path_is_missing helper functions to
make the intent of these tests clearer. This especially matters for the
"missing" case because "test ! -f F" will be happy if "F" exists as a
directory, but the intent of the test is that "F" should not exist, even
as a directory. The updated code expresses this better.
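
As a schematic example (path names made up), the conversion looks
like this; note that test_path_is_missing also complains if the path
exists as a directory, unlike "test ! -f":

    test_path_is_file expected-file &&
    test_path_is_missing removed-file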

Signed-off-by: Dorcas AnonoLitunya <anonolitunya@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-18 16:57:49 -07:00
Junio C Hamano 14d569b1a7 am: align placeholder for --whitespace option with apply
`git am` passes the value given to its `--whitespace` option through
to the underlying `git apply`, and the value is called <action> over
there.  Fix the documentation for the command that calls the value
<option> to say <action> instead.

Note that the option help given by `git am -h` already calls the
value <action>, so there is no need to make a matching change there.
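
For example (mailbox name made up), the value is one of the <action>
settings understood by `git apply --whitespace`, such as "fix":

    git am --whitespace=fix patches.mbox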

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-18 16:35:44 -07:00
Junio C Hamano 813d9a9188 The nineteenth batch
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-18 13:25:42 -07:00
Junio C Hamano 7906b5c957 Merge branch 'jc/merge-ort-attr-index-fix'
Fix "git merge-tree" to stop segfaulting when the --attr-source
option is used.

* jc/merge-ort-attr-index-fix:
  merge-ort: initialize repo in index state
2023-10-18 13:25:42 -07:00
Junio C Hamano cc7d7183f0 Merge branch 'sn/cat-file-doc-update'
"git cat-file" documentation updates.

* sn/cat-file-doc-update:
  doc/cat-file: make synopsis and description less confusing
2023-10-18 13:25:41 -07:00
Junio C Hamano 0bc6bff9d5 Merge branch 'xz/commit-title-soft-limit-doc'
Doc update.

* xz/commit-title-soft-limit-doc:
  doc: correct the 50 characters soft limit (+)
2023-10-18 13:25:41 -07:00
Junio C Hamano 79861babe2 Merge branch 'tb/repack-max-cruft-size'
"git repack" learned "--max-cruft-size" to prevent cruft packs from
growing without bounds.

* tb/repack-max-cruft-size:
  repack: free existing_cruft array after use
  builtin/repack.c: avoid making cruft packs preferred
  builtin/repack.c: implement support for `--max-cruft-size`
  builtin/repack.c: parse `--max-pack-size` with OPT_MAGNITUDE
  t7700: split cruft-related tests to t7704
2023-10-18 13:25:41 -07:00
Junio C Hamano a060705d94 commit: do not use cryptic "new_index" in end-user facing messages
These error messages say "new_index" as if that spelling has some
significance to the end users (e.g. the file "$GIT_DIR/new_index"
has some issues), but that is not the case at all.  The i18n folks
were made to include the word literally in the translated messages,
which was not a good idea at all.  Spell it "new index", as we are
just telling the users that we failed to create a new index file.
The term is expected to be translated to the end-users' languages,
not left as if it were a literal file name.

This dates all the way back to the first re-implementation of the "git
commit" command in C (the scripted version did not have such wording
in its error messages), in f5bbc322 (Port git commit to C.,
2007-11-08).

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-17 22:09:54 -07:00
Naomi Ibe 48399e9cf0 builtin/add.c: clean up die() messages
As described in the CodingGuidelines document, a single line message
given to die() and its friends should not capitalize its first word,
and should not end with a full stop.

Signed-off-by: Naomi Ibe <naomi.ibeh69@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-17 12:41:55 -07:00
Patrick Steinhardt ca3285dd69 doc/git-repack: don't mention nonexistent "--unpacked" option
The documentation for geometric repacking mentions a "--unpacked" option
that supposedly changes how loose objects are rolled up. This option has
never existed, and the implied behaviour, namely to include all unpacked
objects into the resulting packfile, is in fact the default behaviour.

Correct the documentation to not mention this option.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-16 14:21:59 -07:00
Patrick Steinhardt e9cc3a027b doc/git-repack: fix syntax for -g shorthand option
The `-g` switch is a shorthand for `--geometric=` and allows the user to
specify the geometric factor. The documentation is wrong, though, and
indicates that the syntax for the shorthand is `-g=<factor>`. In fact,
the option must be specified without the equals sign, as `-g<factor>`.

Fix the syntax accordingly.
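
For example, with an illustrative factor of 2, these two invocations
are equivalent, whereas `-g=2` would try to parse "=2" as the factor
and fail:

    git repack -d -g2
    git repack -d --geometric=2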

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-16 14:21:59 -07:00
Jeff King 7538f9d89b t5319: make corrupted large-offset test more robust
The test t5319.88 ("reader bounds-checks large offset table") can fail
intermittently. The failure mode looks like this:

  1. An earlier test sets up "objects64", a directory that can be used
     to produce a midx with a corrupted large-offsets table. To get the
     large offsets, it corrupts the normal ".idx" file to have a fake
     large offset, and then builds a midx from that.

     That midx now has a large offset table, which is what we want. But
     we also have a .idx on disk that has a corrupted entry. We'll call
     the object with the corrupted large-offset "X".

  2. In t5319.88, we further corrupt the midx by reducing the size of
     the large-offset chunk (because our goal is to make sure we do not
     do an out-of-bounds read on it).

  3. We then enumerate all of the objects with "cat-file --batch-check
     --batch-all-objects", expecting to see a complaint when we try to
     show object X. We use --batch-all-objects because our objects64
     repo doesn't actually have any refs (but if we check them all, one
     of them will be the failing one). The default batch-check format
     includes %(objecttype) and %(objectsize), both of which require us
     to access the actual pack data (and thus require looking at the
     offset).

  4a. Usually, this succeeds. We try to output object X, do a lookup via
      the midx for the type/size lookup, and run into the corrupt
      large-offset table.

  4b. But sometimes we hit a different error. If another object points
      to X as a delta base, then trying to find the type of that object
      requires walking the delta chain to the base entry (since only the
      base has the concrete type; deltas themselves are either OFS_DELTA
      or REF_DELTA).

      Normally this would not require separate offset lookups at all, as
      deltas are usually stored as OFS_DELTA, specifying the relative
      offset to the base. But the corrupt idx created in step 1 is done
      directly with "git pack-objects" and does not pass the
      --delta-base-offset option, meaning we have REF_DELTA entries!
      Those do have to consult an index to find the location of the base
      object, and they use the pack .idx to do this. The same pack .idx
      that we know is corrupted from step 1!

      Git does notice the error, but it does so by seeing the corrupt
      .idx file, not the corrupt midx file, and the error it reports is
      different, causing the test to fail.

The set of objects created in the test is deterministic. But the delta
selection seems not to be (which is not too surprising, as it is
multi-threaded). I have seen the failure in Windows CI but haven't
reproduced it locally (not even with --stress). Re-running a failed
Windows CI job tends to work. But when I download and examine the trash
directory from a failed run, it shows a different set of deltas than I
get locally. But the exact source of non-determinism isn't that
important; our test should be robust against any order.

There are a few options to fix this:

  a. It would be OK for the "objects64" setup to "unbreak" the .idx file
     after generating the midx. But then it would be hard for subsequent
     tests to reuse it, since it is the corrupted idx that forces the
     midx to have a large offset table.

  b. The "objects64" setup could use --delta-base-offset. This would fix
     our problem, but earlier tests have many hard-coded offsets. Using
     OFS_DELTA would change the locations of objects in the pack (this
     might even be OK because I think most of the offsets are within the
     .idx file, but it seems brittle and I'm afraid to touch it).

  c. Our cat-file output is in oid order by default. Since we store
     bases before deltas, if we went in pack order (using the
     "--unordered" flag), we'd always see our corrupt X before any delta
     which depends on it. But using "--unordered" means we skip the midx
     entirely. That makes sense, since it is just enumerating all of
     the packs, using the offsets found in their .idx files directly.
     So it doesn't work for our test.

  d. We could ask directly about object X, rather than enumerating all
     of them. But that requires further hard-coding of the oid (both
     sha1 and sha256) of object X. I'd prefer not to introduce more
     brittleness.

  e. We can use a --batch-check format that looks at the pack data, but
     doesn't have to chase deltas. The problem in this case is
     %(objecttype), which has to walk to the base. But %(objectsize)
     does not; we can get the value directly from the delta itself.
     Another option would be %(deltabase), where we report the REF_DELTA
     name but don't look at its data.

I've gone with option (e) here. It's kind of subtle, but it's simple and
has no side effects.
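
Concretely, the test ends up using a --batch-check format along these
lines (the exact format in the test may differ); avoiding
%(objecttype) means no REF_DELTA chain ever has to be walked back to
its base, and %(deltabase) would work equally well:

    git cat-file --batch-all-objects \
        --batch-check='%(objectname) %(objectsize)'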

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-14 10:17:25 -07:00
Junio C Hamano a9ecda2788 The eighteenth batch
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-13 14:18:29 -07:00
Junio C Hamano 2920971a7f Merge branch 'jk/decoration-and-other-leak-fixes'
Leakfix.

* jk/decoration-and-other-leak-fixes:
  daemon: free listen_addr before returning
  revision: clear decoration structs during release_revisions()
  decorate: add clear_decoration() function
2023-10-13 14:18:28 -07:00
Junio C Hamano 09dcbb486d Merge branch 'ar/diff-index-merge-base-fix'
"git diff --merge-base X other args..." insisted that X must be a
commit and errored out when given an annotated tag that peels to a
commit, but we only need it to be a committish.  This has been
corrected.

* ar/diff-index-merge-base-fix:
  diff: fix --merge-base with annotated tags
2023-10-13 14:18:28 -07:00
Junio C Hamano b32f5b6b34 Merge branch 'js/submodule-fix-misuse-of-path-and-name'
In .gitmodules files, submodules are keyed by their names, and the
path to the submodule whose name is $name is specified by the
submodule.$name.path variable.  There were a few codepaths that
mixed the name and path up when consulting the submodule database,
which have been corrected.  It took a long time for these bugs to be
found, as the name of a submodule is initially the same as its path, and
the problem does not surface until it is moved to a different path,
which apparently happens very rarely.

* js/submodule-fix-misuse-of-path-and-name:
  t7420: test that we correctly handle renamed submodules
  t7419: test that we correctly handle renamed submodules
  t7419, t7420: use test_cmp_config instead of grepping .gitmodules
  t7419: actually test the branch switching
  submodule--helper: return error from set-url when modifying failed
  submodule--helper: use submodule_from_path in set-{url,branch}
2023-10-13 14:18:28 -07:00
Junio C Hamano a45eddec40 Merge branch 'jk/commit-graph-leak-fixes'
Leakfix.

* jk/commit-graph-leak-fixes:
  commit-graph: clear oidset after finishing write
  commit-graph: free write-context base_graph_name during cleanup
  commit-graph: free write-context entries before overwriting
  commit-graph: free graph struct that was not added to chain
  commit-graph: delay base_graph assignment in add_graph_to_chain()
  commit-graph: free all elements of graph chain
  commit-graph: move slab-clearing to close_commit_graph()
  merge: free result of repo_get_merge_bases()
  commit-reach: free temporary list in get_octopus_merge_bases()
  t6700: mark test as leak-free
2023-10-13 14:18:28 -07:00
Junio C Hamano c75e91499b Merge branch 'la/trailer-test-and-doc-updates'
Test coverage for trailers has been improved.

* la/trailer-test-and-doc-updates:
  trailer doc: <token> is a <key> or <keyAlias>, not both
  trailer doc: separator within key suppresses default separator
  trailer doc: emphasize the effect of configuration variables
  trailer --unfold help: prefer "reformat" over "join"
  trailer --parse docs: add explanation for its usefulness
  trailer --only-input: prefer "configuration variables" over "rules"
  trailer --parse help: expose aliased options
  trailer --no-divider help: describe usual "---" meaning
  trailer: trailer location is a place, not an action
  trailer doc: narrow down scope of --where and related flags
  trailer: add tests to check defaulting behavior with --no-* flags
  trailer test description: this tests --where=after, not --where=before
  trailer tests: make test cases self-contained
2023-10-13 14:18:27 -07:00
Junio C Hamano e56b9edf22 Merge branch 'ds/mailmap-entry-update'
Update mailmap entry for Derrick.

* ds/mailmap-entry-update:
  mailmap: change primary address for Derrick Stolee
2023-10-13 14:18:27 -07:00
Jason Hatton 5143ac07b1 Prevent git from rehashing 4GiB files
The index stores file sizes using a uint32_t. This causes any file
whose size is a multiple of 2^32 bytes to have a cached file size of
zero. Zero is a special value used by the racily-clean check, so git
ends up rehashing every such file each time git status or git commit
is run.

This patch mitigates the problem by making all files whose size is a
multiple of 2^32 appear to have a size of 1<<31 instead of zero.

The value of 1<<31 is chosen to keep it as far away from zero
as possible to help prevent things getting mixed up with unpatched
versions of git.

For example, suppose a 2^32-byte file is in the index of a patched
git. Patched git would record its size as 2^31 in the cache. An
unpatched git would see that the file has changed in size and rehash
it, which is safe. For an old git to miss a change, the file would
have to grow or shrink by exactly 2^31 bytes while retaining its
ctime, mtime, and all other attributes.

This patch does not change the behavior of any file that is not
an exact multiple of 2^32.
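
A quick illustration of the truncation, using shell arithmetic purely
for illustration:

    echo $(( 4294967296 & 0xFFFFFFFF ))   # 2^32 truncates to 0 in a uint32_t
    echo $(( 1 << 31 ))                   # 2147483648, the substitute value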

Signed-off-by: Jason D. Hatton <jhatton@globalfinishing.com>
Signed-off-by: brian m. carlson <bk2204@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-13 13:33:35 -07:00
brian m. carlson 678eb55f5d t: add a test helper to truncate files
In a future commit, we're going to work with some large files which will
be at least 4 GiB in size.  To take advantage of the sparseness
functionality on most Unix systems and avoid running the system out of
disk, it would be convenient to use truncate(2) to simply create a
sparse file of sufficient size.

However, the GNU truncate(1) utility isn't portable, so let's write a
tiny test helper that does the work for us.
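
For reference, the non-portable invocation the helper stands in for
looks like this (GNU coreutils syntax; the helper's own interface is
whatever the patch defines):

    truncate -s 4G big-file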

Signed-off-by: brian m. carlson <bk2204@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-13 13:33:35 -07:00
John Cai 9f9c40cf34 attr: add attr.tree for setting the treeish to read attributes from
44451a2 (attr: teach "--attr-source=<tree>" global option to "git",
2023-05-06) provided the ability to pass in a treeish as the attr
source. In the context of serving Git repositories as bare repos, as we
do at GitLab, it would be easier to point --attr-source to HEAD for all
commands by setting it once.

Add a new config attr.tree that allows this.
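
For example (attribute and path made up), a bare hosting setup could
then set the source once instead of passing --attr-source on every
invocation:

    git config attr.tree HEAD
    git check-attr diff -- src/main.c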

Signed-off-by: John Cai <johncai86@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-13 11:43:29 -07:00
John Cai 2386535511 attr: read attributes from HEAD when bare repo
The motivation for 44451a2e5e (attr: teach "--attr-source=<tree>" global
option to "git", 2023-05-06) was to make it possible to use
gitattributes with bare repositories.

To make it easier to read gitattributes in bare repositories, however,
let's just make HEAD:.gitattributes the default. This is in line with
how mailmap works, 8c473cecfd (mailmap: default mailmap.blob in bare
repositories, 2012-12-13).
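
In other words (path made up), inside a bare repository this now
consults HEAD:.gitattributes without any extra configuration:

    git check-attr -a -- Documentation/Makefile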

Signed-off-by: John Cai <johncai86@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-13 11:43:29 -07:00
Junio C Hamano 59167d7d09 The seventeenth batch
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-12 12:18:27 -07:00
Junio C Hamano 4ae4c70577 Merge branch 'js/ci-coverity'
GitHub CI workflow has learned to trigger Coverity check.

* js/ci-coverity:
  coverity: detect and report when the token or project is incorrect
  coverity: allow running on macOS
  coverity: support building on Windows
  coverity: allow overriding the Coverity project
  coverity: cache the Coverity Build Tool
  ci: add a GitHub workflow to submit Coverity scans
2023-10-12 12:18:27 -07:00
Junio C Hamano c70e7a3cfd Merge branch 'jm/git-status-submodule-states-docfix'
Docfix.

* jm/git-status-submodule-states-docfix:
  git-status.txt: fix minor asciidoc format issue
2023-10-12 12:18:26 -07:00
Junio C Hamano 6e47cfcffc Merge branch 'rs/parse-opt-ctx-cleanup'
Code clean-up.

* rs/parse-opt-ctx-cleanup:
  parse-options: drop unused parse_opt_ctx_t member
2023-10-12 12:18:26 -07:00
Derrick Stolee 6e5457d8c7 mailmap: change primary address for Derrick Stolee
The previous primary address is no longer valid.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-12 10:59:36 -07:00
Junio C Hamano 5b2424b658 grep: -f <path> is relative to $cwd
Just like OPT_FILENAME() does, "git grep -f <path>" should treat
<path> as relative to the original $cwd by paying attention to the
prefix the command is given.
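
As an illustration (directory and file names made up), running the
command from a subdirectory now reads the pattern file relative to
that subdirectory rather than the top of the working tree:

    cd t/ &&
    git grep -f patterns.txt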

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-12 10:41:59 -07:00
Junio C Hamano d9b6634589 stash: be careful what we store
"git stash store" is meant to store what "git stash create"
produces, as these two are implementation details of the end-user
facing "git stash save" command.  Even though it is clearly
documented as such, users would try silly things like "git stash
store HEAD" to render their stash unusable.

Worse yet, because "git stash drop" does not allow such a stash
entry to be removed, "git stash clear" would be the only way to
recover from such a mishap.  Reuse the logic that allows "drop" to
refrain from working on such a stash entry to teach "store" to avoid
storing an object that is not a stash entry in the first place.
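
For reference, the supported pairing of the two subcommands looks
like this (message made up); with this change, handing "store"
something that is not a stash-like commit, such as a raw HEAD, is
refused instead of rendering the stash unusable:

    stash=$(git stash create "work in progress") &&
    git stash store -m "work in progress" "$stash"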

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-11 16:27:30 -07:00
Junio C Hamano b182658e3e merge: introduce {copy|clear}_merge_options()
When mostly the same set of options is to be used to perform
multiple merges, it is handy to set up a single template instance of
the merge_options structure and copy it for each merge.  We saw such
a use recently in "git merge-tree".

Let's make the pattern official by introducing copy_merge_options()
as a supported way to make a copy of the structure, and also give
clear_merge_options() to release any resources held by a copied
instance.  Currently we only make a shallow copy, so the former is a
mere structure assignment while the latter is a no-op, but this may
change in the future as the members of merge_options structure
evolve.

Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-11 13:37:47 -07:00
Junio C Hamano aab89be2eb The sixteenth batch
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-10 11:39:15 -07:00
Junio C Hamano 1fdedb7c7d Merge branch 'cc/repack-sift-filtered-objects-to-separate-pack'
"git repack" machinery learns to pay attention to the "--filter="
option.

* cc/repack-sift-filtered-objects-to-separate-pack:
  gc: add `gc.repackFilterTo` config option
  repack: implement `--filter-to` for storing filtered out objects
  gc: add `gc.repackFilter` config option
  repack: add `--filter=<filter-spec>` option
  pack-bitmap-write: rebuild using new bitmap when remapping
  repack: refactor finding pack prefix
  repack: refactor finishing pack-objects command
  t/helper: add 'find-pack' test-tool
  pack-objects: allow `--filter` without `--stdout`
2023-10-10 11:39:15 -07:00
Junio C Hamano afb0d0880a Merge branch 'ds/init-diffstat-width'
Code clean-up.

* ds/init-diffstat-width:
  diff --stat: set the width defaults in a helper function
2023-10-10 11:39:14 -07:00
Junio C Hamano a7a2d10421 Merge branch 'cw/prelim-cleanup'
Shuffle some bits across headers and sources to prepare for the
libification effort.

* cw/prelim-cleanup:
  parse: separate out parsing functions from config.h
  config: correct bad boolean env value error message
  wrapper: reduce scope of remove_or_warn()
  hex-ll: separate out non-hash-algo functions
2023-10-10 11:39:14 -07:00
Junio C Hamano 3df51ea0a5 Merge branch 'eb/limit-bulk-checkin-to-blobs'
The "streaming" interface used for bulk-checkin codepath has been
narrowed to take only blob objects for now, with no real loss of
functionality.

* eb/limit-bulk-checkin-to-blobs:
  bulk-checkin: only support blobs in index_bulk_checkin
2023-10-10 11:39:14 -07:00
Patrick Steinhardt 8b3aa36f5a doc/git-worktree: mention "refs/rewritten" as per-worktree refs
Some references are special in the context of worktrees as they are
considered to be per-worktree instead of shared across all of the
worktrees. Most importantly, this includes "refs/worktree/", which has
explicitly been designed so that users can create per-worktree refs.
But there are also special references that have an associated meaning
like "refs/bisect/", which is used to track state of git-bisect(1).

These special per-worktree references are documented in git-worktree(1),
but one instance is missing. In a9be29c981 (sequencer: make refs
generated by the `label` command worktree-local, 2018-04-25), we have
converted "refs/rewritten/" to be a per-worktree reference as well.
These references are used by our sequencer infrastructure to generate
labels for rebased commits. So in order to allow for multiple concurrent
rebases to happen in different worktrees, these references need to be
tracked per worktree.

We forgot to update our documentation to mention these new per-worktree
references, which is fixed by this patch.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-10 09:23:16 -07:00
Jeff King ca06f0fe5d chunk-format: drop pair_chunk_unsafe()
There are no callers left, and we don't want anybody to add new ones (they
should use the not-unsafe version instead). So let's drop the function.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:02 -07:00
Jeff King 12192a9db9 commit-graph: detect out-of-order BIDX offsets
The BIDX chunk tells us the offsets at which each commit's Bloom filters
can be found in the BDAT chunk. We compute the length of each filter by
checking the offsets of neighbors and subtracting them.

If the offsets are out of order, then we'll get a negative length, which
we then store as a very large unsigned value. This can cause us to read
out-of-bounds memory, as we access the hash data modulo "filter->len *
BITS_PER_WORD".

We can easily detect this case when loading the individual filters.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:02 -07:00
Jeff King 581e0f8b18 commit-graph: check bounds when accessing BIDX chunk
We load the bloom_filter_indexes chunk using pair_chunk(), so we have no
idea how big it is. This can lead to out-of-bounds reads if it is
smaller than expected, since we index it based on the number of commits
found elsewhere in the graph file.

We can check the chunk size up front, like we do for CDAT and other
chunks with one fixed-size record per commit.

The test case demonstrates the problem. It actually won't segfault,
because we end up reading random data from the follow-on chunk (BDAT in
this case), and the bounds checks added in the previous patch complain.
But this is by no means assured, and you can craft a commit-graph file
with BIDX at the end (or a smaller BDAT) that does segfault.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:01 -07:00
Jeff King 920f400e91 commit-graph: check bounds when accessing BDAT chunk
When loading Bloom filters from a commit-graph file, we use the offset
values in the BIDX chunk to index into the memory mapped for the BDAT
chunk. But since we don't record how big the BDAT chunk is, we just
trust that the BIDX offsets won't cause us to read outside of the chunk
memory. A corrupted or malicious commit-graph file will cause us to
segfault (in practice this isn't a very interesting attack, since
commit-graph files are local-only, and the worst case is an
out-of-bounds read).

We can't fix this by checking the chunk size during parsing, since the
data in the BDAT chunk doesn't have a fixed size (that's why we need the
BIDX in the first place). So we'll fix it in two parts:

  1. Record the BDAT chunk size during parsing, and then later check
     that the BIDX offsets we look up are within bounds.

  2. Because the offsets are relative to the end of the BDAT header, we
     must also make sure that the BDAT chunk is at least as large as the
     expected header size. Otherwise, we overflow when trying to move
     past the header, even for an offset of "0". We can check this
     early, during the parsing stage.

The error messages are rather verbose, but since this is not something
you'd expect to see outside of severe bugs or corruption, it makes sense
to err on the side of too many details. Sadly we can't mention the
filename during the chunk-parsing stage, as we haven't set g->filename
at this point, nor passed it down through the stack.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:01 -07:00
Jeff King ee6a792412 commit-graph: bounds-check generation overflow chunk
If the generation entry in a commit-graph doesn't fit, we instead insert
an offset into a generation overflow chunk. But since we don't record
the size of the chunk, we may read outside the chunk if the offset we
find on disk is malicious or corrupted.

We can't check the size of the chunk up-front; it will vary based on how
many entries need overflow. So instead, we'll do a bounds-check before
accessing the chunk memory. Unfortunately there is no error-return from
this function, so we'll just have to die(), which is what it does for
other forms of corruption.

As with other cases, we can drop the st_mult() call, since we know our
bounds-checked value will fit within a size_t.

Before this patch, the test here actually "works" because we read
garbage data from the next chunk. And since that garbage data happens
not to provide a generation number which changes the output, it appears
to work. We could construct a case that actually segfaults or produces
wrong output, but it would be a bit tricky. For our purposes it's
sufficient to check that we've detected the bounds error.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:01 -07:00
Jeff King 4a3c34662b commit-graph: check size of generations chunk
We neither check nor record the size of the generations chunk we parse
from a commit-graph file. This should have one uint32_t for each commit
in the file; if it is smaller (due to corruption, etc), we may read
outside the mapped memory.

The included test segfaults without this patch, as it shrinks the size
considerably (and the chunk is near the end of the file, so we read off
the end of the array rather than accidentally reading another chunk).

We can fix this by checking the size up front (like we do for other
fixed-size chunks, like CDAT).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:01 -07:00
Jeff King 6cf61d0db5 commit-graph: bounds-check base graphs chunk
When we are loading a commit-graph chain, we check that each slice of the
chain points to the appropriate set of base graphs via its BASE chunk.
But since we don't record the size of the chunk, we may access
out-of-bounds memory if the file is corrupted.

Since we know the number of entries we expect to find (based on the
position within the commit-graph-chain file), we can just check the size
up front.

In theory this would also let us drop the st_mult() call a few lines
later when we actually access the memory, since we know that the
computed offset will fit in a size_t. But because the operands
"g->hash_len" and "n" have types "unsigned char" and "int", we'd have to
cast to size_t first. Leaving the st_mult() does that cast, and makes it
more obvious that we don't have an overflow problem.

Note that the test does not actually segfault before this patch, since
it just reads garbage from the chunk after BASE (and indeed, it even
rejects the file because that garbage does not have the expected hash
value). You could construct a file with BASE at the end that did
segfault, but corrupting the existing one is easy, and we can check
stderr for the expected message.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:01 -07:00
Jeff King 9622610e55 commit-graph: detect out-of-bounds extra-edges pointers
If an entry in a commit-graph file has more than 2 parents, the
fixed-size parent fields instead point to an offset within an "extra
edges" chunk. We blindly follow these, assuming that the chunk is
present and sufficiently large; this can lead to an out-of-bounds read
for a corrupt or malicious file.

We can fix this by recording the size of the chunk and adding a
bounds-check in fill_commit_in_graph(). There are a few tricky bits:

  1. We'll switch from working with a pointer to an offset. This makes
     some corner cases just fall out naturally:

      a. If we did not find an EDGE chunk at all, our size will
         correctly be zero (so everything is "out of bounds").

      b. Comparing "size / 4" lets us make sure we have at least 4 bytes
         to read, and we never compute a pointer more than one element
         past the end of the array (computing a larger pointer is
         probably OK in practice, but is technically undefined
         behavior).

      c. The current code casts to "uint32_t *". Replacing it with an
         offset avoids any comparison between different types of pointer
         (since the chunk is stored as "unsigned char *").

  2. This is the first case in which fill_commit_in_graph() may return
     anything but success. We need to make sure to roll back the
     "parsed" flag (and any parents we might have added before running
     out of buffer) so that the caller can cleanly fall back to
     loading the commit object itself.

     It's a little non-trivial to do this, and we might benefit from
     factoring it out. But we can wait on that until we actually see a
     second case where we return an error.

As a bonus, this lets us drop the st_mult() call. Since we've already
done a bounds check, we know there won't be any integer overflow (it
would imply our buffer is larger than a size_t can hold).

The included test does not actually segfault before this patch (though
you could construct a case where it does). Instead, it reads garbage
from the next chunk which results in it complaining about a bogus parent
id. This is sufficient for our needs, though (we care that the fallback
succeeds, and that stderr mentions the out-of-bounds read).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:01 -07:00
Jeff King b72df612af commit-graph: check size of commit data chunk
We expect a commit-graph file to have a fixed-size data record for each
commit in the file (and we know the number of commits to expect from the
size of the lookup table). If we encounter a file where this is too
small, we'll look past the end of the chunk (and possibly even off the
mapped memory).

We can fix this by checking the size up front when we record the
pointer.

The included test doesn't segfault, since it ends up reading bytes
from another chunk. But it produces nonsense results, since the values
it reads are garbage. Our test notices this by comparing the output to a
non-corrupted run of the same command (and of course we also check that
the expected error is printed to stderr).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:01 -07:00
Jeff King c0fe9b2da5 midx: check size of revindex chunk
When we load a revindex from disk, we check the size of the file
compared to the number of objects we expect it to have. But when we use
a RIDX chunk stored directly in the midx, we just access the memory
directly. This can lead to out-of-bounds memory access for a corrupted
or malicious multi-pack-index file.

We can catch this by recording the RIDX chunk size, and then checking it
against the expected size when we "load" the revindex. Note that this
check is much simpler than the one that load_revindex_from_disk() does,
because we just have the data array with no header (so we do not need
to account for the header size, nor do we need to bother validating
the header values).

The test confirms both that we catch this case, and that we continue the
process (the revindex is required to use the midx bitmaps, but we
fallback to a non-bitmap traversal).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:01 -07:00
Jeff King 2abd56e9b2 midx: bounds-check large offset chunk
When we see a large offset bit in the regular midx offset table, we
use the entry as an index into a separate large offset table (just like
a pack idx does). But we don't bounds-check the access to that large
offset table (nor even record its size when we parse the chunk!).

The equivalent code for a regular pack idx is in check_pack_index_ptr().
But things are a bit simpler here because of the chunked format: we can
just check our array index directly.

As a bonus, we can get rid of the st_mult() here. If our array
bounds-check is successful, then we know that the result will fit in a
size_t (and the bounds check uses a division to avoid overflow
entirely).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 15:55:01 -07:00