"git p4" tried to store symlinks to LFS when told, but has been
fixed not to do so, because it does not make sense.
* mm/p4-symlink-with-lfs:
git-p4 shouldn't attempt to store symlinks in LFS
The attribute subsystem learned to honor `attr.tree` configuration
that specifies which tree to read the .gitattributes files from.
* jc/attr-tree-config:
attr: add attr.tree for setting the treeish to read attributes from
attr: read attributes from HEAD when bare repo
Many typos, ungrammatical sentences and wrong phrasing have been
fixed.
* sn/typo-grammo-phraso-fixes:
t/README: fix multi-prerequisite example
doc/gitk: s/sticked/stuck/
git-jump: admit to passing merge mode args to ls-files
doc/diff-options: improve wording of the log.diffMerges mention
doc: fix some typos, grammar and wording issues
The index file has room only for the lower 32 bits of the file size in
the cached stat information, which means the cached stat information
will have 0 in its sd_size member for a file whose size is a multiple
of 4 GiB. Such a file is then mistaken for a racily clean path. Avoid
this by storing a bogus sd_size value instead for such files.
* bc/racy-4gb-files:
Prevent git from rehashing 4GiB files
t: add a test helper to truncate files
Feeding "git stash store" with a random commit that was not created
by "git stash create" now errors out.
* jc/fail-stash-to-store-non-stash:
stash: be careful what we store
The codepaths that read "chunk" formatted files have been corrected
to pay attention to the chunk size and notice broken files.
* jk/chunk-bounds: (21 commits)
t5319: make corrupted large-offset test more robust
chunk-format: drop pair_chunk_unsafe()
commit-graph: detect out-of-order BIDX offsets
commit-graph: check bounds when accessing BIDX chunk
commit-graph: check bounds when accessing BDAT chunk
commit-graph: bounds-check generation overflow chunk
commit-graph: check size of generations chunk
commit-graph: bounds-check base graphs chunk
commit-graph: detect out-of-bounds extra-edges pointers
commit-graph: check size of commit data chunk
midx: check size of revindex chunk
midx: bounds-check large offset chunk
midx: check size of object offset chunk
midx: enforce chunk alignment on reading
midx: check size of pack names chunk
commit-graph: check consistency of fanout table
midx: check size of oid lookup chunk
commit-graph: check size of oid fanout chunk
midx: stop ignoring malformed oid fanout chunk
t: add library for munging chunk-format files
...
Unlike "git log --pretty=%D", "git log --pretty="%(decorate)" did
not auto-initialize the decoration subsystem, which has been
corrected.
* ak/pretty-decorate-more-fix:
pretty: fix ref filtering for %(decorate) formats
The code to iterate over loose references has been optimized to
reduce the number of lstat() system calls.
* vd/loose-ref-iteration-optimization:
files-backend.c: avoid stat in 'loose_fill_ref_dir'
dir.[ch]: add 'follow_symlink' arg to 'get_dtype'
dir.[ch]: expose 'get_dtype'
ref-cache.c: fix prefix matching in ref iteration
"git merge-tree" learned to take strategy backend specific options
via the "-X" option, like "git merge" does.
* ty/merge-tree-strategy-options:
merge: introduce {copy|clear}_merge_options()
merge-tree: add -X strategy option
git-p4.py would attempt to put a symlink in LFS if its file extension
matched git-p4.largeFileExtensions.
Git LFS doesn't store symlinks because smudge/clean filters don't handle
symlinks. They never get passed to the filter process or to the
smudge/clean filters, nor could that occur without a change to the
protocol or command-line interface. Unless Git learned how to send them
to the filters, Git LFS would have a hard time using them in any useful
way.
Git LFS's goal is to move large files out of the repository history, and
symlinks are functionally limited to 4 KiB or a similar size on most
systems.
Signed-off-by: Matthew McClain <mmcclain@noprivs.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some tests in t7601 use "test -f" and "test ! -f" to see if a path
exists or is missing.
Use the test_path_is_file and test_path_is_missing helper functions
to state the intent of these tests more clearly. This especially
matters for the
"missing" case because "test ! -f F" will be happy if "F" exists as a
directory, but the intent of the test is that "F" should not exist, even
as a directory. The updated code expresses this better.
Signed-off-by: Dorcas AnonoLitunya <anonolitunya@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
`git am` passes the value given to its `--whitespace` option through
to the underlying `git apply`, and the value is called <action> over
there. Fix the documentation for the command, which calls the value
<option>, to say <action> instead.
Note that the option help given by `git am -h` already calls the
value <action>, so there is no need to make a matching change there.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix "git merge-tree" to stop segfaulting when the --attr-source
option is used.
* jc/merge-ort-attr-index-fix:
merge-ort: initialize repo in index state
"git repack" learned "--max-cruft-size" to prevent cruft packs from
growing without bounds.
* tb/repack-max-cruft-size:
repack: free existing_cruft array after use
builtin/repack.c: avoid making cruft packs preferred
builtin/repack.c: implement support for `--max-cruft-size`
builtin/repack.c: parse `--max-pack-size` with OPT_MAGNITUDE
t7700: split cruft-related tests to t7704
As described in the CodingGuidelines document, a single-line message
given to die() and its friends should not capitalize its first word,
and should not end with a full stop.
Signed-off-by: Naomi Ibe <naomi.ibeh69@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
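For illustration, a small made-up example of the preferred style (the
message text and variable are illustrative, not taken from the patch):
    /* avoid: capitalized first word and trailing full stop */
    die("Unable to open '%s'.", path);
    /* prefer: lowercase first word, no full stop */
    die("unable to open '%s'", path);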
The test t5319.88 ("reader bounds-checks large offset table") can fail
intermittently. The failure mode looks like this:
1. An earlier test sets up "objects64", a directory that can be used
to produce a midx with a corrupted large-offsets table. To get the
large offsets, it corrupts the normal ".idx" file to have a fake
large offset, and then builds a midx from that.
That midx now has a large offset table, which is what we want. But
we also have a .idx on disk that has a corrupted entry. We'll call
the object with the corrupted large-offset "X".
2. In t5319.88, we further corrupt the midx by reducing the size of
the large-offset chunk (because our goal is to make sure we do not
do an out-of-bounds read on it).
3. We then enumerate all of the objects with "cat-file --batch-check
--batch-all-objects", expecting to see a complaint when we try to
show object X. We use --batch-all-objects because our objects64
repo doesn't actually have any refs (but if we check them all, one
of them will be the failing one). The default batch-check format
includes %(objecttype) and %(objectsize), both of which require us
to access the actual pack data (and thus requires looking at the
offset).
4a. Usually, this succeeds. We try to output object X, do a lookup via
the midx for the type/size lookup, and run into the corrupt
large-offset table.
4b. But sometimes we hit a different error. If another object points
to X as a delta base, then trying to find the type of that object
requires walking the delta chain to the base entry (since only the
base has the concrete type; deltas themselves are either OFS_DELTA
or REF_DELTA).
Normally this would not require separate offset lookups at all, as
deltas are usually stored as OFS_DELTA, specifying the relative
offset to the base. But the corrupt idx created in step 1 is done
directly with "git pack-objects" and does not pass the
--delta-base-offset option, meaning we have REF_DELTA entries!
Those do have to consult an index to find the location of the base
object, and they use the pack .idx to do this. The same pack .idx
that we know is corrupted from step 1!
Git does notice the error, but it does so by seeing the corrupt
.idx file, not the corrupt midx file, and the error it reports is
different, causing the test to fail.
The set of objects created in the test is deterministic. But the delta
selection seems not to be (which is not too surprising, as it is
multi-threaded). I have seen the failure in Windows CI but haven't
reproduced it locally (not even with --stress). Re-running a failed
Windows CI job tends to work. But when I download and examine the trash
directory from a failed run, it shows a different set of deltas than I
get locally. But the exact source of non-determinism isn't that
important; our test should be robust against any order.
There are a few options to fix this:
a. It would be OK for the "objects64" setup to "unbreak" the .idx file
after generating the midx. But then it would be hard for subsequent
tests to reuse it, since it is the corrupted idx that forces the
midx to have a large offset table.
b. The "objects64" setup could use --delta-base-offset. This would fix
our problem, but earlier tests have many hard-coded offsets. Using
OFS_DELTA would change the locations of objects in the pack (this
might even be OK because I think most of the offsets are within the
.idx file, but it seems brittle and I'm afraid to touch it).
c. Our cat-file output is in oid order by default. Since we store
bases before deltas, if we went in pack order (using the
"--unordered" flag), we'd always see our corrupt X before any delta
which depends on it. But using "--unordered" means we skip the midx
entirely. That makes sense, since it is just enumerating all of
the packs, using the offsets found in their .idx files directly.
So it doesn't work for our test.
d. We could ask directly about object X, rather than enumerating all
of them. But that requires further hard-coding of the oid (both
sha1 and sha256) of object X. I'd prefer not to introduce more
brittleness.
e. We can use a --batch-check format that looks at the pack data, but
doesn't have to chase deltas. The problem in this case is
%(objecttype), which has to walk to the base. But %(objectsize)
does not; we can get the value directly from the delta itself.
Another option would be %(deltabase), where we report the REF_DELTA
name but don't look at its data.
I've gone with option (e) here. It's kind of subtle, but it's simple and
has no side effects.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git diff --merge-base X other args..." insisted that X must be a
commit and errored out when given an annotated tag that peels to a
commit, but we only need it to be a committish. This has been
corrected.
* ar/diff-index-merge-base-fix:
diff: fix --merge-base with annotated tags
In .gitmodules files, submodules are keyed by their names, and the
path to the submodule whose name is $name is specified by the
submodule.$name.path variable. There were a few codepaths that
mixed the name and path up when consulting the submodule database,
which have been corrected. It took a long time for these bugs to be
found, as the name of a submodule is initially the same as its path,
and the problem does not surface until the submodule is moved to a
different path, which apparently happens very rarely.
* js/submodule-fix-misuse-of-path-and-name:
t7420: test that we correctly handle renamed submodules
t7419: test that we correctly handle renamed submodules
t7419, t7420: use test_cmp_config instead of grepping .gitmodules
t7419: actually test the branch switching
submodule--helper: return error from set-url when modifying failed
submodule--helper: use submodule_from_path in set-{url,branch}
Leakfix.
* jk/commit-graph-leak-fixes:
commit-graph: clear oidset after finishing write
commit-graph: free write-context base_graph_name during cleanup
commit-graph: free write-context entries before overwriting
commit-graph: free graph struct that was not added to chain
commit-graph: delay base_graph assignment in add_graph_to_chain()
commit-graph: free all elements of graph chain
commit-graph: move slab-clearing to close_commit_graph()
merge: free result of repo_get_merge_bases()
commit-reach: free temporary list in get_octopus_merge_bases()
t6700: mark test as leak-free
Test coverage for trailers has been improved.
* la/trailer-test-and-doc-updates:
trailer doc: <token> is a <key> or <keyAlias>, not both
trailer doc: separator within key suppresses default separator
trailer doc: emphasize the effect of configuration variables
trailer --unfold help: prefer "reformat" over "join"
trailer --parse docs: add explanation for its usefulness
trailer --only-input: prefer "configuration variables" over "rules"
trailer --parse help: expose aliased options
trailer --no-divider help: describe usual "---" meaning
trailer: trailer location is a place, not an action
trailer doc: narrow down scope of --where and related flags
trailer: add tests to check defaulting behavior with --no-* flags
trailer test description: this tests --where=after, not --where=before
trailer tests: make test cases self-contained
The index stores file sizes using a uint32_t. This causes any file
whose size is a multiple of 2^32 bytes to have a cached file size of
zero. Zero is a special value used by the racily-clean check. This
causes git to rehash every such file every time git status or git
commit is run.
This patch mitigates the problem by making all files whose size is a
multiple of 2^32 appear to have a size of 1<<31 instead of zero. The
value of 1<<31 is chosen to keep it as far away from zero as possible
to help prevent things getting mixed up with unpatched versions of git.
For example, consider a 2^32-byte file in the index of patched git.
Patched git would record its size as 2^31 in the cache. An unpatched
git would see the file as having changed in size and rehash it, which
is safe. The file would have to grow or shrink by exactly 2^31 bytes
and retain all of its ctime, mtime, and other attributes for old git
not to notice the change.
This patch does not change the behavior of any file whose size is not
an exact multiple of 2^32.
Signed-off-by: Jason D. Hatton <jhatton@globalfinishing.com>
Signed-off-by: brian m. carlson <bk2204@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
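A minimal C sketch of the mitigation described above, as it could look
when filling in the cached stat data (the helper name munge_st_size is
illustrative):
    static unsigned int munge_st_size(off_t st_size)
    {
            unsigned int sd_size = (unsigned int)st_size;
            /*
             * A size that is an exact multiple of 2^32 truncates to 0,
             * which is the racily-clean marker; store 1<<31 instead so
             * such a file is never mistaken for that case.
             */
            if (!sd_size && st_size)
                    return 0x80000000;
            return sd_size;
    }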
In a future commit, we're going to work with some large files that will
be at least 4 GiB in size. To take advantage of the sparseness
functionality on most Unix systems and avoid running the system out of
disk space, it would be convenient to use truncate(2) to simply create
a sparse file of sufficient size.
However, the GNU truncate(1) utility isn't portable, so let's write a
tiny test helper that does the work for us.
Signed-off-by: brian m. carlson <bk2204@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
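A minimal sketch of such a helper as a standalone C program (the real
helper would be wired into the test-tool framework; the argument
handling here is illustrative):
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>
    /* usage: truncate <file> <size-in-bytes>; creates a sparse file */
    int main(int argc, char **argv)
    {
            char *end;
            off_t size;
            int fd;
            if (argc != 3)
                    return 1;
            size = (off_t)strtoull(argv[2], &end, 10);
            if (*end)
                    return 1;
            fd = open(argv[1], O_WRONLY | O_CREAT, 0644);
            if (fd < 0 || ftruncate(fd, size) < 0)
                    return 1;
            return close(fd) ? 1 : 0;
    }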
44451a2 (attr: teach "--attr-source=<tree>" global option to "git",
2023-05-06) provided the ability to pass in a treeish as the attr
source. In the context of serving Git repositories as bare repos, as we
do at GitLab, it would be easier to point --attr-source at HEAD for all
commands by setting it once.
Add a new config attr.tree that allows this.
Signed-off-by: John Cai <johncai86@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The motivation for 44451a2e5e (attr: teach "--attr-source=<tree>"
global option to "git", 2023-05-06) was to make it possible to use
gitattributes with bare repositories.
To make it easier to read gitattributes in bare repositories, however,
let's just make HEAD:.gitattributes the default. This is in line with
how mailmap works; see 8c473cecfd (mailmap: default mailmap.blob in
bare repositories, 2012-12-13).
Signed-off-by: John Cai <johncai86@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The GitHub CI workflow has learned to trigger a Coverity check.
* js/ci-coverity:
coverity: detect and report when the token or project is incorrect
coverity: allow running on macOS
coverity: support building on Windows
coverity: allow overriding the Coverity project
coverity: cache the Coverity Build Tool
ci: add a GitHub workflow to submit Coverity scans
"git stash store" is meant to store what "git stash create"
produces, as these two are implementation details of the end-user
facing "git stash save" command. Even though it is clearly
documented as such, users would try silly things like "git stash
store HEAD" to render their stash unusable.
Worse yet, because "git stash drop" does not allow such a stash
entry to be removed, "git stash clear" would be the only way to
recover from such a mishap. Reuse the logic that allows "drop" to
refrain from working on such a stash entry to teach "store" to avoid
storing an object that is not a stash entry in the first place.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When mostly the same set of options is to be used to perform multiple
merges, callers may want to create each instance of the merge_options
structure by copying from the same template instance. We saw such a
use recently in "git merge-tree".
Let's make the pattern official by introducing copy_merge_options()
as a supported way to make a copy of the structure, and also give
clear_merge_options() to release any resources held by a copied
instance. Currently we only make a shallow copy, so the former is a
mere structure assignment while the latter is a no-op, but this may
change in the future as the members of the merge_options structure
evolve.
Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
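A sketch of that pattern, consistent with the shallow-copy semantics
described above (the actual declarations live in git's merge headers
and may differ in detail):
    /* a shallow copy today: plain structure assignment suffices */
    void copy_merge_options(struct merge_options *dst,
                            struct merge_options *src)
    {
            *dst = *src;
    }
    /*
     * Nothing to release while the copy is shallow, but callers should
     * still pair this with copy_merge_options() so a future deep copy
     * stays leak-free.
     */
    void clear_merge_options(struct merge_options *opt)
    {
            /* no-op for now */
    }
A caller such as "git merge-tree" can then keep one template
merge_options, copy it before each merge, and clear the copy when done.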
Shuffle some bits across headers and sources to prepare for the
libification effort.
* cw/prelim-cleanup:
parse: separate out parsing functions from config.h
config: correct bad boolean env value error message
wrapper: reduce scope of remove_or_warn()
hex-ll: separate out non-hash-algo functions