It was reported that git-clone(1) started to fail in Git v2.44 when
cloning via HTTPS with a config that contains an "includeIf.*.onbranch"
condition:
    $ git clone https://example.com/repo.git
    Cloning into 'repo'...
    BUG: refs.c:2083: reference backend is unknown
    error: git-remote-https died of signal 6
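The failure can be triggered with a global config along these lines
(branch name and include path are illustrative):
    [includeIf "onbranch:main"]
        path = main-only.inc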
This regression was bisected to 0fcc285c5e (refs: refactor logic to look
up storage backends, 2023-12-29). This commit tightens the logic to look
up ref backends such that we now die when the backend has not yet been
detected by reading the gitconfig.
Now on its own, this commit wouldn't have caused the failure. But in
18c9cb7524 (builtin/clone: create the refdb with the correct object
format, 2023-12-12) we have also changed how git-clone(1) initializes
the refdb such that it happens after the remote helper is spawned, which
is required so that we can first learn about the object format used by
the remote repository before initializing the refdb. Starting with this
change, the remote helper will be unable to detect the repository right
from the start and thus have an unconfigured ref backend. Consequently,
when we try to resolve the "includeIf.*.onbranch" condition, we will now
fail to look up the refdb and die.
This regression has already been fixed via 199f44cb2e (builtin/clone:
allow remote helpers to detect repo, 2024-02-27), where we now
pre-initialize a partial refdb so that the remote helper can detect the
repository right from the start. But it's clear that we're lacking test
coverage of this functionality.
Add a test to avoid regressing in the future. Note that this test stops
short of defining the desired behaviour for the "onbranch" condition
during a clone. It's not quite clear how exactly it should behave, so
this is a leftover bit for the future.
Reported-by: Angelo Dureghello <angelo@kernel-space.org>
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduce a new `--ref-format` value flag for git-clone(1) that allows
the user to specify the ref format that is to be used for a newly
initialized repository.
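A typical invocation then looks like this (the "reftable" value shown
here is an illustration, following the existing ref storage formats):
    git clone --ref-format=reftable https://example.com/repo.git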
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
They are equivalents and the former still exists, so as long as the
only change this commit makes are to rewrite test_i18ngrep to
test_grep, there won't be any new bug, even if there still are
callers of test_i18ngrep remaining in the tree, or when merged to
other topics that add new uses of test_i18ngrep.
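For illustration, a single call site changes only in the helper name
(the message being grepped for is just an example):
    # before
    test_i18ngrep "not a git repository" err
    # after
    test_grep "not a git repository" err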
This patch was produced more or less with
    git grep -l -e 'test_i18ngrep ' 't/t[0-9][0-9][0-9][0-9]-*.sh' |
    xargs perl -p -i -e 's/test_i18ngrep /test_grep /'
and a good way to sanity check the result yourself is to run the
above in a checkout of c4603c1c (test framework: further deprecate
test_i18ngrep, 2023-10-31) and compare the resulting working tree
contents with the result of applying this patch to the same commit.
You'll see that, in addition to the mechanical conversion you just
reproduced, test_i18ngrep in a few t/lib-*.sh files has also been
corrected.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The creationToken heuristic provides an ordering on the bundles
advertised by a bundle list. Teach the Git client to download bundles
differently when this heuristic is advertised.
The bundles in the list are sorted by their advertised creationToken
values, then downloaded in decreasing order. This avoids the previous
strategy of downloading bundles in an arbitrary order and attempting
to apply them (likely failing in the case of required commits) until
discovering the order through attempted unbundling.
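For reference, a bundle list advertising this heuristic looks roughly
like this (URIs are illustrative):
    [bundle]
        version = 1
        mode = all
        heuristic = creationToken
    [bundle "bundle-1"]
        uri = https://bundles.example.com/bundle-1.bundle
        creationToken = 1
    [bundle "bundle-2"]
        uri = https://bundles.example.com/bundle-2.bundle
        creationToken = 2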
During a fresh 'git clone', it may make sense to download the bundles in
increasing order, since that would prevent the need to attempt
unbundling a bundle with required commits that do not exist in our empty
object store. But the cost of testing an unbundle is quite low, so the
chosen order instead optimizes for a future bundle download during a
'git fetch' operation with a non-empty object store.
Since the Git client continues fetching from the Git remote after
downloading and unbundling bundles, the client's object store can be
ahead of the bundle provider's object store. The next time it attempts
to download from the bundle list, it makes most sense to download only
the most-recent bundles until all tips successfully unbundle. The
strategy implemented here provides that short-circuit where the client
downloads a minimal set of bundles.
However, the naive approach of downloading bundles until one
successfully unbundles, and then expecting the previously-downloaded
bundles to unbundle successfully now, is not good enough. The example
repository in t5558 demonstrates this well:
 ---------------- bundle-4
        4
       / \
 -----|---|------ bundle-3
      |   |
      |   3
      |   |
 -----|---|------ bundle-2
      |   |
      2   |
      |   |
 -----|---|------ bundle-1
       \ /
        1
        |
 (previous commits)
In this repository, if we already have the objects for bundle-1 and then
try to fetch from this list, the naive approach will fail. bundle-4
requires both bundle-3 and bundle-2, though bundle-3 will successfully
unbundle without bundle-2. Thus, the algorithm needs to keep this in
mind.
A later implementation detail will store the maximum creationToken seen
during such a bundle download, and the client will avoid downloading a
bundle unless its creationToken is strictly greater than that stored
value. For now, if the client downloads from a bundle list that is
identical to the one it saw during its previous download, it will
download the most-recent bundle and then stop, since that bundle's
required commits are already in the object store.
Add tests that exercise this behavior, but we will expand upon these
tests when incremental downloads during 'git fetch' make use of
creationToken values.
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A previous change introduced the transport methods to acquire a bundle
list from the 'bundle-uri' protocol v2 command, when advertised _and_
when the client has chosen to enable the feature.
Teach Git to download and unbundle the data advertised by those bundles
during 'git clone'. This takes place between the ref advertisement and
the object data download, and stateful connections will linger while
the client downloads bundles. In the future, we should consider closing
the remote connection during this process.
Also, since the --bundle-uri option exists, we do not want to mix the
advertised bundles with the user-specified bundles.
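As a rough sketch of the two code paths (the "transfer.bundleURI"
opt-in is an assumption about how a client enables the advertised
feature; URLs are illustrative):
    # use bundles advertised by the server, if any
    git -c transfer.bundleURI=true clone https://example.com/repo.git
    # use a bundle explicitly given by the user, kept separate from the above
    git clone --bundle-uri=https://example.com/repo.bundle \
        https://example.com/repo.git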
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 6dcbdc0d66 (remote: create fetch.credentialsInUrl config,
2022-06-06) added tests for our handling of passwords in URLs. Since the
obvious URL to be affected is git-over-http, the tests use http. However,
they don't set up a test server; they just try to access
https://localhost, assuming it will fail (because nothing is
listening there).
This causes some possible problems:
- There might be a web server running on localhost, and we do not
actually want to connect to that.
- The DNS resolver, or the local firewall, might take a substantial
amount of time (or forever, whichever comes first) to fail to
connect, slowing down the test cases unnecessarily.
- Since there's no server, our tests for "allow" and "warn" still
expect the clone/fetch/push operations to fail, even though in the
real world we'd expect these to succeed. We scrape stderr to see
what happened, but it's not as robust as a more realistic test.
Let's instead move these to t5551, which is all about testing http and
where we have a real server. That eliminates any issues with contacting
a strange URL, and lets the "allow" and "warn" tests confirm that the
operation actually succeeds.
It's not quite a verbatim move for a few reasons:
- we can drop the LIBCURL dependency; it's already part of
lib-httpd.sh
- we'll use HTTPD_URL_USER_PASS, etc, instead of our fake URL. To
avoid repetition, we'll add a few extra variables.
- the "https://username:@localhost" test uses a funny URL that
lib-httpd.sh doesn't provide. We'll similarly construct it in a
variable. Note that we're hard-coding the lib-httpd username here,
but t5551 already does that everywhere.
- for the "domain:port" test, the URL provided by lib-httpd is fine,
since our test server will always be on an exotic port. But we'll
confirm in the test that this is so.
- since our message-matching is done via grep, I simplified it to use
a regex, rather than trying to massage lib-httpd's variables.
Arguably this makes it more readable, too, while retaining the bits
we care about: the fatal/warning distinction, the "uses plaintext"
message, and the fact that the password was redacted.
- we'll use the /auth/ path for the repo, which shows that we are
indeed making use of the auth information when needed.
- we'll also use /smart/; most of these tests could be done via /dumb/
in t5550, but setting up pushes there requires extra effort and
dependencies. The smart protocol is what most everyone is using
these days anyway.
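Putting the pieces above together, the reworked tests end up shaped
roughly like this (repository path, clone directory, and grep pattern
are illustrative):
    url_userpass="$HTTPD_URL_USER_PASS/auth/smart/repo.git"
    test_expect_success 'clone warns about credentials in URL' '
        git -c transfer.credentialsInUrl=warn \
            clone "$url_userpass" warn-clone 2>err &&
        grep "warning:.*plaintext" err
    '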
This patch is my own, but I stole the analysis and a few bits of the
commit message from a patch by Johannes Schindelin.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
To prepare for the default value of `protocol.file.allow` to change to
"user", ensure tests that rely on local submodules can initialize them
over the file protocol.
Tests that only need to interact with submodules in a limited capacity
have individual Git commands annotated with the appropriate
configuration via `-c`. Tests that interact with submodules a handful of
times use `test_config_global` instead. Test scripts that rely on
submodules throughout use a `git config --global` during a setup test
towards the beginning of the script.
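Concretely, the three shapes of the change look like this (repository
names are illustrative):
    # a single command that needs the file protocol just once
    git -c protocol.file.allow=always clone ./sub sub-clone
    # a test script that needs it a handful of times
    test_config_global protocol.file.allow always
    # a script that relies on submodules throughout, in an early setup test
    git config --global protocol.file.allow always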
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Currently, negotiation for V0/V1/V2 fetch have trace2 regions covering
the entire negotiation process. However, we'd like additional data, such
as timing for each round of negotiation or the number of "haves" in each
round. Additionally, "independent negotiation" (AKA push negotiation)
has no tracing at all. Having this data would allow us to compare the
performance of the various negotiation implementations, and to debug
unexpectedly slow fetch & push sessions.
Add per-round trace2 regions for all negotiation implementations (V0+V1,
V2, and independent negotiation), as well as an overall region for
independent negotiation. Add trace2 data logging for the number of haves
and "in vain" objects for each round, and for the total number of rounds
once negotiation completes. Finally, add a few checks into various
tests to verify that the number of rounds is logged as expected.
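Such a check can be sketched as follows; the event key grepped for
("total_rounds") is meant to illustrate the kind of data added here,
not to document an exact name:
    GIT_TRACE2_EVENT="$(pwd)/trace.event" git -C client fetch origin
    grep "total_rounds" trace.event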
Signed-off-by: Josh Steadmon <steadmon@google.com>
Acked-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Rename fetch.credentialsInUrl to transfer.credentialsInUrl as the
single configuration variable should work both in pushing and
fetching.
* ab/credentials-in-url-more:
transfer doc: move fetch.credentialsInUrl to "transfer" config namespace
fetch doc: note "pushurl" caveat about "credentialsInUrl", elaborate
Add and use a LIBCURL prerequisite for tests added in
6dcbdc0d66 (remote: create fetch.credentialsInUrl config,
2022-06-06).
These tests would get as far as emitting a couple of the warnings we
were testing for, but would then die as we had no "git-remote-https"
program compiled.
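For context, this is roughly how such a prerequisite is wired up and
then consumed; the exact condition lives in test-lib.sh and follows
the build settings, and the test body is only a sketch of the kind of
test being guarded:
    # set the prerequisite only if Git was built with libcurl
    test -z "$NO_CURL" && test_set_prereq LIBCURL
    test_expect_success LIBCURL 'warn on credentials in URL' '
        test_must_fail git -c fetch.credentialsInUrl=warn \
            clone https://user:pass@localhost/none 2>err &&
        grep warning err
    '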
It would be more consistent with other prerequisites (e.g. PERL for
NO_PERL) to name this "CURL", but since e9184b0789 (t5561: skip tests
if curl is not available, 2018-04-03) we've had that prerequisite
defined for checking if we have the curl(1) program.
The existing "CURL" prerequisite is only used in one place, and we
should probably name it "CURL_PROGRAM", then rename "LIBCURL" to
"CURL" as a follow-up, but for now (pre-v2.37.0) let's aim for the
most minimal fix possible.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Rename the "fetch.credentialsInUrl" configuration variable introduced
in 6dcbdc0d66 (remote: create fetch.credentialsInUrl config,
2022-06-06) to "transfer".
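That is, the variable will be spelled like this going forward (with the
same values as before):
    [transfer]
        credentialsInUrl = warn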
There are existing exceptions, but generally speaking the
"<namespace>.<var>" configuration should only apply to the commands
described by the "namespace" (and their sub-commands, so e.g. "clone.*"
or "fetch.*" might also configure "git-remote-https").
But in the case of "fetch.credentialsInUrl" we've got a configuration
variable that configures the behavior of all of "clone", "push" and
"fetch". Someone adjusting the "fetch.*" configuration won't expect to
have the behavior of "git push" altered, especially as we have the
pre-existing "{transfer,fetch,receive}.fsckObjects", which configures
different parts of the transfer dialog.
So let's move this configuration variable to the "transfer" namespace
before it's exposed in a release. We could add all of
"{transfer,fetch,pull}.credentialsInUrl" at some other time, but once
we have "fetch" configure "pull" such an arrangement would be a
confusing mess, as we'd at least need to have "fetch" configure
"push" (but not the other way around), or change existing behavior.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Acked-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "fetch.credentialsInUrl" configuration variable controls what
happens when a URL with embedded login credential is used.
* ds/credentials-in-url:
remote: create fetch.credentialsInUrl config
Users sometimes provide a "username:password" combination in their
plaintext URLs. Since Git stores these URLs in plaintext in the
.git/config file, this is a very insecure way of storing these
credentials. Credential managers are a more secure way of storing this
information.
System administrators might want to prevent this kind of use by users on
their machines.
Create a new "fetch.credentialsInUrl" config option and teach Git to
warn or die when seeing a URL with this kind of information. The warning
anonymizes the sensitive information of the URL to be clear about the
issue.
This change currently defaults the behavior to "allow" which does
nothing with these URLs. We can consider changing this behavior to
"warn" by default if we wish. At that time, we may want to add some
advice about setting fetch.credentialsInUrl=ignore for users who still
want to follow this pattern (and not receive the warning).
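A minimal illustration of the new knob (URL and credentials are made
up):
    # default is "allow", which does nothing special
    git -c fetch.credentialsInUrl=warn \
        clone https://user:secret@example.com/repo.git
    # or refuse outright
    git -c fetch.credentialsInUrl=die \
        clone https://user:secret@example.com/repo.git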
An earlier version of this change injected the logic into
url_normalize() in urlmatch.c. While most code paths that parse URLs
eventually normalize the URL, that normalization does not happen early
enough in the stack to avoid attempting connections to the URL first. By
inserting a check into the remote validation, we identify the issue
before making a connection. In the old code path, this was revealed by
testing the new t5601-clone.sh test under --stress, resulting in an
instance where the return code was 13 (SIGPIPE) instead of 128 from the
die().
However, we can reuse the parsing information from url_normalize() in
order to benefit from its well-worn parsing logic. We can use the struct
url_info that is created in that method to replace the password with
"<redacted>" in our error messages. This comes with a slight downside
that the normalized URL might look slightly different from the input URL
(for instance, the normalized version adds a closing slash). This should
not hinder users figuring out what the problem is and being able to fix
the issue.
As an attempt to ensure the parsing logic did not catch any
unintentional cases, I modified this change locally to use the "die"
option by default. Running the test suite succeeds except for the
explicit username:password URLs used in t5550-http-fetch-dumb.sh and
t5541-http-push-smart.sh. This means that all other tested URLs did not
trigger this logic.
The tests show that the proper error messages appear (or do not
appear), but also count the number of error messages. When only warning,
each process validates the remote URL and outputs a warning. This
happens twice for clone, three times for fetch, and once for push.
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Refactor various test code to use the "test_hook" helper. This change:
- Fixes the long-standing issues with those tests using "#!/bin/sh"
instead of "#!$SHELL_PATH". Using "#!/bin/sh" here happened to work
because this code was so simple that it e.g. worked on Solaris
/bin/sh.
- Removes the "mkdir .git/hooks" invocation; as explained in a
preceding commit, we'll rely on the default templates to create that
directory for us.
For the test in "t5402-post-merge-hook.sh" it's easier and more
correct to unroll the for-loop into a test_expect_success, so let's do
that.
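For reference, the helper lets a test write a hook in one go, roughly
like this (the hook body is illustrative):
    test_hook post-merge <<-\EOF
    echo "post-merge ran" >>hook.log
    EOF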
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Preliminary clean-up of tests before the main reftable changes
hit the codebase.
* hn/prep-tests-for-reftable: (22 commits)
t1415: set REFFILES for test specific to storage format
t4202: mark bogus head hash test with REFFILES
t7003: check reflog existence only for REFFILES
t7900: stop checking for loose refs
t1404: mark tests that muck with .git directly as REFFILES.
t2017: mark --orphan/logAllRefUpdates=false test as REFFILES
t1414: mark corruption test with REFFILES
t1407: require REFFILES for for_each_reflog test
test-lib: provide test prereq REFFILES
t5304: use "reflog expire --all" to clear the reflog
t5304: restyle: trim empty lines, drop ':' before >
t7003: use rev-parse rather than FS inspection
t5000: inspect HEAD using git-rev-parse
t5000: reformat indentation to the latest fashion
t1301: fix typo in error message
t1413: use tar to save and restore entire .git directory
t1401-symbolic-ref: avoid direct filesystem access
t1401: use tar to snapshot and restore repo state
t5601: read HEAD using rev-parse
t9300: check ref existence using test-helper rather than a file system check
...
An HTTP-clone test introduced in 4fe788b1b0 ("builtin/clone.c: add
--reject-shallow option", 2021-04-01) only works in protocol v2, but is
not marked as such.
The aforementioned patch implements --reject-shallow for a variety of
situations, but usage of a protocol that requires a remote helper is not
one of them. (Such an implementation would require extending the remote
helper protocol to support the passing of a "reject shallow" option, and
then teaching it to both protocol-speaking ends.)
For now, to make it pass when GIT_TEST_PROTOCOL_VERSION=0 is passed, add
"-c protocol.version=2". A more complete solution would be either to
augment the remote helper protocol to support this feature or to return
a fatal error when using --reject-shallow with a protocol that uses a
remote helper.
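In other words, the test's clone invocation gains an explicit protocol
version, roughly (repository path illustrative):
    git -c protocol.version=2 clone --reject-shallow \
        "$HTTPD_URL/smart/repo.git" out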
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In some scenarios, users may want more history than the repository
they are cloning from, which happens to be a shallow repository, can
give them. But because users don't know it is a shallow repository
until they have downloaded it locally, we may want to refuse to clone
this kind of repository, without creating any unnecessary files.
The '--depth=x' option cannot be used as a solution; the source may
be deep enough to give us 'x' commits when cloned, but the user may
later need to deepen the history to arbitrary depth.
Teach '--reject-shallow' option to "git clone" to abort as soon as
we find out that we are cloning from a shallow repository.
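Usage is simply (URL illustrative):
    # abort immediately if the source repository turns out to be shallow
    git clone --reject-shallow https://example.com/repo.git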
Signed-off-by: Li Linchao <lilinchao@oschina.cn>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As a follow-up to d162b25f95 (tests: remove support for
GIT_TEST_GETTEXT_POISON, 2021-01-20) remove those uses of the now
always true C_LOCALE_OUTPUT prerequisite from those tests which
declare it as an argument to test_expect_{success,failure}.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This trick was performed via
    $ (cd t &&
       sed -i -e 's/master/main/g' -e 's/MASTER/MAIN/g' \
           -e 's/Master/Main/g' -- t5[6-9]*.sh)
This allows us to define `GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=main`
for those tests.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In addition to the manual adjustment to let the `linux-gcc` CI job run
the test suite with `master` and then with `main`, this patch makes sure
that GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME is set in all test scripts
that currently rely on the initial branch name being `master` by default.
To determine which test scripts to mark up, the first step was to
force-set the default branch name to `master` in
- all test scripts that contain the keyword `master`,
- t4211, which expects `t/t4211/history.export` with a hard-coded ref to
initialize the default branch,
- t5560 because it sources `t/t556x_common` which uses `master`,
- t8002 and t8012 because both source `t/annotate-tests.sh` which also
uses `master`.
This trick was performed by this command:
$ sed -i '/^ *\. \.\/\(test-lib\|lib-\(bash\|cvs\|git-svn\)\|gitweb-lib\)\.sh$/i\
GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=master\
export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME\
' $(git grep -l master t/t[0-9]*.sh) \
t/t4211*.sh t/t5560*.sh t/t8002*.sh t/t8012*.sh
After that, careful, manual inspection revealed that some of the test
scripts containing the needle `master` do not actually rely on a
specific default branch name: either they mention `master` only in a
comment, or they initialize that branch specifically, or they do not
actually refer to the current default branch. Therefore, the
aforementioned modification was undone in those test scripts thusly:
$ git checkout HEAD -- \
t/t0027-auto-crlf.sh t/t0060-path-utils.sh \
t/t1011-read-tree-sparse-checkout.sh \
t/t1305-config-include.sh t/t1309-early-config.sh \
t/t1402-check-ref-format.sh t/t1450-fsck.sh \
t/t2024-checkout-dwim.sh \
t/t2106-update-index-assume-unchanged.sh \
t/t3040-subprojects-basic.sh t/t3301-notes.sh \
t/t3308-notes-merge.sh t/t3423-rebase-reword.sh \
t/t3436-rebase-more-options.sh \
t/t4015-diff-whitespace.sh t/t4257-am-interactive.sh \
t/t5323-pack-redundant.sh t/t5401-update-hooks.sh \
t/t5511-refspec.sh t/t5526-fetch-submodules.sh \
t/t5529-push-errors.sh t/t5530-upload-pack-error.sh \
t/t5548-push-porcelain.sh \
t/t5552-skipping-fetch-negotiator.sh \
t/t5572-pull-submodule.sh t/t5608-clone-2gb.sh \
t/t5614-clone-submodules-shallow.sh \
t/t7508-status.sh t/t7606-merge-custom.sh \
t/t9302-fast-import-unpack-limit.sh
We excluded one set of test scripts in these commands, though: the range
of `git p4` tests. The reason? `git p4` stores the (foreign) remote
branch in the branch called `p4/master`, which is obviously not the
default branch. Manual analysis revealed that only five of these tests
actually require a specific default branch name to pass; they were
modified thusly:
$ sed -i '/^ *\. \.\/lib-git-p4\.sh$/i\
GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME=master\
export GIT_TEST_DEFAULT_INITIAL_BRANCH_NAME\
' t/t980[0167]*.sh t/t9811*.sh
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The lazy fetching done internally to make missing objects available
in a partial clone incorrectly caused permanent damage to the partial
clone filter in the repository, which has been corrected.
* jt/keep-partial-clone-filter-upon-lazy-fetch:
fetch: do not override partial clone filter
promisor-remote: remove unused variable
When a fetch with the --filter argument is made, the configured default
filter is set even if one already exists. This change was made in
5e46139376 ("builtin/fetch: remove unique promisor remote limitation",
2019-06-25) - in particular, changing from:
    * If this is the FIRST partial-fetch request, we enable partial
    * on this repo and remember the given filter-spec as the default
    * for subsequent fetches to this remote.
to:
    * If this is a partial-fetch request, we enable partial on
    * this repo if not already enabled and remember the given
    * filter-spec as the default for subsequent fetches to this
    * remote.
(The given filter-spec is "remembered" even if there is already an
existing one.)
This is problematic whenever a lazy fetch is made, because lazy fetches
are made using "git fetch --filter=blob:none", but this will also happen
if the user invokes "git fetch --filter=<filter>" manually. Therefore,
restore the behavior prior to 5e46139376, which writes a filter-spec
only if the current fetch request is the first partial-fetch one (for
that remote).
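Concretely, a repository whose first partial fetch used, say, a
tree-less filter should keep that filter configured (the filter value
is just an example):
    [remote "origin"]
        promisor = true
        partialclonefilter = tree:0
    # a later "git fetch --filter=blob:none" (e.g. an internal lazy
    # fetch) must no longer overwrite partialclonefilter above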
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If a user is cloning a SHA-1 repository with GIT_DEFAULT_HASH set to
"sha256", then we can end up with a repository where the repository
format version is 0 but the extensions.objectformat key is set to
"sha256". This is both wrong (the user has a SHA-1 repository) and
nonfunctional (because the extension cannot be used in a v0 repository).
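The broken combination described above looks like this in the
resulting .git/config:
    [core]
        repositoryformatversion = 0
    [extensions]
        objectformat = sha256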
This happens because in a clone, we initially set up the repository, and
then change its algorithm based on what the remote side tells us it's
using. We've initially set up the repository as SHA-256 in this case,
and then later on reset the repository version without clearing the
extension.
We could just always set the extension in this case, but that would mean
that our SHA-1 repositories weren't compatible with older Git versions,
even though there's no reason why they shouldn't be. And we also don't
want to initialize the repository as SHA-1 initially, since that means
if we're cloning an empty repository, we'll have failed to honor the
GIT_DEFAULT_HASH variable and will end up with a SHA-1 repository, not a
SHA-256 repository.
Neither of those are appealing, so let's tell the repository
initialization code if we're doing a reinit like this, and if so, to
clear the extension if we're using SHA-1. This makes sure we produce a
valid and functional repository and doesn't break any of our other use
cases.
Reported-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach Git to lazy-fetch missing objects in a subprocess instead of doing
it in-process. This allows any fatal errors that occur during the fetch
to be isolated and converted into an error return value, instead of
causing the current command being run to terminate.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When using git clone with --separate-git-dir realgitdir and
realgitdir already exists, its contents are destroyed.
So, make sure we don't clone into an existing non-empty directory.
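Roughly the scenario being guarded against (URL and file names are
illustrative):
    mkdir realgitdir
    echo precious >realgitdir/keep-me
    git clone --separate-git-dir=realgitdir \
        https://example.com/repo.git repo
    # must notice that realgitdir is not empty instead of
    # destroying keep-me as before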
When d45420c1 (clone: do not clean up directories we didn't create,
2018-01-02) tightened the clean-up procedure after a failed cloning
into an empty directory, it assumed that the existing directory
given is an empty one so it is OK to keep that directory, while
running the clean-up procedure that is designed to remove everything
in it (since there won't be any, anyway). Check and make sure that
the $GIT_DIR is empty even when cloning into an existing directory.
Signed-off-by: Ben Wijen <ben@wijen.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This test performs a clone from outside any repository. Consequently,
the hash algorithm used defaults to SHA-1. When the test is running with
SHA-256, this results in an object ID that is not usable by the rest of
the test. In order to ensure that we provide a usable value, switch into
the source repository before hashing.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Teach the lazy clone machinery that there can be more than one
promisor remote and consult them in order when downloading missing
objects on demand.
* cc/multi-promisor:
Move core_partial_clone_filter_default to promisor-remote.c
Move repository_format_partial_clone to promisor-remote.c
Remove fetch-object.{c,h} in favor of promisor-remote.{c,h}
remote: add promisor and partial clone config to the doc
partial-clone: add multiple remotes in the doc
t0410: test fetching from many promisor remotes
builtin/fetch: remove unique promisor remote limitation
promisor-remote: parse remote.*.partialclonefilter
Use promisor_remote_get_direct() and has_promisor_remote()
promisor-remote: use repository_format_partial_clone
promisor-remote: add promisor_remote_reinit()
promisor-remote: implement promisor_remote_get_direct()
Add initial support for many promisor remotes
fetch-object: make functions return an error code
t0410: remove pipes after git commands
In many test scripts, there are bespoke definitions of the single quote
that are some variation of this:
    SQ="'"
Define a common $SQ variable in test-lib.sh and replace all usages of
these bespoke variables with the common one.
This change was done by running `git grep =\"\'\" t/` and
`git grep =\\\\\'` and manually changing the resulting definitions and
corresponding usages.
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We have a couple of test scripts that are not completely
httpd-specific, but do run a few httpd-specific tests at the end.
These test scripts source 'lib-httpd.sh' somewhere mid-script, which
then skips all the rest of the test script if the dependencies for
running httpd tests are not fulfilled.
As the previous two patches in this series show, already on two
occasions non-httpd-specific tests were appended at the end of such
test scripts, and, consequently, they were skipped as well when httpd
tests couldn't be run.
Add a comment at the end of these test scripts to warn against adding
non-httpd-specific tests at the end, in the hope that they will help
prevent similar issues in the future.
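The added comment is, in essence, a warning of this shape (the exact
wording is a sketch):
    # DO NOT add non-httpd-specific tests here, because the last part of
    # this test script is only executed when httpd is available and enabled.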
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This makes it possible to specify a different partial clone
filter for each promisor remote.
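For example (remote names and filters are illustrative):
    [remote "origin"]
        promisor = true
        partialclonefilter = blob:none
    [remote "archive"]
        promisor = true
        partialclonefilter = blob:limit=1m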
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Instead of using the repository_format_partial_clone global
and fetch_objects() directly, let's use has_promisor_remote()
and promisor_remote_get_direct().
This way all the configured promisor remotes will be taken
into account, not only the one specified by
extensions.partialClone.
Also when cloning or fetching using a partial clone filter,
remote.origin.promisor will be set to "true" instead of
setting extensions.partialClone to "origin". This makes it
possible to use many promisor remotes just by fetching from
them.
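In configuration terms, the repository now records the promisor on the
remote itself rather than in the extension:
    # previously
    [extensions]
        partialClone = origin
    # now
    [remote "origin"]
        promisor = true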
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Code tightening against a "wrong" object appearing where an object
of a different type is expected, instead of blindly assuming that
the connections between objects are correctly made.
* tb/unexpected:
rev-list: detect broken root trees
rev-list: let traversal die when --missing is not in use
get_commit_tree(): return NULL for broken tree
list-objects.c: handle unexpected non-tree entries
list-objects.c: handle unexpected non-blob entries
t: introduce tests for unexpected object types
t: move 'hex2oct' into test-lib-functions.sh
Test framework update to more robustly clean up leftover files and
processes after tests are done.
* sg/test-atexit:
t9811-git-p4-label-import: fix pipeline negation
git p4 test: disable '-x' tracing in the p4d watchdog loop
git p4 test: simplify timeout handling
git p4 test: clean up the p4d cleanup functions
git p4 test: use 'test_atexit' to kill p4d and the watchdog process
t0301-credential-cache: use 'test_atexit' to stop the credentials helper
tests: use 'test_atexit' to stop httpd
git-daemon: use 'test_atexit` to stop 'git-daemon'
test-lib: introduce 'test_atexit'
t/lib-git-daemon: make sure to kill the 'git-daemon' process
test-lib: fix interrupt handling with 'dash' and '--verbose-log -x'
The helper 'hex2oct' is used to convert base-16 (hexadecimal) encoded
data into a base-8 (octal) escaped form, which is useful for preparing
data for commands that accept input in a binary format, such as
'git hash-object', via 'printf'.
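The helper itself is a small perl one-liner, roughly:
    hex2oct () {
        perl -ne 'printf "\\%03o", hex for /../g'
    }
    # e.g. "printf 61 | hex2oct" emits "\141", which printf in turn
    # renders as the single byte 0x61 ("a")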
This helper is defined identically in three separate places throughout
't'. Move the definition to test-lib-functions.sh, so that it can be used
in new test suites, and its definition is not redundant.
This will likewise make our job easier in the subsequent commit, which
also uses 'hex2oct'.
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Use 'test_atexit' to run cleanup commands to stop httpd at the end of
the test script or upon interrupt or failure, as it is shorter,
simpler, and more robust than registering such cleanup commands in the
trap on EXIT in the test scripts.
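In other words, the per-script EXIT trap boils down to a single
registration (the old form shown is an example):
    # previously, each script wired up its own trap, e.g.:
    #   trap 'code=$?; stop_httpd; exit $code' EXIT
    # now the framework takes care of it:
    test_atexit stop_httpd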
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When running the SSH command as part of a fetch, Git will write "SendEnv
GIT_PROTOCOL" as an option if protocol v1 or v2 is used, but not v0.
Update all tests that check this to run Git with
GIT_TEST_PROTOCOL_VERSION=0.
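A typical adjusted invocation looks like this (URL illustrative):
    # with protocol v0 forced, no "SendEnv GIT_PROTOCOL" option is
    # expected in the recorded ssh command line
    GIT_TEST_PROTOCOL_VERSION=0 \
        git clone "ssh://example.com/repo" v0-clone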
I chose not to do a more thorough fix (for example, checking the value of
GIT_TEST_PROTOCOL_VERSION to see if the SendEnv check needs to be done)
because a set of patches [1] that unifies the handling of SSH options,
including writing "SendEnv GIT_PROTOCOL" regardless of protocol version,
is in progress. When that is done, this patch should be reverted, since
the functionality in here is no longer needed.
As of this patch, all tests pass if GIT_TEST_PROTOCOL_VERSION is set to
1.
[1] https://public-inbox.org/git/cover.1545342797.git.steadmon@google.com/
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A regression for cygwin users was introduced with commit 05b458c,
"real_path: resolve symlinks by hand".
In the commit message we read:
    The current implementation of real_path uses chdir() in order to resolve
    symlinks. Unfortunately this isn't thread-safe as chdir() affects a
    process as a whole...
The old (and non-thread-safe) OS calls chdir()/pwd() had been
replaced by a string operation.
The cygwin layer "knows" that "C:\cygwin" is an absolute path,
but the new string operation does not.
"git clone <url> C:\cygwin\home\USER\repo" fails like this:
    fatal: Invalid path '/home/USER/repo/C:\cygwin\home\USER\repo'
The solution is to implement has_dos_drive_prefix(), skip_dos_drive_prefix(),
is_dir_sep(), offset_1st_component() and convert_slashes() for cygwin
in the same way as it is done in 'Git for Windows' in compat/mingw.[ch].
Extract the needed code into compat/win32/path-utils.[ch] and use it
for cygwin as well.
Reported-by: Steven Penny <svnpenn@gmail.com>
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code recently added to "git clone" to see if the platform's
filesystem is adequate to check out and use the project code
correctly (e.g. a case smashing filesystem cannot be used for a
project with two files whose paths are different only in case) was
meant to help Windows users, but the test for it was not enabled
for that platform, which has been corrected.
* tb/clone-case-smashing-warning-test:
t5601-99: Enable colliding file detection for MINGW
Commit b878579ae7 (clone: report duplicate entries on case-insensitive
filesystems - 2018-08-17) adds a warning to the user when cloning a repo
with case-sensitive file names on a case-insensitive file system.
This test has never been enabled for MINGW.
It had been working since day 1, but I forgot to report that to the
author.
Enable it after a re-test.
Signed-off-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Recently added check for case smashing filesystems did not
correctly utilize the cached stat information, leading to false
breakage detected by our test suite, which has been corrected.
* nd/clone-case-smashing-warning:
clone: fix colliding file detection on APFS