Fix sparse warnings
Fix warnings from 'make check'.
- These files don't include 'builtin.h', causing sparse to complain that
cmd_* isn't declared:
builtin/clone.c:364, builtin/fetch-pack.c:797,
builtin/fmt-merge-msg.c:34, builtin/hash-object.c:78,
builtin/merge-index.c:69, builtin/merge-recursive.c:22
builtin/merge-tree.c:341, builtin/mktag.c:156, builtin/notes.c:426
builtin/notes.c:822, builtin/pack-redundant.c:596,
builtin/pack-refs.c:10, builtin/patch-id.c:60, builtin/patch-id.c:149,
builtin/remote.c:1512, builtin/remote-ext.c:240,
builtin/remote-fd.c:53, builtin/reset.c:236, builtin/send-pack.c:384,
builtin/unpack-file.c:25, builtin/var.c:75
- These files have symbols which should be marked static since they're
only used at file scope:
submodule.c:12, diff.c:631, replace_object.c:92, submodule.c:13,
submodule.c:14, trace.c:78, transport.c:195, transport-helper.c:79,
unpack-trees.c:19, url.c:3, url.c:18, url.c:104, url.c:117, url.c:123,
url.c:129, url.c:136, thread-utils.c:21, thread-utils.c:48
- These files redeclare symbols to be different types:
builtin/index-pack.c:210, parse-options.c:564, parse-options.c:571,
usage.c:49, usage.c:58, usage.c:63, usage.c:72
- These files use a literal integer 0 when they really should use a NULL
pointer:
daemon.c:663, fast-import.c:2942, imap-send.c:1072, notes-merge.c:362
While we're in the area, clean up some unused #includes in builtin files
(mostly exec_cmd.h).
Signed-off-by: Stephen Boyd <bebarino@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-22 07:51:05 +00:00
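
For readers unfamiliar with sparse, the fixes fall into a few mechanical
patterns; a minimal sketch (names here are made up, not the actual diff):

        /* give file-scope symbols internal linkage */
        static int helper_fn(void);     /* was: int helper_fn(void); */

        /* include the header that declares cmd_*, so each definition is
         * checked against its prototype */
        #include "builtin.h"

        /* use NULL, not a literal 0, in pointer context */
        const char *path = NULL;        /* was: const char *path = 0; */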

#include "builtin.h"
#include "gettext.h"
#include "hex.h"
#include "object-file.h"
#include "pkt-line.h"
#include "fetch-pack.h"
#include "remote.h"
#include "connect.h"
#include "oid-array.h"
#include "protocol.h"

static const char fetch_pack_usage[] =
"git fetch-pack [--all] [--stdin] [--quiet | -q] [--keep | -k] [--thin] "
"[--include-tag] [--upload-pack=<git-upload-pack>] [--depth=<n>] "
"[--no-progress] [--diag-url] [-v] [<host>:]<directory> [<refs>...]";

static void add_sought_entry(struct ref ***sought, int *nr, int *alloc,
                             const char *name)
{
        struct ref *ref;
        struct object_id oid;
        const char *p;

        if (!parse_oid_hex(name, &oid, &p)) {
                if (*p == ' ') {
                        /* <oid> <ref>, find refname */
                        name = p + 1;
                } else if (*p == '\0') {
                        ; /* <oid>, leave oid as name */
                } else {
                        /* <ref>, clear cruft from oid */
                        oidclr(&oid);
                }
        } else {
                /* <ref>, clear cruft from get_oid_hex */
                oidclr(&oid);
        }

        ref = alloc_ref(name);
        oidcpy(&ref->old_oid, &oid);
        (*nr)++;
        ALLOC_GROW(*sought, *nr, *alloc);
        (*sought)[*nr - 1] = ref;
}
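
add_sought_entry() relies on the ALLOC_GROW idiom for growable arrays. As a
rough sketch of the macro's behavior (the real definition lives in the shared
headers):

        /* conceptually, ALLOC_GROW(x, nr, alloc) does: */
        if (nr > alloc) {
                alloc = (alloc + 16) * 3 / 2;   /* alloc_nr() growth policy */
                if (alloc < nr)
                        alloc = nr;
                REALLOC_ARRAY(x, alloc);
        }
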
builtins: mark unused prefix parameters
All builtins receive a "prefix" parameter, but it is only useful if they
need to adjust filenames given by the user on the command line. For
builtins that do not even call parse_options(), they often don't look at
the prefix at all, and -Wunused-parameter complains.
Let's annotate those to silence the compiler warning. I gave a quick
scan of each of these cases, and it seems like they don't have anything
they _should_ be using the prefix for (i.e., there is no hidden bug that
we are missing). The only questionable cases I saw were:
- in git-unpack-file, we create a tempfile which will always be at the
root of the repository, even if the command is run from a subdir.
Arguably this should be created in the subdir from which we're run
(as we report the path only as a relative name). However, nobody has
complained, and I'm hesitant to change something that is deep
plumbing going back to April 2005 (though I think within our
scripts, the sole caller in git-merge-one-file would be OK, as it
moves to the toplevel itself).
- in fetch-pack, local-filesystem remotes are taken as relative to the
project root, not the current directory. So:
git init server.git
[...put stuff in server.git...]
git init client.git
cd client.git
mkdir subdir
cd subdir
git fetch-pack ../../server.git ...
won't work, as we quietly move to the top of the repository before
interpreting the path (so "../server.git" would work). This is
weird, but again, nobody has complained and this is how it has
always worked. And this is how "git fetch" works, too. Plus it
raises questions about how a configured remote like:
git config remote.origin.url ../server.git
should behave. I can certainly come up with a reasonable set of
behavior, but it may not be worth stirring up complications in a
plumbing tool.
So I've left the behavior untouched in both of those cases. If anybody
really wants to revisit them, it's easy enough to drop the UNUSED
marker. This commit is just about removing them as obstacles to turning
on -Wunused-parameter all the time.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-03-28 20:56:55 +00:00
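
UNUSED comes from git-compat-util.h; a sketch of the idea (the real definition
differs in detail):

        #ifdef __GNUC__
        #define UNUSED __attribute__((unused))
        #else
        #define UNUSED
        #endif

With the attribute in place, -Wunused-parameter stays quiet while the
parameter remains part of the builtins' common signature.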
int cmd_fetch_pack(int argc, const char **argv, const char *prefix UNUSED)
{
        int i, ret;
        struct ref *ref = NULL;
        const char *dest = NULL;
        struct ref **sought = NULL;
        int nr_sought = 0, alloc_sought = 0;
        int fd[2];

fetch-pack: support more than one pack lockfile
Whenever a fetch results in a packfile being downloaded, a .keep file is
generated, so that the packfile can be preserved (from, say, a running
"git repack") until refs are written referring to the contents of the
packfile.
In a subsequent patch, a successful fetch using protocol v2 may result
in more than one .keep file being generated. Therefore, teach
fetch_pack() and the transport mechanism to support multiple .keep
files.
Implementation notes:
- builtin/fetch-pack.c normally does not generate .keep files, and thus
is unaffected by this or future changes. However, it has an
undocumented "--lock-pack" feature, used by remote-curl.c when
implementing the "fetch" remote helper command. In keeping with the
remote helper protocol, only one "lock" line will ever be written;
the rest will result in warnings to stderr. However, in practice,
warnings will never be written because the remote-curl.c "fetch" is
only used for protocol v0/v1 (which will not generate multiple .keep
files). (Protocol v2 uses the "stateless-connect" command, not the
"fetch" command.)
- connected.c has an optimization in that connectivity checks on a ref
need not be done if the target object is in a pack known to be
self-contained and connected. If there are multiple packfiles, this
optimization can no longer be done.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-06-10 20:57:22 +00:00
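
For context, a .keep file sits next to the packfile in objects/pack/ and pins
it against "git repack"/"git gc" until it is removed. A minimal sketch of the
lifecycle (paths abbreviated; write_file() is Git's real helper, the rest is
illustrative):

        /* pin the freshly indexed pack until refs point into it */
        write_file("objects/pack/pack-<hash>.keep", "fetch-pack %"PRIuMAX"\n",
                   (uintmax_t)getpid());
        /* ... update refs to the fetched objects ... */
        unlink("objects/pack/pack-<hash>.keep");        /* unpin */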
        struct string_list pack_lockfiles = STRING_LIST_INIT_DUP;
        struct string_list *pack_lockfiles_ptr = NULL;
        struct child_process *conn;
        struct fetch_pack_args args;
        struct oid_array shallow = OID_ARRAY_INIT;
        struct string_list deepen_not = STRING_LIST_INIT_DUP;
        struct packet_reader reader;
        enum protocol_version version;

        fetch_if_missing = 0;

        packet_trace_identity("fetch-pack");

        memset(&args, 0, sizeof(args));

list-objects-filter: add and use initializers
In 7e2619d8ff (list_objects_filter_options: plug leak of filter_spec
strings, 2022-09-08), we noted that the filter_spec string_list was
inconsistent in how it handled memory ownership of strings stored in the
list. The fix there was a bit of a band-aid to set the "strdup_strings"
variable right before adding anything.
That works OK, and it lets the users of the API continue to
zero-initialize the struct. But it makes the code a bit hard to follow
and accident-prone, as any other spots appending the filter_spec need to
think about whether to set the strdup_strings value, too (there's one
such spot in partial_clone_get_default_filter_spec(), which is likely
a memory leak).
So let's do that full cleanup now. We'll introduce a
LIST_OBJECTS_FILTER_INIT macro and matching function, and use them as
appropriate (though it is for the "_options" struct, this matches the
corresponding list_objects_filter_release() function).
This is harder than it seems! Many other structs, like
git_transport_data, embed the filter struct. So they need to initialize
it themselves even if the rest of the enclosing struct is OK with
zero-initialization. I found all of the relevant spots by grepping
manually for declarations of list_objects_filter_options. And then doing
so recursively for structs which embed it, and ones which embed those,
and so on.
I'm pretty sure I got everything, but there's no change that would alert
the compiler if any topics in flight added new declarations. To catch
this case, we now double-check in the parsing function that things were
initialized as expected and BUG() if appropriate.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-09-11 05:03:07 +00:00
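
The macro-plus-function pairing described above follows a common pattern in
the codebase; a sketch of its shape (field names assumed from the message, the
real definitions are in list-objects-filter-options.[ch]):

        #define LIST_OBJECTS_FILTER_INIT { .filter_spec = STRING_LIST_INIT_DUP }

        void list_objects_filter_init(struct list_objects_filter_options *opts)
        {
                struct list_objects_filter_options blank = LIST_OBJECTS_FILTER_INIT;
                memcpy(opts, &blank, sizeof(*opts));
        }

Embedding structs get the macro; heap-allocated or reused instances get the
function.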
        list_objects_filter_init(&args.filter_options);
        args.uploadpack = "git-upload-pack";

        for (i = 1; i < argc && *argv[i] == '-'; i++) {
                const char *arg = argv[i];

                if (skip_prefix(arg, "--upload-pack=", &arg)) {
                        args.uploadpack = arg;
                        continue;
                }
                if (skip_prefix(arg, "--exec=", &arg)) {
                        args.uploadpack = arg;
                        continue;
                }
                if (!strcmp("--quiet", arg) || !strcmp("-q", arg)) {
                        args.quiet = 1;
                        continue;
                }
                if (!strcmp("--keep", arg) || !strcmp("-k", arg)) {
                        args.lock_pack = args.keep_pack;
                        args.keep_pack = 1;
                        continue;
                }
                if (!strcmp("--thin", arg)) {
                        args.use_thin_pack = 1;
                        continue;
                }
                if (!strcmp("--include-tag", arg)) {
                        args.include_tag = 1;
                        continue;
                }
                if (!strcmp("--all", arg)) {
                        args.fetch_all = 1;
                        continue;
                }
                if (!strcmp("--stdin", arg)) {
                        args.stdin_refs = 1;
                        continue;
                }
                if (!strcmp("--diag-url", arg)) {
                        args.diag_url = 1;
                        continue;
                }
                if (!strcmp("-v", arg)) {
                        args.verbose = 1;
                        continue;
                }
                if (skip_prefix(arg, "--depth=", &arg)) {
                        args.depth = strtol(arg, NULL, 0);
                        continue;
                }
                if (skip_prefix(arg, "--shallow-since=", &arg)) {
                        args.deepen_since = xstrdup(arg);
                        continue;
                }
                if (skip_prefix(arg, "--shallow-exclude=", &arg)) {
                        string_list_append(&deepen_not, arg);
                        continue;
                }

fetch, upload-pack: --deepen=N extends shallow boundary by N commits
In git-fetch, the --depth argument is always relative to the latest
remote refs. This makes it a bit difficult to cover this use case,
where the user wants to make the shallow history, say 3 levels
deeper. It would work if remote refs have not moved yet, but nobody
can guarantee that, especially when that use case is performed a
couple months after the last clone or "git fetch --depth". Also,
modifying shallow boundary using --depth does not work well with
clones created by --since or --not.
This patch fixes that. A new argument --deepen=<N> will add <N> more (*)
parent commits to the current history regardless of where remote refs
are.
Have/Want negotiation is still respected. So if remote refs move, the
server will send two chunks: one between "have" and "want" and another
to extend shallow history. In theory, the client could send no "want"s
in order to get the second chunk only. But the protocol does not allow
that. Either you send no want lines, which means ls-remote; or you
have to send at least one want line that carries deep-relative to the
server.
The main work was done by Dongcan Jiang. I fixed it up here and there.
And of course all the bugs belong to me.
(*) We could even support --deepen=<N> where <N> is negative. In that
case we can cut some history from the shallow clone. This operation
(and --depth=<shorter depth>) does not require interaction with remote
side (and more complicated to implement as a result).
Helped-by: Duy Nguyen <pclouds@gmail.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-12 10:54:09 +00:00
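
On the wire (protocol v0/v1), the client-side fields set below become
shallow-negotiation request lines. A hedged sketch of the mapping, using the
real pkt-line writer but not the actual call sites:

        if (args.depth > 0)
                packet_write_fmt(fd[1], "deepen %d", args.depth);
        if (args.deepen_since)
                packet_write_fmt(fd[1], "deepen-since %s", args.deepen_since);
        /* each --shallow-exclude ref becomes a "deepen-not <ref>" line, and
         * --deepen-relative is signalled via the deepen-relative capability */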
                if (!strcmp(arg, "--deepen-relative")) {
                        args.deepen_relative = 1;
                        continue;
                }
                if (!strcmp("--no-progress", arg)) {
                        args.no_progress = 1;
                        continue;
                }
                if (!strcmp("--stateless-rpc", arg)) {
                        args.stateless_rpc = 1;
                        continue;
                }
                if (!strcmp("--lock-pack", arg)) {
                        args.lock_pack = 1;
                        pack_lockfiles_ptr = &pack_lockfiles;
                        continue;
                }
                if (!strcmp("--check-self-contained-and-connected", arg)) {
                        args.check_self_contained_and_connected = 1;
                        continue;
                }
                if (!strcmp("--cloning", arg)) {
                        args.cloning = 1;
                        continue;
                }
                if (!strcmp("--update-shallow", arg)) {
                        args.update_shallow = 1;
                        continue;
                }

introduce fetch-object: fetch one promisor object
Introduce fetch-object, providing the ability to fetch one object from a
promisor remote.
This uses fetch-pack. To do this, the transport mechanism has been
updated with 2 flags, "from-promisor" to indicate that the resulting
pack comes from a promisor remote (and thus should be annotated as such
by index-pack), and "no-dependents" to indicate that only the objects
themselves need to be fetched (but fetching additional objects is
nevertheless safe).
Whenever "no-dependents" is used, fetch-pack will refrain from using any
object flags, because it is most likely invoked as part of a dynamic
object fetch by another Git command (which may itself use object flags).
An alternative to this is to leave fetch-pack alone, and instead update
the allocation of flags so that fetch-pack's flags never overlap with
any others, but this will end up shrinking the number of flags available
to nearly every other Git command (that is, every Git command that
accesses objects), so the approach in this commit was used instead.
This will be tested in a subsequent commit.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-05 16:58:49 +00:00
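
Downstream, the flag mostly changes how the resulting pack is recorded; a
sketch of the assumed plumbing (index-pack's --promisor option is real, the
surrounding code is illustrative):

        /* when spawning index-pack for a pack from a promisor remote */
        if (args.from_promisor)
                strvec_push(&cmd.args, "--promisor"); /* writes a .promisor file */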
                if (!strcmp("--from-promisor", arg)) {
                        args.from_promisor = 1;
                        continue;
                }
                if (!strcmp("--refetch", arg)) {
                        args.refetch = 1;
                        continue;
                }
                if (skip_prefix(arg, "--filter=", &arg)) {
                        parse_list_objects_filter(&args.filter_options, arg);
                        continue;
                }
                if (!strcmp(arg, "--no-filter")) {
                        list_objects_filter_set_no_filter(&args.filter_options);
                        continue;
                }
                usage(fetch_pack_usage);
        }

        if (deepen_not.nr)
                args.deepen_not = &deepen_not;

        if (i < argc)
                dest = argv[i++];
        else
                usage(fetch_pack_usage);

        /*
         * Copy refs from cmdline to growable list, then append any
         * refs from the standard input:
         */
        for (; i < argc; i++)
                add_sought_entry(&sought, &nr_sought, &alloc_sought, argv[i]);

fetch-pack: new --stdin option to read refs from stdin
If a remote repo has too many tags (or branches), cloning it over the
smart HTTP transport can fail because remote-curl.c puts all the refs
from the remote repo on the fetch-pack command line. This can make the
command line longer than the global OS command line limit, causing
fetch-pack to fail.
This is especially a problem on Windows where the command line limit is
orders of magnitude shorter than Linux. There are already real repos out
there that msysGit cannot clone over smart HTTP due to this problem.
Here is an easy way to trigger this problem:
git init too-many-refs
cd too-many-refs
echo bla > bla.txt
git add .
git commit -m test
sha=$(git rev-parse HEAD)
tag=$(perl -e 'print "bla" x 30')
for i in `seq 50000`; do
echo $sha refs/tags/$tag-$i >> .git/packed-refs
done
Then share this repo over the smart HTTP protocol and try cloning it:
$ git clone http://localhost/.../too-many-refs/.git
Cloning into 'too-many-refs'...
fatal: cannot exec 'fetch-pack': Argument list too long
50k tags is obviously an absurd number, but it is required to
demonstrate the problem on Linux because it has a much more generous
command line limit. On Windows the clone fails with as little as 500
tags in the above loop, which is getting uncomfortably close to the
number of tags you might see in real long lived repos.
This is not just theoretical, msysGit is already failing to clone our
company repo due to this. It's a large repo converted from CVS, nearly
10 years of history.
Four possible solutions were discussed on the Git mailing list (in no
particular order):
1) Call fetch-pack multiple times with smaller batches of refs.
This was dismissed as inefficient and inelegant.
2) Add option --refs-fd=$n to pass an fd from which to read the refs.
This was rejected because inheriting descriptors other than
stdin/stdout/stderr through exec() is apparently problematic on Windows,
plus it would require changes to the run-command API to open extra
pipes.
3) Add option --refs-from=$tmpfile to pass the refs using a temp file.
This was not favored because of the temp file requirement.
4) Add option --stdin to pass the refs on stdin, one per line.
In the end this option was chosen as the most efficient and most
desirable from a scripting perspective.
There was however a small complication when using stdin to pass refs to
fetch-pack. The --stateless-rpc option to fetch-pack also uses stdin for
communication with the remote server.
If we are going to sneak refs on stdin line by line, it would have to be
done very carefully in the presence of --stateless-rpc, because when
reading refs line by line we might read ahead too much data into our
buffer and eat some of the remote protocol data which is also coming on
stdin.
One way to solve this would be to refactor get_remote_heads() in
fetch-pack.c to accept a residual buffer from our stdin line parsing
above, but this function is used in several places so other callers
would be burdened by this residual buffer interface even when most of
them don't need it.
In the end we settled on the following solution:
If --stdin is specified without --stateless-rpc, fetch-pack would read
the refs from stdin one per line, in a script friendly format.
However if --stdin is specified together with --stateless-rpc,
fetch-pack would read the refs from stdin in packetized format
(pkt-line) with a flush packet terminating the list of refs. This way we
can read the exact number of bytes that we need from stdin, and then
get_remote_heads() can continue reading from the same fd without losing
a single byte of remote protocol data.
This way the --stdin option only loses generality and scriptability when
used together with --stateless-rpc, which is not easily scriptable
anyway because it also uses pkt-line when talking to the remote server.
Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-02 15:13:48 +00:00
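
From the other side of the pipe, a driver such as remote-curl can then feed
the refs with the real pkt-line helpers; a sketch:

        /* one pkt-line per ref, terminated by a flush packet */
        for (i = 0; i < nr_refs; i++)
                packet_write_fmt(1, "%s %s\n", oid_to_hex(&oid[i]), refname[i]);
        packet_flush(1);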

        if (args.stdin_refs) {
                if (args.stateless_rpc) {
                        /* in stateless RPC mode we use pkt-line to read
                         * from stdin, until we get a flush packet
                         */
                        for (;;) {
pkt-line: provide a LARGE_PACKET_MAX static buffer
Most of the callers of packet_read_line just read into a
static 1000-byte buffer (callers which handle arbitrary
binary data already use LARGE_PACKET_MAX). This works fine
in practice, because:
1. The only variable-sized data in these lines is a ref
name, and refs tend to be a lot shorter than 1000
characters.
2. When sending ref lines, git-core always limits itself
to 1000 byte packets.
However, the only limit given in the protocol specification
in Documentation/technical/protocol-common.txt is
LARGE_PACKET_MAX; the 1000 byte limit is mentioned only in
pack-protocol.txt, and then only describing what we write,
not as a specific limit for readers.
This patch lets us bump the 1000-byte limit to
LARGE_PACKET_MAX. Even though git-core will never write a
packet where this makes a difference, there are two good
reasons to do this:
1. Other git implementations may have followed
protocol-common.txt and used a larger maximum size. We
don't bump into it in practice because it would involve
very long ref names.
2. We may want to increase the 1000-byte limit one day.
Since packets are transferred before any capabilities,
it's difficult to do this in a backwards-compatible
way. But if we bump the size of the buffer the readers can
handle, eventually older versions of git will be
obsolete enough that we can justify bumping the
writers, as well. We don't have plans to do this
anytime soon, but there is no reason not to start the
clock ticking now.
Just bumping all of the reading bufs to LARGE_PACKET_MAX
would waste memory. Instead, since most readers just read
into a temporary buffer anyway, let's provide a single
static buffer that all callers can use. We can further wrap
this detail away by having the packet_read_line wrapper just
use the buffer transparently and return a pointer to the
static storage. That covers most of the cases, and the
remaining ones already read into their own LARGE_PACKET_MAX
buffers.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 20:02:57 +00:00
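
One consequence worth noting: the pointer returned by packet_read_line()
points into that shared static buffer, so it is only valid until the next
call; a sketch:

        char *line = packet_read_line(0, NULL);
        char *kept = xstrdup(line); /* copy if it must survive the next read */

The loop below is safe as-is because add_sought_entry() copies the name into
a freshly allocated ref.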
                                char *line = packet_read_line(0, NULL);
                                if (!line)
                                        break;
                                add_sought_entry(&sought, &nr_sought, &alloc_sought, line);
                        }
                }
                else {
                        /* read from stdin one ref per line, until EOF */
                        struct strbuf line = STRBUF_INIT;

                        while (strbuf_getline_lf(&line, stdin) != EOF)
                                add_sought_entry(&sought, &nr_sought, &alloc_sought, line.buf);
                        strbuf_release(&line);
                }
        }

        if (args.stateless_rpc) {
                conn = NULL;
                fd[0] = 0;
                fd[1] = 1;
        } else {
                int flags = args.verbose ? CONNECT_VERBOSE : 0;
                if (args.diag_url)
                        flags |= CONNECT_DIAG_URL;

git_connect(): fix corner cases in downgrading v2 to v0
There's code in git_connect() that checks whether we are doing a push
with protocol_v2, and if so, drops us to protocol_v0 (since we know
how to do v2 only for fetches). But it misses some corner cases:
1. it checks the "prog" variable, which is actually the path to
receive-pack on the remote side. By default this is just
"git-receive-pack", but it could be an arbitrary string (like
"/path/to/git receive-pack", etc). We'd accidentally stay in v2
mode in this case.
2. besides "receive-pack" and "upload-pack", there's one other value
we'd expect: "upload-archive" for handling "git archive --remote".
Like receive-pack, this doesn't understand v2, and should use the
v0 protocol.
In practice, neither of these causes bugs in the real world so far. We
do send a "we understand v2" probe to the server, but since no server
implements v2 for anything but upload-pack, it's simply ignored. But
this would eventually become a problem if we do implement v2 for those
endpoints, as older clients would falsely claim to understand it,
leading to a server response they can't parse.
We can fix (1) by passing in both the program path and the "name" of the
operation. I treat the name as a string here, because that's the pattern
set in transport_connect(), which is one of our callers (we were simply
throwing away the "name" value there before).
We can fix (2) by allowing only known-v2 protocols ("upload-pack"),
rather than blocking unknown ones ("receive-pack" and "upload-archive").
That will mean whoever eventually implements v2 push will have to adjust
this list, but that's reasonable. We'll do the safe, conservative thing
(sticking to v0) by default, and anybody working on v2 will quickly
realize this spot needs to be updated.
The new tests cover the receive-pack and upload-archive cases above, and
re-confirm that we allow v2 with an arbitrary "--upload-pack" path (that
already worked before this patch, of course, but it would be an easy
thing to break if we flipped the allow/block logic without also handling
"name" separately).
Here are a few miscellaneous implementation notes, since I had to do a
little head-scratching to understand who calls what:
- transport_connect() is called only for git-upload-archive. For
non-http git remotes, that resolves to the virtual connect_git()
function (which then calls git_connect(); confused yet?). So
plumbing through "name" in connect_git() covers that.
- for regular fetches and pushes, callers use higher-level functions
like transport_fetch_refs(). For non-http git remotes, that means
calling git_connect() under the hood via connect_setup(). And that
uses the "for_push" flag to decide which name to use.
- likewise, plumbing like fetch-pack and send-pack may call
git_connect() directly; they each know which name to use.
- for remote helpers (including http), we already have separate
parameters for "name" and "exec" (another name for "prog"). In
process_connect_service(), we feed the "name" to the helper via
"connect" or "stateless-connect" directives.
There's also a "servpath" option, which can be used to tell the
helper about the "exec" path. But no helpers we implement support
it! For http it would be useless anyway (no reasonable server
implementation will allow you to send a shell command to run the
server). In theory it would be useful for more obscure helpers like
remote-ext, but even there it is not implemented.
It's tempting to get rid of it simply to reduce confusion, but we
have publicly documented it since it was added in fa8c097cc9
(Support remote helpers implementing smart transports, 2009-12-09),
so it's possible some helper in the wild is using it.
- So for v2, helpers (again, including http) are mainly used via
stateless-connect, driven by the main program. But they do still
need to decide whether to do a v2 probe. And so there's similar
logic in remote-curl.c's discover_refs() that looks for
"git-receive-pack". But it's not buggy in the same way. Since it
doesn't support servpath, it is always dealing with a "service"
string like "git-receive-pack". And since it doesn't support
straight "connect", it can't be used for "upload-archive".
So we could leave that spot alone. But I've updated it here to match
the logic we're changing in connect_git(). That seems like the least
confusing thing for somebody who has to touch both of these spots
later (say, to add v2 push support). I didn't add a new test to make
sure this doesn't break anything; we already have several tests (in
t5551 and elsewhere) that make sure we are using v2 over http.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-03-17 19:08:51 +00:00
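
The allow-list the message settles on boils down to a comparison on the
operation name; a sketch of the shape of the check (not the verbatim code
in connect.c):

                /* only upload-pack is known to speak v2; everything else
                 * gets the safe, conservative v0 */
                if (version == protocol_v2 && strcmp("git-upload-pack", name))
                        version = protocol_v0;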
                conn = git_connect(fd, dest, "git-upload-pack",
                                   args.uploadpack, flags);
                if (!conn)
                        return args.diag_url ? 0 : 1;
        }

pack-protocol.txt: accept error packets in any context
In the Git pack protocol definition, an error packet may appear only in
a certain context. However, servers can face a runtime error (e.g. I/O
error) at an arbitrary time. This patch changes the protocol to allow
an error packet to be sent instead of any packet.
Without this protocol spec change, when a server cannot process a
request, there's no way to tell that to a client. Since the server
cannot produce a valid response, it would be forced to cut a connection
without telling why. With this protocol spec change, the server can be
more gentle in this situation. An old client may see these error packets
as an unexpected packet, but this is not worse than having an unexpected
EOF.
Following this protocol spec change, the error packet handling code is
moved to pkt-line.c. Implementation-wise, this implementation uses
pkt-line to communicate with a subprocess. Since this is not a part of
the Git protocol, it's possible that a packet that is not supposed to be
an error packet is mistakenly parsed as one. Considering this, error
packet handling is enabled only for the Git pack protocol parsing code.
Signed-off-by: Masaya Suzuki <masayasuzuki@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-12-29 21:19:15 +00:00

        packet_reader_init(&reader, fd[0], NULL, 0,
                           PACKET_READ_CHOMP_NEWLINE |
                           PACKET_READ_GENTLE_ON_EOF |
                           PACKET_READ_DIE_ON_ERR_PACKET);
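
An error packet is simply a pkt-line whose payload starts with "ERR ";
PACKET_READ_DIE_ON_ERR_PACKET makes the reader die() on one. Without that
flag, a caller could handle it along these lines (a sketch):

        if (packet_reader_read(&reader) == PACKET_READ_NORMAL &&
            starts_with(reader.line, "ERR "))
                die(_("remote error: %s"), reader.line + 4);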

        version = discover_version(&reader);
        switch (version) {
        case protocol_v2:
                get_remote_refs(fd[1], &reader, &ref, 0, NULL, NULL,
                                args.stateless_rpc);
                break;
        case protocol_v1:
        case protocol_v0:
                get_remote_heads(&reader, &ref, 0, NULL, &shallow);
                break;
        case protocol_unknown_version:
                BUG("unknown protocol version");
        }

        ref = fetch_pack(&args, fd, ref, sought, nr_sought,
                         &shallow, pack_lockfiles_ptr, version);
        if (pack_lockfiles.nr) {
                int i;

                printf("lock %s\n", pack_lockfiles.items[0].string);
                fflush(stdout);
                for (i = 1; i < pack_lockfiles.nr; i++)
                        warning(_("Lockfile created but not reported: %s"),
                                pack_lockfiles.items[i].string);
        }
        if (args.check_self_contained_and_connected &&
            args.self_contained_and_connected) {
                printf("connectivity-ok\n");
                fflush(stdout);
        }
        close(fd[0]);
        close(fd[1]);
        if (finish_connect(conn))
                return 1;

        ret = !ref;

        /*
         * If the heads to pull were given, we should have consumed
         * all of them by matching the remote.  Otherwise, 'git fetch
         * remote no-such-ref' would silently succeed without issuing
         * an error.
         */
        ret |= report_unmatched_refs(sought, nr_sought);

        while (ref) {
                printf("%s %s\n",
                       oid_to_hex(&ref->old_oid), ref->name);
                ref = ref->next;
        }

        return ret;
}