Commit graph

273 commits

brian m. carlson
ee3051bd23 sha1-array: convert internal storage for struct sha1_array to object_id
Make the internal storage for struct sha1_array an array of struct
object_id.  Update the users of this struct that inspect its
internals.
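
A sketch of the shape of the change (field layout approximate):

  /* before: bare raw hashes */
  struct sha1_array {
          unsigned char (*sha1)[GIT_SHA1_RAWSZ];
          int nr, alloc, sorted;
  };

  /* after: an array of struct object_id */
  struct sha1_array {
          struct object_id *oid;
          int nr, alloc, sorted;
  };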

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-28 09:59:34 -07:00
Brandon Williams
511155db51 remote-curl: allow push options
Teach remote-curl to understand push options and to be able to convey
them across HTTP.

Signed-off-by: Brandon Williams <bmwill@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-22 15:41:21 -07:00
Junio C Hamano
d984592043 Merge branch 'dt/smart-http-detect-server-going-away'
When the http server gives an incomplete response to a smart-http
rpc call, it could lead to the client waiting for a full response that
will never come.  Teach the client side to notice this condition
and abort the transfer.

An improvement counterproposal has failed.
cf. <20161114194049.mktpsvgdhex2f4zv@sigill.intra.peff.net>

* dt/smart-http-detect-server-going-away:
  upload-pack: optionally allow fetching any sha1
  remote-curl: don't hang when a server dies before any output
2017-01-10 15:24:25 -08:00
Junio C Hamano
8a2882f23e Merge branch 'jk/http-walker-limit-redirect-2.9'
Transport with dumb http can be fooled into following foreign URLs
that the end user does not intend to, especially with server-side
redirects and the http-alternates mechanism, which can lead to
security issues.  Tighten the redirection and make it more obvious
to the end user when it happens.

* jk/http-walker-limit-redirect-2.9:
  http: treat http-alternates like redirects
  http: make redirects more obvious
  remote-curl: rename shadowed options variable
  http: always update the base URL for redirects
  http: simplify update_url_from_redirect
2016-12-19 14:45:32 -08:00
Jeff King
50d3413740 http: make redirects more obvious
We instruct curl to always follow HTTP redirects. This is
convenient, but it creates opportunities for malicious
servers to create confusing situations. For instance,
imagine Alice is a git user with access to a private
repository on Bob's server. Mallory runs her own server and
wants to access objects from Bob's repository.

Mallory may try a few tricks that involve asking Alice to
clone from her, build on top, and then push the result:

  1. Mallory may simply redirect all fetch requests to Bob's
     server. Git will transparently follow those redirects
     and fetch Bob's history, which Alice may believe she
     got from Mallory. The subsequent push seems like it is
     just feeding Mallory back her own objects, but is
     actually leaking Bob's objects. There is nothing in
     git's output to indicate that Bob's repository was
     involved at all.

     The downside (for Mallory) of this attack is that Alice
     will have received Bob's entire repository, and is
     likely to notice that when building on top of it.

  2. If Mallory happens to know the sha1 of some object X in
     Bob's repository, she can instead build her own history
     that references that object. She then runs a dumb http
     server, and Alice's client will fetch each object
     individually. When it asks for X, Mallory redirects her
     to Bob's server. The end result is that Alice obtains
     objects from Bob, but they may be buried deep in
     history. Alice is less likely to notice.

Both of these attacks are fairly hard to pull off. There's a
social component in getting Mallory to convince Alice to
work with her. Alice may be prompted for credentials in
accessing Bob's repository (but not always, if she is using
a credential helper that caches). Attack (1) requires a
certain amount of obliviousness on Alice's part while making
a new commit. Attack (2) requires that Mallory knows a sha1
in Bob's repository, that Bob's server supports dumb http,
and that the object in question is loose on Bob's server.

But we can probably make things a bit more obvious without
any loss of functionality. This patch does two things to
that end.

First, when we encounter a whole-repo redirect during the
initial ref discovery, we now inform the user on stderr,
making attack (1) much more obvious.

Second, the decision to follow redirects is now
configurable. The truly paranoid can set the new
http.followRedirects to false to avoid any redirection
entirely. But for a more practical default, we will disallow
redirects only after the initial ref discovery. This is
enough to thwart attacks similar to (2), while still
allowing the common use of redirects at the repository
level. Since c93c92f30 (http: update base URLs when we see
redirects, 2013-09-28) we re-root all further requests from
the redirect destination, which should generally mean that
no further redirection is necessary.

As an escape hatch, in case there really is a server that
needs to redirect individual requests, the user can set
http.followRedirects to "true" (and this can be done on a
per-server basis via http.*.followRedirects config).
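
A sketch of how such a three-way policy can be wired into curl
(identifier names illustrative, not necessarily those of the patch):

  enum http_follow {
          HTTP_FOLLOW_NONE,      /* http.followRedirects = false */
          HTTP_FOLLOW_INITIAL,   /* the new default: first contact only */
          HTTP_FOLLOW_ALWAYS     /* http.followRedirects = true */
  };

  /* per request, based on whether this is the initial ref discovery */
  if (follow == HTTP_FOLLOW_ALWAYS ||
      (follow == HTTP_FOLLOW_INITIAL && initial_request))
          curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1);
  else
          curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 0);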

Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-06 12:32:48 -08:00
Jeff King
fcaa6e64b3 remote-curl: rename shadowed options variable
The discover_refs() function has a local "options" variable
to hold the http_get_options we pass to http_get_strbuf().
But this shadows the global "struct options" that holds our
program-level options, which cannot be accessed from this
function.

Let's give the local one a more descriptive name so we can
tell the two apart.
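
In code terms, the situation was roughly:

  static struct options options;   /* program-level options */

  static struct discovery *discover_refs(const char *service, int for_push)
  {
          /* formerly "options", which shadowed the global above */
          struct http_get_options http_options;
          /* ... */
  }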

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-06 12:32:48 -08:00
David Turner
296b847c0d remote-curl: don't hang when a server dies before any output
In the event that an HTTP server closes the connection after giving a
200 but before giving any packets, we don't want to hang forever
waiting for a response that will never come.  Instead, we should die
immediately.

One case where this happens is when attempting to fetch a dangling
object by its object name.  In this case, the server dies before
sending any data.  Prior to this patch, fetch-pack would wait for
data from the server, and remote-curl would wait for fetch-pack,
causing a deadlock.

Despite this patch, there is other possible malformed input that could
cause the same deadlock (e.g. a half-finished pktline, or a pktline but
no trailing flush).  There are a few possible solutions to this:

1. Allow remote-curl to tell fetch-pack about the EOF (so that
fetch-pack could know that no more data is coming until it says
something else).  This is tricky because an out-of-band signal would
be required, or the http response would have to be re-framed inside
another layer of pkt-line or something.

2. Make remote-curl understand some of the protocol.  It turns out
that in addition to understanding pkt-line, it would need to watch for
ack/nak.  This is somewhat fragile, as information about the protocol
would end up in two places.  Also, pkt-lines which are already at the
length limit would need special handling.

Both of these solutions would require a fair amount of work, whereas
this hack is easy and solves at least some of the problem.

Still to do: it would be good to give a better error message
than "fatal: The remote end hung up unexpectedly".

Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-11-18 13:05:46 -08:00
Junio C Hamano
a460ea4a3c Merge branch 'nd/shallow-deepen'
The existing "git fetch --depth=<n>" option was hard to use
correctly when making the history of an existing shallow clone
deeper.  A new option, "--deepen=<n>", has been added to make this
easier to use.  "git clone" also learned "--shallow-since=<date>"
and "--shallow-exclude=<tag>" options to make it easier to specify
"I am interested only in the recent N months worth of history" and
"Give me only the history since that version".

* nd/shallow-deepen: (27 commits)
  fetch, upload-pack: --deepen=N extends shallow boundary by N commits
  upload-pack: add get_reachable_list()
  upload-pack: split check_unreachable() in two, prep for get_reachable_list()
  t5500, t5539: tests for shallow depth excluding a ref
  clone: define shallow clone boundary with --shallow-exclude
  fetch: define shallow boundary with --shallow-exclude
  upload-pack: support define shallow boundary by excluding revisions
  refs: add expand_ref()
  t5500, t5539: tests for shallow depth since a specific date
  clone: define shallow clone boundary based on time with --shallow-since
  fetch: define shallow boundary with --shallow-since
  upload-pack: add deepen-since to cut shallow repos based on time
  shallow.c: implement a generic shallow boundary finder based on rev-list
  fetch-pack: use a separate flag for fetch in deepening mode
  fetch-pack.c: mark strings for translating
  fetch-pack: use a common function for verbose printing
  fetch-pack: use skip_prefix() instead of starts_with()
  upload-pack: move rev-list code out of check_non_tip()
  upload-pack: make check_non_tip() clean things up on error
  upload-pack: tighten number parsing at "deepen" lines
  ...
2016-10-10 14:03:50 -07:00
Junio C Hamano
de61cebde7 Merge branch 'jk/common-main-2.8' into jk/common-main
* jk/common-main-2.8:
  mingw: declare main()'s argv as const
  common-main: call git_setup_gettext()
  common-main: call restore_sigpipe_to_default()
  common-main: call sanitize_stdfds()
  common-main: call git_extract_argv0_path()
  add an extra level of indirection to main()
2016-07-06 10:02:57 -07:00
Jeff King
5ce5f5fa5a common-main: call git_setup_gettext()
This should be part of every program, as otherwise users do
not get translated error messages. However, some external
commands forgot to do so (e.g., git-credential-store). This
fixes them, and eliminates the repeated code in programs
that did remember to use it.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-07-01 15:09:10 -07:00
Jeff King
650c449250 common-main: call git_extract_argv0_path()
Every program which links against libgit.a must call this
function, or risk hitting an assert() in system_path() that
checks whether we have configured argv0_path (though only
when RUNTIME_PREFIX is defined, so essentially only on
Windows).

Looking at the diff, you can see that putting it into the
common main() saves us having to do it individually in each
of the external commands. But what you can't see are the
cases where we _should_ have been doing so, but weren't
(e.g., git-credential-store, and all of the t/helper test
programs).

This has been an accident-waiting-to-happen for a long time,
but wasn't triggered until recently because it involves one
of those programs actually calling system_path(). That
happened with git-credential-store in v2.8.0 with ae5f677
(lazily load core.sharedrepository, 2016-03-11). The
program:

  - takes a lock file, which...

  - opens a tempfile, which...

  - calls adjust_shared_perm to fix permissions, which...

  - lazy-loads the config (as of ae5f677), which...

  - calls system_path() to find the location of
    /etc/gitconfig

On systems with RUNTIME_PREFIX, this means credential-store
reliably hits that assert() and cannot be used.

We never noticed in the test suite, because we set
GIT_CONFIG_NOSYSTEM there, which skips the system_path()
lookup entirely.  But if we were to tweak git_config() to
find /etc/gitconfig even when we aren't going to open it,
then the test suite shows multiple failures (for
credential-store, and for some other test helpers). I didn't
include that tweak here because it's way too specific to
this particular call to be worth carrying around what is
essentially dead code.

The implementation is fairly straightforward, with one
exception: there is exactly one caller (git.c) that actually
cares about the result of the function, and not the
side-effect of setting up argv0_path. We can accommodate
that by simply replacing the value of argv[0] in the array
we hand down to cmd_main().

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-07-01 15:09:10 -07:00
Jeff King
3f2e2297b9 add an extra level of indirection to main()
There are certain startup tasks that we expect every git
process to do. In some cases this is just to improve the
quality of the program (e.g., setting up gettext()). In
others it is a requirement for using certain functions in
libgit.a (e.g., system_path() expects that you have called
git_extract_argv0_path()).

Most commands are builtins and are covered by the git.c
version of main(). However, there are still a few external
commands that use their own main(). Each of these has to
remember to include the correct startup sequence, and we are
not always consistent.

Rather than just fix the inconsistencies, let's make this
harder to get wrong by providing a common main() that can
run this standard startup.

We basically have two options to do this:

 - the compat/mingw.h file already does something like this by
   adding a #define that replaces the definition of main with a
   wrapper that calls mingw_startup().

   The upside is that the code in each program doesn't need
   to be changed at all; it's rewritten on the fly by the
   preprocessor.

   The downside is that it may make debugging of the startup
   sequence a bit more confusing, as the preprocessor is
   quietly inserting new code.

 - the builtin functions are all of the form cmd_foo(),
   and git.c's main() calls them.

   This is much more explicit, which may make things more
   obvious to somebody reading the code. It's also more
   flexible (because of course we have to figure out _which_
   cmd_foo() to call).

   The downside is that each of the builtins must define
   cmd_foo(), instead of just main().

This patch chooses the latter option, preferring the more
explicit approach, even though it is more invasive. We
introduce a new file common-main.c, with the "real" main. It
expects to call cmd_main() from whatever other objects it is
linked against.
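
A minimal sketch of the resulting common-main.c, once the rest of
the series has added its startup calls (the real file may differ):

  int main(int argc, const char **argv)
  {
          git_setup_gettext();
          argv[0] = git_extract_argv0_path(argv[0]);
          restore_sigpipe_to_default();
          sanitize_stdfds();

          return cmd_main(argc, argv);
  }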

We link common-main.o against anything that links against
libgit.a, since we know that such programs will need to do
this setup. Note that common-main.o can't actually go inside
libgit.a, as the linker would not pick up its main()
function automatically (it has no callers).

The rest of the patch is just adjusting all of the various
external programs (mostly in t/helper) to use cmd_main().
I've provided a global declaration for cmd_main(), which
means that all of the programs also need to match its
signature. In particular, many functions need to switch to
"const char **" instead of "char **" for argv. This effect
ripples out to a few other variables and functions, as well.

This makes the patch even more invasive, but the end result
is much better. We should be treating argv strings as const
anyway, and now all programs conform to the same signature
(which also matches the way builtins are defined).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-07-01 15:09:10 -07:00
Nguyễn Thái Ngọc Duy
cccf74e2da fetch, upload-pack: --deepen=N extends shallow boundary by N commits
In git-fetch, the --depth argument is always relative to the latest
remote refs. This makes it a bit difficult to cover the use case
where the user wants to make the shallow history, say, 3 levels
deeper. It would work if the remote refs have not moved yet, but
nobody can guarantee that, especially when that use case is performed
a couple of months after the last clone or "git fetch --depth". Also,
modifying the shallow boundary using --depth does not work well with
clones created by --since or --not.

This patch fixes that. A new argument --deepen=<N> will add <N> more (*)
parent commits to the current history regardless of where remote refs
are.

Have/Want negotiation is still respected. So if remote refs move, the
server will send two chunks: one between "have" and "want" and another
to extend shallow history. In theory, the client could send no "want"s
in order to get the second chunk only. But the protocol does not allow
that. Either you send no want lines, which means ls-remote; or you
have to send at least one want line that carries deep-relative to the
server.

The main work was done by Dongcan Jiang. I fixed it up here and there.
And of course all the bugs belong to me.

(*) We could even support --deepen=<N> where <N> is negative. In that
case we can cut some history from the shallow clone. This operation
(and --depth=<shorter depth>) does not require interaction with the
remote side (and would be more complicated to implement as a result).

Helped-by: Duy Nguyen <pclouds@gmail.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13 14:38:16 -07:00
Nguyễn Thái Ngọc Duy
a45a260086 fetch: define shallow boundary with --shallow-exclude
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13 14:38:16 -07:00
Nguyễn Thái Ngọc Duy
508ea88226 fetch: define shallow boundary with --shallow-since
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13 14:38:16 -07:00
Nguyễn Thái Ngọc Duy
b5f62ebea5 remote-curl.c: convert fetch_git() to use argv_array
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-13 14:38:16 -07:00
Johannes Schindelin
8cb01e2fd3 http: support sending custom HTTP headers
We introduce a way to send custom HTTP headers with all requests.

This allows us, for example, to send an extra token from build agents
for temporary access to private repositories. (This is the use case that
triggered this patch.)

This feature can be used like this:

	git -c http.extraheader='Secret: sssh!' fetch $URL $REF

Note that `curl_easy_setopt(..., CURLOPT_HTTPHEADER, ...)` takes only
a single list, overriding any previous call. This means we have to
collect _all_ of the headers we want to use into a single list, and
feed it to cURL in one shot. Since we already unconditionally set a
"pragma" header when initializing the curl handles, we can add our new
headers to that list.
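
A sketch of that single-list dance with plain libcurl calls (header
values illustrative):

  struct curl_slist *headers = NULL;

  /* the header we set unconditionally anyway... */
  headers = curl_slist_append(headers, "Pragma: no-cache");
  /* ...plus anything from http.extraheader */
  headers = curl_slist_append(headers, "Secret: sssh!");

  /* one shot: this call replaces any previously-set list */
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);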

For callers which override the default header list (like probe_rpc),
we provide `http_copy_default_headers()` so they can do the same
trick.

Big thanks to Jeff King and Junio Hamano for their outstanding help and
patient reviews.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-04-27 14:02:33 -07:00
Junio C Hamano
11529ecec9 Merge branch 'jk/tighten-alloc'
Update various codepaths to avoid manually-counted malloc().

* jk/tighten-alloc: (22 commits)
  ewah: convert to REALLOC_ARRAY, etc
  convert ewah/bitmap code to use xmalloc
  diff_populate_gitlink: use a strbuf
  transport_anonymize_url: use xstrfmt
  git-compat-util: drop mempcpy compat code
  sequencer: simplify memory allocation of get_message
  test-path-utils: fix normalize_path_copy output buffer size
  fetch-pack: simplify add_sought_entry
  fast-import: simplify allocation in start_packfile
  write_untracked_extension: use FLEX_ALLOC helper
  prepare_{git,shell}_cmd: use argv_array
  use st_add and st_mult for allocation size computation
  convert trivial cases to FLEX_ARRAY macros
  use xmallocz to avoid size arithmetic
  convert trivial cases to ALLOC_ARRAY
  convert manual allocations to argv_array
  argv-array: add detach function
  add helpers for allocating flex-array structs
  harden REALLOC_ARRAY and xcalloc against size_t overflow
  tree-diff: catch integer overflow in combine_diff_path allocation
  ...
2016-02-26 13:37:16 -08:00
Junio C Hamano
97c49af6a7 Merge branch 'sp/remote-curl-ssl-strerror'
Help those who debug the http(s) part of the system.

* sp/remote-curl-ssl-strerror:
  remote-curl: include curl_errorstr on SSL setup failures
2016-02-24 13:25:56 -08:00
Junio C Hamano
e84d5e9fa1 Merge branch 'ew/force-ipv4'
"git fetch" and friends that make network connections can now be
told to only use ipv4 (or ipv6).

* ew/force-ipv4:
  connect & http: support -4 and -6 switches for remote operations
2016-02-24 13:25:54 -08:00
Jeff King
b32fa95fd8 convert trivial cases to ALLOC_ARRAY
Each of these cases can be converted to use ALLOC_ARRAY or
REALLOC_ARRAY, which has two advantages:

  1. It automatically checks the array-size multiplication
     for overflow.

  2. It always uses sizeof(*array) for the element-size,
     so that it can never go out of sync with the declared
     type of the array.
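
For reference, the macro boils down to roughly:

  #define ALLOC_ARRAY(x, alloc) (x) = xmalloc(st_mult(sizeof(*(x)), (alloc)))

  /* before */
  foo = xmalloc(nr * sizeof(*foo));
  /* after: st_mult() dies on multiplication overflow */
  ALLOC_ARRAY(foo, nr);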

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-22 14:51:09 -08:00
Jeff King
850d2fec53 convert manual allocations to argv_array
There are many manual argv allocations that predate the
argv_array API. Switching to that API brings a few
advantages:

  1. We no longer have to manually compute the correct final
     array size (so it's one less thing we can screw up).

  2. In many cases we had to make a separate pass to count,
     then allocate, then fill in the array. Now we can do it
     in one pass, making the code shorter and easier to
     follow.

  3. argv_array handles memory ownership for us, making it
     more obvious when things should be free()d and when
     not.

Most of these cases are pretty straightforward. In some, we
switch from "run_command_v" to "run_command" which lets us
directly use the argv_array embedded in "struct
child_process".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-22 14:50:32 -08:00
Shawn Pearce
00540458a8 remote-curl: include curl_errorstr on SSL setup failures
For curl error 35 (CURLE_SSL_CONNECT_ERROR) users need the
additional text stored in CURLOPT_ERRORBUFFER to debug why
the connection did not start. This is curl_errorstr inside
of http.c, so include that in the message if it is non-empty.

Sometimes HTTP response codes aren't yet available, such as
when the SSL setup fails. Don't include HTTP 0 in the message.

Signed-off-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-15 13:21:43 -08:00
Eric Wong
c915f11eb4 connect & http: support -4 and -6 switches for remote operations
Sometimes it is necessary to force IPv4-only or IPv6-only operation
on networks where name lookups may return a non-routable address and
stall remote operations.

The ssh(1) command has equivalent switches which we may pass when
we run it.  There may be old ssh(1) implementations out there
which do not support these switches; they should report an
appropriate error in that case.

rsync support is untouched for now since it is deprecated and
scheduled to be removed.
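
On the curl side, this plausibly maps onto a standard libcurl knob
(a sketch; the actual option plumbing differs):

  /* force the address family for curl-based transports */
  if (ipv4)
          curl_easy_setopt(curl, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
  else if (ipv6)
          curl_easy_setopt(curl, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V6);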

Signed-off-by: Eric Wong <normalperson@yhbt.net>
Reviewed-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-12 11:34:14 -08:00
Junio C Hamano
8f309aeb82 strbuf: introduce strbuf_getline_{lf,nul}()
The strbuf_getline() interface allows a byte other than LF or NUL as
the line terminator, but this is only because I wrote these
codepaths anticipating that there might be a value other than NUL
and LF that could be useful when I introduced line_termination a
long time ago.  No useful caller that uses any other value has
emerged.

By now, it is clear that the interface is overly broad without a
good reason.  Many codepaths have hardcoded preference to read
either LF terminated or NUL terminated records from their input, and
then call strbuf_getline() with LF or NUL as the third parameter.

This step introduces two thin wrappers around strbuf_getline(),
namely, strbuf_getline_lf() and strbuf_getline_nul(), and
mechanically rewrites these call sites to call either one of
them.  The changes contained in this patch are:

 * introduction of these two functions in strbuf.[ch]

 * mechanical conversion of all callers to strbuf_getline() with
   either '\n' or '\0' as the third parameter to instead call the
   respective thin wrapper.
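
The wrappers are thin indeed; they amount to roughly:

  int strbuf_getline_lf(struct strbuf *sb, FILE *fp)
  {
          return strbuf_getline(sb, fp, '\n');
  }

  int strbuf_getline_nul(struct strbuf *sb, FILE *fp)
  {
          return strbuf_getline(sb, fp, '\0');
  }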

After this step, output from "git grep 'strbuf_getline('" would
become a lot smaller.  An interim goal of this series is to make
this an empty set, so that we can have strbuf_getline_crlf() take
over the shorter name strbuf_getline().

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-01-15 10:12:51 -08:00
brian m. carlson
8338c911d1 parse_fetch: convert to use struct object_id
Convert the parse_fetch function to use struct object_id.  Remove the
strlen check as get_oid_hex will fail safely on receiving a too-short
NUL-terminated string.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
2015-11-20 08:02:05 -05:00
brian m. carlson
f4e54d02b8 Convert struct ref to use object_id.
Use struct object_id in three fields in struct ref and convert all the
necessary places that use it.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
2015-11-20 08:02:05 -05:00
Jeff King
6f687c21c0 use alloc_ref rather than hand-allocating "struct ref"
This saves us some manual computation, and eliminates a call
to strcpy.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-10-05 11:08:05 -07:00
Dave Borowitz
30261094b1 push: support signing pushes iff the server supports it
Add a new flag --sign=true (or --sign=false), which means the same
thing as the original --signed (or --no-signed).  Give it a third
value --sign=if-asked to tell push and send-pack to send a push
certificate if and only if the server advertised a push cert nonce.

If not, warn the user that their push may not be as secure as they
thought.

Signed-off-by: Dave Borowitz <dborowitz@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-19 12:58:45 -07:00
Junio C Hamano
6902c4da58 Merge branch 'rs/deflate-init-cleanup'
Code simplification.

* rs/deflate-init-cleanup:
  zlib: initialize git_zstream in git_deflate_init{,_gzip,_raw}
2015-03-17 16:01:26 -07:00
René Scharfe
9a6f1287fb zlib: initialize git_zstream in git_deflate_init{,_gzip,_raw}
Clear the git_zstream variable at the start of git_deflate_init() etc.
so that callers don't have to do that.
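
The change amounts to clearing the struct up front, roughly:

  void git_deflate_init(git_zstream *strm, int level)
  {
          memset(strm, 0, sizeof(*strm));   /* callers used to do this */
          /* ... then initialize the zlib stream as before ... */
  }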

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-05 15:46:03 -08:00
Junio C Hamano
f18e3896f7 Merge branch 'ye/http-accept-language'
Using environment variable LANGUAGE and friends on the client side,
HTTP-based transports now send Accept-Language when making requests.

* ye/http-accept-language:
  http: add Accept-Language header if possible
2015-02-18 11:44:57 -08:00
Junio C Hamano
f6b50a8bf4 Merge branch 'jk/remote-curl-an-array-in-struct-cannot-be-null'
Fix a misspelled conditional that is always true.

* jk/remote-curl-an-array-in-struct-cannot-be-null:
  do not check truth value of flex arrays
2015-02-17 10:15:27 -08:00
Jeff King
94ee8e2c98 do not check truth value of flex arrays
There is no point in checking "!ref->name" when ref is a
"struct ref". The name field is a flex-array, and there
always has a non-zero address. This is almost certainly not
hurting anything, but it does cause clang-3.6 to complain.
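
The reason, in a sketch: the flexible array member is laid out inside
the struct itself, so its address is never NULL:

  struct ref {
          struct ref *next;
          /* ... */
          char name[FLEX_ARRAY];   /* storage is part of the struct */
  };

  /* hence "!ref->name" can never be true for a valid "ref" */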

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-01-28 12:46:07 -08:00
Yi EungJun
f18604bbf2 http: add Accept-Language header if possible
Add an Accept-Language header which indicates the user's preferred
languages defined by $LANGUAGE, $LC_ALL, $LC_MESSAGES and $LANG.

Examples:
  LANGUAGE= -> ""
  LANGUAGE=ko:en -> "Accept-Language: ko, en;q=0.9, *;q=0.1"
  LANGUAGE=ko LANG=en_US.UTF-8 -> "Accept-Language: ko, *;q=0.1"
  LANGUAGE= LANG=en_US.UTF-8 -> "Accept-Language: en-US, *;q=0.1"

This gives git servers a chance to display remote error messages in
the user's preferred language.

Limit the number of languages to 1,000 because q-value must not be
smaller than 0.001, and limit the length of the Accept-Language
header to 4,000 bytes for HTTP servers which cannot accept such a
long header.

Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-01-28 11:17:08 -08:00
Junio C Hamano
fb06b5280e Merge branch 'jc/push-cert'
Allow "git push" request to be signed, so that it can be verified and
audited, using the GPG signature of the person who pushed, that the
tips of branches at a public repository really point the commits
the pusher wanted to, without having to "trust" the server.

* jc/push-cert: (24 commits)
  receive-pack::hmac_sha1(): copy the entire SHA-1 hash out
  signed push: allow stale nonce in stateless mode
  signed push: teach smart-HTTP to pass "git push --signed" around
  signed push: fortify against replay attacks
  signed push: add "pushee" header to push certificate
  signed push: remove duplicated protocol info
  send-pack: send feature request on push-cert packet
  receive-pack: GPG-validate push certificates
  push: the beginning of "git push --signed"
  pack-protocol doc: typofix for PKT-LINE
  gpg-interface: move parse_signature() to where it should be
  gpg-interface: move parse_gpg_output() to where it should be
  send-pack: clarify that cmds_sent is a boolean
  send-pack: refactor inspecting and resetting status and sending commands
  send-pack: rename "new_refs" to "need_pack_data"
  receive-pack: factor out capability string generation
  send-pack: factor out capability string generation
  send-pack: always send capabilities
  send-pack: refactor decision to send update per ref
  send-pack: move REF_STATUS_REJECT_NODELETE logic a bit higher
  ...
2014-10-08 13:05:25 -07:00
Junio C Hamano
83510ef3fd Merge branch 'da/styles'
* da/styles:
  stylefix: asterisks stick to the variable, not the type
2014-09-19 11:38:35 -07:00
Junio C Hamano
d9dd4cebec Merge branch 'jk/send-pack-many-refspecs'
The number of refs that can be pushed at once over smart HTTP was
limited by the command line length.  The limitation has been lifted
by passing these refs from the standard input of send-pack.

* jk/send-pack-many-refspecs:
  send-pack: take refspecs over stdin
2014-09-19 11:38:31 -07:00
Junio C Hamano
0ea47f9d33 signed push: teach smart-HTTP to pass "git push --signed" around
The "--signed" option received by "git push" is first passed to the
transport layer, which the native transport directly uses to notice
that a push certificate needs to be sent.  When the transport-helper
is involved, however, the option needs to be passed to the helper
with set_helper_option(), and the helper needs to take the necessary
action.
For the smart-HTTP helper, the "necessary action" involves spawning
the "git send-pack" subprocess with the "--signed" option.

Once the above all gets wired in, the smart-HTTP transport now can
use the push certificate mechanism to authenticate its pushes.

Add a test that is modeled after tests for the native transport in
t5534-push-signed.sh to t5541-http-push-smart.sh.  Update the test
Apache configuration to pass GNUPGHOME environment variable through.
As PassEnv would trigger warnings for an environment variable that
is not set, export it from test-lib.sh set to a harmless value when
GnuPG is not being used in the tests.

Note that the added test is deliberately loose and does not check
the nonce in this step.  This is because the stateless RPC mode is
inevitably flaky and a nonce that comes back in the actual push
processing is one issued by a different process; if the two
interactions with the server crossed a second boundary, the nonces
will not match and such a check will fail.  A later patch in the
series will work around this shortcoming.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-17 14:58:04 -07:00
David Aguilar
24d36f1472 stylefix: asterisks stick to the variable, not the type
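
That is, in C declarations:

  char *name;    /* preferred: the asterisk binds to the variable */
  char* name;    /* avoided */
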
Signed-off-by: David Aguilar <davvid@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-02 11:33:32 -07:00
Jeff King
26be19ba8d send-pack: take refspecs over stdin
Pushing a large number of refs works over most transports,
because we implement send-pack as an internal function.
However, it can sometimes fail when pushing over http,
because we have to spawn "git send-pack --stateless-rpc" to
do the heavy lifting, and we pass each refspec on the
command line. This can cause us to overflow the OS limits on
the size of the command line for a large push.

We can solve this by giving send-pack a --stdin option and
using it from remote-curl.  We already dealt with this on
the fetch-pack side in 078b895 (fetch-pack: new --stdin
option to read refs from stdin, 2012-04-02). The stdin
option (and in particular, its use of packet-lines for
stateless-rpc input) is modeled after that solution.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-26 12:58:02 -07:00
René Scharfe
d318027932 run-command: introduce CHILD_PROCESS_INIT
Most struct child_process variables are cleared using memset first after
declaration.  Provide a macro, CHILD_PROCESS_INIT, that can be used to
initialize them statically instead.  That's shorter, doesn't require a
function call and is slightly more readable (especially given that we
already have STRBUF_INIT, ARGV_ARRAY_INIT etc.).
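
A sketch of the before/after:

  /* before */
  struct child_process proc;
  memset(&proc, 0, sizeof(proc));

  /* after */
  struct child_process proc = CHILD_PROCESS_INIT;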

Helped-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20 09:53:37 -07:00
Junio C Hamano
0ac744305f Merge branch 'jk/remote-curl-squelch-extra-errors'
* jk/remote-curl-squelch-extra-errors:
  remote-curl: mark helper-protocol errors more clearly
  remote-curl: use error instead of fprintf(stderr)
  remote-curl: do not complain on EOF from parent git
2014-07-21 11:18:41 -07:00
Jeff King
cdaa4e98ca remote-curl: mark helper-protocol errors more clearly
When we encounter an error in remote-curl, we generally just
report it to stderr. There is no need for the user to care
that the "could not connect to server" error was generated
by git-remote-https rather than a function in the parent
git-fetch process.

However, when the error is in the protocol between git and
the helper, it makes sense to clearly identify which side is
complaining. These cases shouldn't ever happen, but when
they do, we can make them less confusing by being more
verbose.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-10 10:54:22 -07:00
Jeff King
b725b270d1 remote-curl: use error instead of fprintf(stderr)
We usually prefix our error messages with "error: ", but
many error messages from remote-curl are simply printed with
fprintf. This can make the output a little harder to read
(especially because such messages may be intermingled with
errors from the parent git process).

There is no reason to avoid error(), as we are already
calling it many places (in addition to libgit.a functions
which use it).

While we're adjusting the messages, we can also drop the
capitalization which makes them unlike other git error
messages.
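
A sketch of a typical conversion (message text illustrative):

  /* before */
  fprintf(stderr, "Unable to parse response\n");

  /* after: error() adds the "error: " prefix; note the decapitalization */
  error("unable to parse response");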

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-10 10:53:47 -07:00
Jeff King
37943e4c38 remote-curl: do not complain on EOF from parent git
The parent git process is supposed to send us an empty line
to indicate that the conversation is over. However, the
parent process may die() if there is a problem with the
operation (e.g., we try to fetch a ref that does not exist).
In this case, it produces a useful message, but then
remote-curl _also_ produces an unhelpful message:

  $ git pull origin matser
  fatal: couldn't find remote ref matser
  Unexpected end of command stream

The "right" way to fix this is to teach the parent git to
always cleanly close the connection to the helper, letting
it know that we are done. Implementing that is rather
clunky, though, as it would involve either replacing die()
operations with returning errors up the stack (until we
disconnect the transport), or adding an atexit handler to
clean up any transport helpers left open.

It's much simpler to just suppress the EOF message in
remote-curl. It was not added to address any real-world
situation in the first place, but rather a "we should
probably report unexpected things" suggestion[1].

It is the parent git which drives the operation, and whose
exit value actually matters. If the parent dies, then the
helper has no need to complain (except as a debugging aid).
On the off chance that the pipe is closed without the parent
dying, it can still notice the non-zero exit code.

[1] http://article.gmane.org/gmane.comp.version-control.git/176036

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-10 10:53:00 -07:00
Jeff King
95b567c7c3 use skip_prefix to avoid repeating strings
It's a common idiom to match a prefix and then skip past it
with strlen, like:

  if (starts_with(foo, "bar"))
	  foo += strlen("bar");

This avoids magic numbers, but means we have to repeat the
string (and there is no compiler check that we didn't make a
typo in one of the strings).

We can use skip_prefix to handle this case without repeating
ourselves.
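
With skip_prefix(), the match-and-advance becomes (sketch):

  const char *rest;

  if (skip_prefix(foo, "bar", &rest))
          foo = rest;   /* "bar" appears (and is checked) exactly once */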

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-06-20 10:44:45 -07:00
Jeff King
fc1b774c72 remote-curl: reencode http error messages
We currently recognize an error message with a content-type
"text/plain; charset=utf-16" as text, but we ignore the
charset parameter entirely. Let's re-encode it to
log_output_encoding, which is presumably something the
user's terminal can handle.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27 09:59:22 -07:00
Jeff King
bf197fd7ee http: extract type/subtype portion of content-type
When we get a content-type from curl, we get the whole
header line, including any parameters, and without any
normalization (such as downcasing or whitespace removal) applied.
If we later try to match it with strcmp() or even
strcasecmp(), we may get false negatives.

This could cause two visible behaviors:

  1. We might fail to recognize a smart-http server by its
     content-type.

  2. We might fail to relay text/plain error messages to
     users (especially if they contain a charset parameter).

This patch teaches the http code to extract and normalize
just the type/subtype portion of the string. This is
technically passing out less information to the callers, who
can no longer see the parameters. But none of the current
callers cares, and a future patch will add back an
easier-to-use method for accessing those parameters.
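
A sketch of such an extraction (hypothetical helper shape; the real
code lives in http.c):

  static void extract_content_type(const char *raw, struct strbuf *type)
  {
          const char *semi = strchr(raw, ';');

          /* keep only "type/subtype" from e.g. "TEXT/Plain; charset=utf-16" */
          strbuf_reset(type);
          strbuf_add(type, raw, semi ? (size_t)(semi - raw) : strlen(raw));
          strbuf_trim(type);
          strbuf_tolower(type);
  }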

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27 09:57:00 -07:00
Jeff King
beed336c3e http: never use curl_easy_perform
We currently don't reuse http connections when fetching via
the smart-http protocol. This is bad because the TCP
handshake introduces latency, and especially because SSL
connection setup may be non-trivial.

We can fix it by consistently using curl's "multi"
interface.  The reason is rather complicated:

Our http code has two ways of being used: queuing many
"slots" to be fetched in parallel, or fetching a single
request in a blocking manner. The parallel code is built on
curl's "multi" interface. Most of the single-request code
uses http_request, which is built on top of the parallel
code (we just feed it one slot, and wait until it finishes).

However, one could also accomplish the single-request scheme
by avoiding curl's multi interface entirely and just using
curl_easy_perform. This is simpler, and is used by post_rpc
in the smart-http protocol.

It does work to use the same curl handle in both contexts,
as long as it is not used in both at the same time.  However, internally
curl may not share all of the cached resources between both
contexts. In particular, a connection formed using the
"multi" code will go into a reuse pool connected to the
"multi" object. Further requests using the "easy" interface
will not be able to reuse that connection.

The smart http protocol does ref discovery via http_request,
which uses the "multi" interface, and then follows up with
the "easy" interface for its rpc calls. As a result, we make
two HTTP connections rather than reusing a single one.

We could teach the ref discovery to use the "easy"
interface. But it is only once we have done this discovery
that we know whether the protocol will be smart or dumb. If
it is dumb, then our further requests, which want to fetch
objects in parallel, will not be able to reuse the same
connection.

Instead, this patch switches post_rpc to build on the
parallel interface, which means that we use it consistently
everywhere. It's a little more complicated to use, but since
we have the infrastructure already, it doesn't add any code;
we can just factor out the relevant bits from http_request.
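
The factored-out blocking shape looks roughly like this, using
http.c's existing slot helpers:

  struct slot_results results;
  struct active_request_slot *slot = get_active_slot();

  slot->results = &results;
  curl_easy_setopt(slot->curl, CURLOPT_URL, url);
  /* ... other request setup ... */

  if (start_active_slot(slot)) {
          run_active_slot(slot);   /* pumps the "multi" loop to completion */
          err = handle_curl_result(&results);
  }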

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-18 15:50:57 -08:00
Junio C Hamano
92251b1b5b Merge branch 'nd/shallow-clone'
Fetching from a shallow-cloned repository used to be forbidden,
primarily because the codepaths involved were not carefully vetted
and we did not bother supporting such usage. This attempts to allow
object transfer out of a shallow-cloned repository in a controlled
way (i.e. the receiver becomes a shallow repository with truncated
history).

* nd/shallow-clone: (31 commits)
  t5537: fix incorrect expectation in test case 10
  shallow: remove unused code
  send-pack.c: mark a file-local function static
  git-clone.txt: remove shallow clone limitations
  prune: clean .git/shallow after pruning objects
  clone: use git protocol for cloning shallow repo locally
  send-pack: support pushing from a shallow clone via http
  receive-pack: support pushing to a shallow clone via http
  smart-http: support shallow fetch/clone
  remote-curl: pass ref SHA-1 to fetch-pack as well
  send-pack: support pushing to a shallow clone
  receive-pack: allow pushes that update .git/shallow
  connected.c: add new variant that runs with --shallow-file
  add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses
  receive/send-pack: support pushing from a shallow clone
  receive-pack: reorder some code in unpack()
  fetch: add --update-shallow to accept refs that update .git/shallow
  upload-pack: make sure deepening preserves shallow roots
  fetch: support fetching from a shallow repository
  clone: support remote shallow repository
  ...
2014-01-17 12:21:20 -08:00
Junio C Hamano
ad70448576 Merge branch 'cc/starts-n-ends-with'
Remove a few duplicate implementations of prefix/suffix comparison
functions, and rename them to starts_with and ends_with.

* cc/starts-n-ends-with:
  replace {pre,suf}fixcmp() with {starts,ends}_with()
  strbuf: introduce starts_with() and ends_with()
  builtin/remote: remove postfixcmp() and use suffixcmp() instead
  environment: normalize use of prefixcmp() by removing " != 0"
2013-12-17 12:02:44 -08:00
Nguyễn Thái Ngọc Duy
16094885ca smart-http: support shallow fetch/clone
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10 16:14:18 -08:00
Nguyễn Thái Ngọc Duy
58f2ed051f remote-curl: pass ref SHA-1 to fetch-pack as well
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10 16:14:18 -08:00
Nguyễn Thái Ngọc Duy
b06dcd7d68 connect.c: teach get_remote_heads to parse "shallow" lines
No callers pass a non-empty pointer as shallow_points at this
stage. As a result, all clients still refuse to talk to a shallow
repository on the other end.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10 16:14:16 -08:00
Christian Couder
5955654823 replace {pre,suf}fixcmp() with {starts,ends}_with()
Leaving only the function definitions and declarations so that any
new topic in flight can still make use of the old functions, replace
existing uses of prefixcmp() and suffixcmp() with the new API
functions.

The change can be recreated by mechanically applying this:

    $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
      grep -v strbuf\\.c |
      xargs perl -pi -e '
        s|!prefixcmp\(|starts_with\(|g;
        s|prefixcmp\(|!starts_with\(|g;
        s|!suffixcmp\(|ends_with\(|g;
        s|suffixcmp\(|!ends_with\(|g;
      '

on the result of preparatory changes in this series.
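
The mapping flips the negation; in C terms:

  if (!prefixcmp(buf, "refs/"))    /* old: 0 meant "has the prefix" */
  if (starts_with(buf, "refs/"))   /* new: true means "has the prefix" */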

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-05 14:13:21 -08:00
Junio C Hamano
c5a77e8f92 Merge branch 'bc/http-100-continue'
Issue "100 Continue" responses to help use of GSS-Negotiate
authentication scheme over HTTP transport when needed.

* bc/http-100-continue:
  remote-curl: fix large pushes with GSSAPI
  remote-curl: pass curl slot_results back through run_slot
  http: return curl's AUTHAVAIL via slot_results
2013-12-05 12:58:59 -08:00
Brian M. Carlson
c80d96ca0c remote-curl: fix large pushes with GSSAPI
Due to an interaction between the way libcurl handles GSSAPI
authentication over HTTP and the way git uses libcurl, large
pushes (those over http.postBuffer bytes) would fail due to
an authentication failure requiring a rewind of the curl
buffer.  Such a rewind was not possible because the data did
not fit into the entire buffer.

Enable the use of the Expect: 100-continue header for large
requests where the server offers GSSAPI authentication to
avoid this issue, since the request would otherwise fail.
This allows git to get the authentication data right before
sending the pack contents.  Existing cases where pushes
would succeed, including small requests using GSSAPI, still
disable the use of 100 Continue, as it causes problems for
some remote HTTP implementations (servers and proxies).
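
A sketch of the idea (git normally suppresses curl's Expect header by
sending an empty one; condition names illustrative):

  int suppress_expect = !(large_request &&
                          (results.auth_avail & CURLAUTH_GSSNEGOTIATE));

  if (suppress_expect)
          headers = curl_slist_append(headers, "Expect:");
  /* ... otherwise curl itself sends "Expect: 100-continue" and waits */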

Signed-off-by: Brian M. Carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
2013-10-31 10:13:40 -07:00
Jeff King
3a347ed707 remote-curl: pass curl slot_results back through run_slot
Some callers may want to know more than just the integer
error code we return. Let them optionally pass a
slot_results struct to fill in (or NULL if they do not
care). In either case we continue to return the integer
code.

We can also give probe_rpc the same treatment (since it
builds directly on run_slot).

Signed-off-by: Jeff King <peff@peff.net>
2013-10-31 10:05:59 -07:00
Junio C Hamano
177f0a4009 Merge branch 'jk/http-auth-redirects'
Better handle the case where the http transport gets redirected
during the authorization request.

* jk/http-auth-redirects:
  http.c: Spell the null pointer as NULL
  remote-curl: rewrite base url from info/refs redirects
  remote-curl: store url as a strbuf
  remote-curl: make refs_url a strbuf
  http: update base URLs when we see redirects
  http: provide effective url to callers
  http: hoist credential request out of handle_curl_result
  http: refactor options to http_get_*
  http_request: factor out curlinfo_strbuf
  http_get_file: style fixes
2013-10-30 12:09:53 -07:00
Jeff King
050ef3655c remote-curl: rewrite base url from info/refs redirects
For efficiency and security reasons, an earlier commit in
this series taught http_get_* to re-write the base url based
on redirections we saw while making a specific request.

This commit wires that option into the info/refs request,
meaning that a redirect from

    http://example.com/foo.git/info/refs

to

    https://example.com/bar.git/info/refs

will behave as if "https://example.com/bar.git" had been
provided to git in the first place.

The tests bear some explanation. We introduce two new
hierarchies into the httpd test config:

  1. Requests to /smart-redir-limited will work only for the
     initial info/refs request, but not any subsequent
     requests. As a result, we can confirm whether the
     client is re-rooting its requests after the initial
     contact, since otherwise it will fail (it will ask for
     "repo.git/git-upload-pack", which is not redirected).

  2. Requests to /smart-redir-auth will redirect, and require
     auth after the redirection. Since we are using the
     redirected base for further requests, we also update
     the credential struct, in order not to mislead the user
     (or credential helpers) about which credential is
     needed. We can therefore check the GIT_ASKPASS prompts
     to make sure we are prompting for the new location.
     Because we have neither multiple servers nor https
     support in our test setup, we can only redirect between
     paths, meaning we need to turn on
     credential.useHttpPath to see the difference.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14 17:01:34 -07:00
Jeff King
b227bbc43a remote-curl: store url as a strbuf
We use a strbuf to generate the string containing the remote
URL, but then detach it to a bare pointer. This makes it
harder to later manipulate the URL, as we have forgotten the
length (and the allocation semantics are not as clear).

Let's instead keep the strbuf around. As a bonus, this
eliminates a confusing double-use of the "buf" strbuf in
main(). Prior to this, it was used both for constructing the
url, and for reading commands from stdin.

The downside is that we have to update each call site to
refer to "url.buf" rather than just "url" when they want the
C string.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14 17:01:15 -07:00
Jeff King
c65d5692cd remote-curl: make refs_url a strbuf
In the discover_refs function, we use a strbuf named
"buffer" for multiple purposes. First we build the info/refs
URL in it, and then detach that to a bare pointer. Then, we
use the same strbuf to store the result of fetching the
refs.

Let's instead keep a separate refs_url strbuf. This is less
confusing, as the "buffer" strbuf is now used for only one
thing.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14 16:57:04 -07:00
Jeff King
2501aff8b7 http: hoist credential request out of handle_curl_result
When we are handling a curl response code in http_request or
in the remote-curl RPC code, we use the handle_curl_result
helper to translate curl's response into an easy-to-use
code. When we see an HTTP 401, we do one of two things:

  1. If we already had a filled-in credential, we mark it as
     rejected, and then return HTTP_NOAUTH to indicate to
     the caller that we failed.

  2. If we didn't, then we ask for a new credential and tell
     the caller HTTP_REAUTH to indicate that they may want
     to try again.

Rejecting in the first case makes sense; it is the natural
result of the request we just made. However, prompting for
more credentials in the second step does not always make
sense. We do not know for sure that the caller is going to
make a second request, and nor are we sure that it will be
to the same URL. Logically, the prompt belongs not to the
request we just finished, but to the request we are (maybe)
about to make.

In practice, it is very hard to trigger any bad behavior.
Currently, if we make a second request, it will always be to
the same URL (even in the face of redirects, because curl
handles the redirects internally). And we almost always
retry on HTTP_REAUTH these days. The one exception is if we
are streaming a large RPC request to the server (e.g., a
pushed packfile), in which case we cannot restart. It's
extremely unlikely to see a 401 response at this stage,
though, as we would typically have seen it when we sent a
probe request, before streaming the data.

This patch drops the automatic prompt out of case 2, and
instead requires the caller to do it. This is a few extra
lines of code, and the bug it fixes is unlikely to come up
in practice. But it is conceptually cleaner, and paves the
way for better handling of credentials across redirects.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14 16:55:13 -07:00
Jeff King
1bbcc224cc http: refactor options to http_get_*
Over time, the http_get_strbuf function has grown several
optional parameters. We now have a bitfield with multiple
boolean options, as well as an optional strbuf for returning
the content-type of the response. And a future patch in this
series is going to add another strbuf option.

Treating these as separate arguments has a few downsides:

  1. Most call sites need to add extra NULLs and 0s for the
     options they aren't interested in.

  2. The http_get_* functions are actually wrappers around
     2 layers of low-level implementation functions. We have
     to pass these options through individually.

  3. The http_get_strbuf wrapper learned these options, but
     nobody bothered to do so for http_get_file, even though
     it is backed by the same function that does understand
     the options.

Let's consolidate the options into a single struct. For the
common case of the default options, we'll allow callers to
simply pass a NULL for the options struct.
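
The consolidated call shape, roughly (field names as described in
this series):

  struct http_get_options options;

  memset(&options, 0, sizeof(options));
  options.no_cache = 1;                 /* one of the old boolean flags */
  options.content_type = &type;         /* optional strbuf out-parameter */

  ret = http_get_strbuf(url, &result, &options);

  /* or, for all-default behavior: */
  ret = http_get_strbuf(url, &result, NULL);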

The resulting code is often a few lines longer, but it ends
up being easier to read (and to change as we add new
options, since we do not need to update each call site).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-09-30 17:21:59 -07:00
Junio C Hamano
2233ad4534 Merge branch 'jc/push-cas'
Allow a safer "rewind of the remote tip" push than blind "--force",
by requiring that the overwritten remote ref to be unchanged since
the new history to replace it was prepared.

The machinery is more or less ready.  The "--force" option is again
the big red button to override any safety, thanks to J6t's sanity
(the original round allowed --lockref to defeat --force).

The logic to choose the default implemented here is fragile
(e.g. "git fetch" after seeing a failure will update the
remote-tracking branch and will make the next "push" pass,
defeating the safety pretty easily).  It is suitable only for the
simplest workflows, and it may hurt users more than it helps them.

* jc/push-cas:
  push: teach --force-with-lease to smart-http transport
  send-pack: fix parsing of --force-with-lease option
  t5540/5541: smart-http does not support "--force-with-lease"
  t5533: test "push --force-with-lease"
  push --force-with-lease: tie it all together
  push --force-with-lease: implement logic to populate old_sha1_expect[]
  remote.c: add command line option parser for "--force-with-lease"
  builtin/push.c: use OPT_BOOL, not OPT_BOOLEAN
  cache.h: move remote/connect API out of it
2013-09-09 14:30:29 -07:00
Junio C Hamano
711b276974 Merge branch 'nd/clone-connectivity-shortcut'
* nd/clone-connectivity-shortcut:
  smart http: use the same connectivity check on cloning
2013-09-09 14:30:01 -07:00
Junio C Hamano
05c1eb1034 push: teach --force-with-lease to smart-http transport
We have been passing enough information to enable the
compare-and-swap logic down to the transport layer, but the
transport helper was not passing it to the smart-http transport.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-08-02 16:11:06 -07:00
Nguyễn Thái Ngọc Duy
9ba380481c smart http: use the same connectivity check on cloning
This is an extension of c6807a4 (clone: open a shortcut for
connectivity check - 2013-05-26) to reduce the cost of the
connectivity check at clone time, this time with the smart http
protocol.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-23 12:18:18 -07:00
Junio C Hamano
222b1212c1 remote-http: use argv-array
Instead of using a hand-managed argument array, use the argv-array
API to manage a dynamically formulated command line.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-09 12:34:16 -07:00
Jeff King
de89f0b25a remote-curl: die directly with http error messages
When we encounter an unknown http error (e.g., a 403), we
hand the error code to http_error, which then prints it with
error(). After that we die with the redundant message "HTTP
request failed".

Instead, let's just drop http_error entirely, which does
nothing but pass arguments to error(), and instead die
directly with a useful message.

So before:

  $ git clone https://example.com/repo.git
  Cloning into 'repo'...
  error: unable to access 'https://example.com/repo.git': The requested URL returned error: 403 Forbidden
  fatal: HTTP request failed

and after:

  $ git clone https://example.com/repo.git
  Cloning into 'repo'...
  fatal: unable to access 'https://example.com/repo.git': The requested URL returned error: 403 Forbidden

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 18:56:45 -07:00
Jeff King
67d2a7b5c5 http: simplify http_error helper function
This helper function should really be a one-liner that
prints an error message, but it has ended up unnecessarily
complicated:

  1. We call error() directly when we fail to start the curl
     request, so we must later avoid printing a duplicate
     error in http_error().

     It would be much simpler in this case to just stuff the
     error message into our usual curl_errorstr buffer
     rather than printing it ourselves. This means that
     http_error does not even have to care about curl's exit
     value (the interesting part is in the errorstr buffer
     already).

  2. We return the "ret" value passed in to us, but none of
     the callers actually cares about our return value. We
     can just drop this entirely.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 18:56:44 -07:00
Jeff King
d5ccbe4dfb remote-curl: consistently report repo url for http errors
When we report http errors in fetching the initial ref
advertisement, we show the full URL we attempted to use,
including "info/refs?service=git-upload-pack". While this
may be useful for debugging a broken server, it is
unnecessarily verbose and confusing for most cases, in which
the client user is not even the same person as the owner of
the repository.

Let's just show the repository URL; debugging can happen
with GIT_CURL_VERBOSE, which shows way more useful
information, anyway.

At the same time, let's also make sure to mention the
repository URL when we report failed authentication
(previously we said only "Authentication failed"). Knowing
the URL can help the user realize why authentication failed
(e.g., they meant to push to remote A, not remote B).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 18:56:43 -07:00
Jeff King
cfa0f4040d remote-curl: always show friendlier 404 message
When we get an http 404 trying to get the initial list of
refs from the server, we try to be helpful and remind the
user that update-server-info may need to be run. This looks
like:

  $ git clone https://github.com/non/existent
  Cloning into 'existent'...
  fatal: https://github.com/non/existent/info/refs?service=git-upload-pack not found: did you run git update-server-info on the server?

Suggesting update-server-info may be a good suggestion for
users who are in control of the server repo and who are
planning to set up dumb http. But for users of smart http,
and especially users who are not in control of the server
repo, the advice is useless and confusing.

Since most people are expected to use smart http these days,
it does not make sense to keep the update-server-info hint.

We not only drop the mention of update-server-info, but also
show only the main repo URL, not the full "info/refs" and
service parameter. These elements may be useful for
debugging a broken server configuration, but in the majority
of cases, users are not fetching from their own
repositories, but rather from other people's repositories;
they have neither the power nor interest to fix a broken
configuration, and the extra components just make the
message more confusing. Users who do want to debug can and
should use GIT_CURL_VERBOSE to get more complete information
on the actual URLs visited.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 18:56:43 -07:00
Jeff King
110bcdc3d0 remote-curl: let servers override http 404 advice
When we get an http 404 trying to get the initial list of
refs from the server, we try to be helpful and remind the
user that update-server-info may need to be run. This looks
like:

  $ git clone https://github.com/non/existent
  Cloning into 'existent'...
  fatal: https://github.com/non/existent/info/refs?service=git-upload-pack not found: did you run git update-server-info on the server?

Suggesting update-server-info may be a good suggestion for
users who are in control of the server repo and who are
planning to set up dumb http. But for users of smart http,
and especially users who are not in control of the server
repo, the advice is useless and confusing.

The previous patch taught remote-curl to show custom advice
from the server when it is available. When we have shown
messages from the server, we can also drop our custom
advice; what the server has to say is likely to be more
accurate and helpful.

We not only drop the mention of update-server-info, but also
show only the main repo URL, not the full "info/refs" and
service parameter. These elements may be useful for
debugging a broken server configuration, but again, anything
the server has provided is likely to be more useful (and one
can still use GIT_CURL_VERBOSE to get much more complete
debugging information).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 18:56:42 -07:00
Jeff King
426e70d4a1 remote-curl: show server content on http errors
If an http request to a remote git server fails, we show
only the http response code, or sometimes a custom message
for particular codes. This gives the server no opportunity
to offer a more detailed explanation of the reason for the
failure, or to give extra advice.

This patch teaches remote-curl to record and display the
body content of a failed http response. We only display such
responses when the content-type is advertised as text/plain,
as it is the most likely to look presentable on the user's
terminal (and it is hoped to be a good indication that the
message is intended for git clients, and not for a web
browser).
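
A rough sketch of that display logic (a fragment; it assumes the
body and its content-type have already been captured, and the
variable names are illustrative):

  if (!strncmp(content_type, "text/plain", 10)) {
          const char *p = body;
          while (*p) {
                  const char *nl = strchr(p, '\n');
                  int len = nl ? (int)(nl - p) : (int)strlen(p);
                  fprintf(stderr, "remote: %.*s\n", len, p);
                  p = nl ? nl + 1 : p + len;
          }
  }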

Each line of the new output is prepended with "remote:".
Example output may look like this (assuming the server is
configured to display such a helpful message):

  $ GIT_SMART_HTTP=0 git clone https://example.com/some/repo.git
  Cloning into 'repo'...
  remote: Sorry, fetching via dumb http is forbidden.
  remote: Please upgrade your git client to v1.6.6 or greater
  remote: and make sure that smart-http is enabled.
  error: The requested URL returned error: 403 while accessing https://example.com/some/repo.git/info/refs
  fatal: HTTP request failed

For the sake of simplicity, we only record and display these
errors during the initial fetch of the ref list, as that is
the initial contact with the server and where the most
common, interesting errors happen (and there is already
precedent, as that is the only place we currently massage
http error codes into more helpful messages).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 18:56:42 -07:00
Jeff King
2a4552021a remote-curl: always parse incoming refs
When remote-curl receives a list of refs from a server, it
keeps the whole buffer intact. When we get a "list" command,
we feed the result to get_remote_heads, and when we get a
"fetch" or "push" command, we feed it to fetch-pack or
send-pack, respectively.

If the HTTP response from the server is truncated for any
reason, we will get an incomplete ref advertisement. If we
then feed this incomplete list to fetch-pack, one of a few
things may happen:

  1. If the truncation is in a packet header, fetch-pack
     will notice the bogus line and complain.

  2. If the truncation is inside a packet, fetch-pack will
     keep waiting for us to send the rest of the packet,
     which we never will.

  3. If the truncation is at a packet boundary, fetch-pack
     will keep waiting for us to send the next packet, which
     we never will.

As a result, fetch-pack hangs, waiting for input.  However,
remote-curl believes it has sent all of the advertisement,
and therefore waits for fetch-pack to speak. The two
processes end up in a deadlock.

We do notice the broken ref list if we feed it to
get_remote_heads. So if git asks the helper to do a "list"
followed by a "fetch", we are safe; we'll abort during the
list operation, which parses the refs.

This patch teaches remote-curl to always parse and save the
incoming ref list when we read the ref advertisement from a
server. That means that we will always verify and abort
before even running fetch-pack (or send-pack) when reading a
corrupted list, even if we do not run the "list" command
explicitly.

Since we save the result, in the common case of running
"list" then "fetch", we do not do any extra parsing at all.
In the case of just a "fetch", we do an extra round of
parsing, but only once.

Note also that the "fetch" case will now also initialize
server_capabilities from the remote (in remote-curl; we
already would do so inside fetch-pack).  Doing "list+fetch"
already does this. It doesn't actually matter now, but the
new behavior is arguably more correct, should remote-curl
ever start caring about the server's capability list.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-24 00:17:38 -08:00
Jeff King
b8054bbee7 remote-curl: move ref-parsing code up in file
The ref-parsing functions are static. Let's move them up in
the file to be available to more functions, which will help
us with later refactoring.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-24 00:17:38 -08:00
Jeff King
5dbf43602d remote-curl: pass buffer straight to get_remote_heads
Until recently, get_remote_heads only knew how to read refs
from a file descriptor. To hack around this, we spawned a
thread (or forked a process) to write the buffer back to us.

Now that we can just pass it our buffer directly, we don't
have to use this hack anymore.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-24 00:17:38 -08:00
Jeff King
85edf4f58b teach get_remote_heads to read from a memory buffer
Now that we can read packet data from memory as easily as a
descriptor, get_remote_heads can take either one as a
source. This will allow further refactoring in remote-curl.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-24 00:17:38 -08:00
Jeff King
4981fe750b pkt-line: share buffer/descriptor reading implementation
The packet_read function reads from a descriptor. The
packet_get_line function is similar, but reads from an
in-memory buffer, and uses a completely separate
implementation. This patch teaches the generic packet_read
function to accept either source, and we can do away with
packet_get_line's implementation.
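
The resulting interface looks roughly like this (a sketch; the
exact parameters may differ):

  /* Read one packet from fd, or, if src_buf is non-NULL, from the
   * in-memory buffer (advancing *src_buf and *src_len as we go). */
  int packet_read(int fd, char **src_buf, size_t *src_len,
                  char *buffer, unsigned size, int options);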

There are two other differences to account for between the
old and new functions. The first is that we used to read
into a strbuf, but now read into a fixed size buffer. The
only two callers are fine with that, and in fact it
simplifies their code, since they can use the same
static-buffer interface as the rest of the packet_read_line
callers (and we provide a similar convenience wrapper for
reading from a buffer rather than a descriptor).

This is technically an externally-visible behavior change in
that we used to accept arbitrary sized packets up to 65532
bytes, and now cap out at LARGE_PACKET_MAX, 65520. In
practice this doesn't matter, as we use it only for parsing
smart-http headers (of which there is exactly one defined,
and it is small and fixed-size). And any extension headers
would be breaking the protocol to go over LARGE_PACKET_MAX
anyway.

The other difference is that packet_get_line would return
on error rather than dying. However, both callers of
packet_get_line are actually improved by dying.

The first caller does its own error checking, but we can
drop that; as a result, we'll actually get more specific
reporting about protocol breakage when packet_read dies
internally. The only downside is that packet_read will not
print the smart-http URL that failed, but that's not a big
deal; anybody not debugging can already see the remote's URL
already, and anybody debugging would want to run with
GIT_CURL_VERBOSE anyway to see way more information.

The second caller, which is just trying to skip past any
extra smart-http headers (of which there are none defined,
but which we allow to keep room for future expansion), did
not error check at all. As a result, it would treat an error
just like a flush packet. The resulting mess would generally
cause an error later in get_remote_heads, but now we get
error reporting much closer to the source of the problem.

Brown-paper-bag-fixes-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-24 00:14:15 -08:00
Jeff King
819b929d33 pkt-line: teach packet_read_line to chomp newlines
The packets sent during ref negotiation are all terminated
by newline; even though the code to chomp these newlines is
short, we end up doing it in a lot of places.

This patch teaches packet_read_line to auto-chomp the
trailing newline; this lets us get rid of a lot of inline
chomping code.
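
The chomp itself is tiny; conceptually (a fragment):

  if (len && buffer[len - 1] == '\n')
          buffer[--len] = '\0';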

As a result, some call-sites which are not reading
line-oriented data (e.g., when reading chunks of packfiles
alongside sideband) transition away from packet_read_line to
the generic packet_read interface. This patch converts all
of the existing callsites.

Since the function signature of packet_read_line does not
change (but its behavior does), there is a possibility of
new callsites being introduced in later commits, silently
introducing an incompatibility.  However, since a later
patch in this series will change the signature, such a
commit would have to be merged directly into this commit,
not to the tip of the series; we can therefore ignore the
issue.

This is an internal cleanup and should produce no change of
behavior in the normal case. However, there is one corner
case to note. Callers of packet_read_line have never been
able to tell the difference between a flush packet ("0000")
and an empty packet ("0004"), as both cause packet_read_line
to return a length of 0. Readers treat them identically,
even though Documentation/technical/protocol-common.txt says
we must not; it also says that implementations should not
send an empty pkt-line.

By stripping out the newline before the result gets to the
caller, we will now treat the newline-only packet ("0005\n")
the same as an empty packet, which in turn gets treated like
a flush packet. In practice this doesn't matter, as neither
empty nor newline-only packets are part of git's protocols
(at least not for the line-oriented bits, and readers who
are not expecting line-oriented packets will be calling
packet_read directly, anyway). But even if we do decide to
care about the distinction later, it is orthogonal to this
patch.  The right place to tighten would be to stop treating
empty packets as flush packets, and this change does not
make doing so any harder.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 13:42:21 -08:00
Jeff King
cdf4fb8e33 pkt-line: drop safe_write function
This is just write_or_die by another name. The one
distinction is that write_or_die will treat EPIPE specially
by suppressing error messages. That's fine, as we die by
SIGPIPE anyway (and in the off chance that it is disabled,
write_or_die will simulate it).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 13:42:21 -08:00
Shawn Pearce
4656bf47fc Verify Content-Type from smart HTTP servers
Before parsing a suspected smart-HTTP response verify the returned
Content-Type matches the standard. This protects a client from
attempting to process a payload that smells like a smart-HTTP
server response.
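
Conceptually the check is something like this (a sketch; the real
patch also has to handle the dumb-http fallback):

  #include <stdio.h>
  #include <string.h>

  /* a smart info/refs reply must be declared as the service
   * advertisement type, e.g. for upload-pack:
   *   application/x-git-upload-pack-advertisement */
  static int is_smart_advert(const char *content_type, const char *service)
  {
          char expect[64];
          snprintf(expect, sizeof(expect),
                   "application/x-%s-advertisement", service);
          return content_type && !strcmp(content_type, expect);
  }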

JGit has been doing this check on all responses since the dawn of
time. I mistakenly failed to include it in git-core when smart HTTP
was introduced. At the time I didn't know how to get the Content-Type
from libcurl. I punted, meant to circle back and fix this, and just
plain forgot about it.

Signed-off-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-04 10:22:36 -08:00
Junio C Hamano
fda800f0b1 Merge branch 'jk/maint-http-half-auth-fetch'
Finishing touches to squelch a compiler warning.

* jk/maint-http-half-auth-fetch:
  remote-curl.c: Fix a compiler warning
2012-11-21 11:59:29 -08:00
Ramsay Jones
377115493a remote-curl.c: Fix a compiler warning
In particular, gcc issues an "'gzip_size' might be used uninitialized"
warning (-Wuninitialized). However, this warning is a false positive,
since the 'gzip_size' variable would not, in fact, be used uninitialized.
In order to suppress the warning, we simply initialise the variable to
zero in its declaration.

Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-11-21 11:54:32 -08:00
Junio C Hamano
a9bb4e55a3 Merge branch 'jk/maint-http-half-auth-fetch'
Fixes fetch from servers that ask for auth only during the actual
packing phase. This is not really a recommended configuration, but it
cleans up the code at the same time.

* jk/maint-http-half-auth-fetch:
  remote-curl: retry failed requests for auth even with gzip
  remote-curl: hoist gzip buffer size to top of post_rpc
2012-11-20 10:30:17 -08:00
Jeff King
2e736fd5e9 remote-curl: retry failed requests for auth even with gzip
Commit b81401c taught the post_rpc function to retry the
http request after prompting for credentials. However, it
did not handle two cases:

  1. If we have a large request, we do not retry. That's OK,
     since we would have sent a probe (with retry) already.

  2. If we are gzipping the request, we do not retry. That
     was considered OK, because the intended use was for
     push (e.g., listing refs is OK, but actually pushing
     objects is not), and we never gzip on push.

This patch teaches post_rpc to retry even a gzipped request.
This has two advantages:

  1. It is possible to configure a "half-auth" state for
     fetching, where the set of refs and their sha1s are
     advertised, but one cannot actually fetch objects.

     This is not a recommended configuration, as it leaks
     some information about what is in the repository (e.g.,
     an attacker can try brute-forcing possible content in
     your repository and checking whether it matches your
     branch sha1). However, it can be slightly more
     convenient, since a no-op fetch will not require a
     password at all.

  2. It future-proofs us should we decide to ever gzip more
     requests.

Signed-off-by: Jeff King <peff@peff.net>
2012-10-31 07:45:13 -04:00
Jeff King
df126e108b remote-curl: hoist gzip buffer size to top of post_rpc
When we gzip the post data for a smart-http rpc request, we
compute the gzip body and its size inside the "use_gzip"
conditional. We keep track of the body after the conditional
ends, but not the size. Let's remember both, which will
enable us to retry failed gzip requests in a future patch.

Signed-off-by: Jeff King <peff@peff.net>
2012-10-31 07:45:08 -04:00
Jeff King
58f3f9893d Merge branch 'jk/maint-http-init-not-in-result-handler'
Further clean-up to the http codepath that picks up results after
cURL library is done with one request slot.

* jk/maint-http-init-not-in-result-handler:
  http: do not set up curl auth after a 401
  remote-curl: do not call run_slot repeatedly
2012-10-29 04:13:09 -04:00
Junio C Hamano
053a08f5bb Merge branch 'jk/maint-http-half-auth-push'
Fixes a regression in maint-1.7.11 (v1.7.11.7), maint (v1.7.12.1)
and master (v1.8.0-rc0).

* jk/maint-http-half-auth-push:
  http: fix segfault in handle_curl_result
2012-10-16 11:44:37 -07:00
Jeff King
1960897ebc http: do not set up curl auth after a 401
When we get an http 401, we prompt for credentials and put
them in our global credential struct. We also feed them to
the curl handle that produced the 401, with the intent that
they will be used on a retry.

When the code was originally introduced in commit 42653c0,
this was a necessary step. However, since dfa1725, we always
feed our global credential into every curl handle when we
initialize the slot with get_active_slot. So every further
request already feeds the credential to curl.

Moreover, accessing the slot here is somewhat dubious. After
the slot has produced a response, we don't actually control
it any more.  If we are using curl_multi, it may even have
been re-initialized to handle a different request.

It just so happens that we will reuse the curl handle within
the slot in such a case, and that because we only keep one
global credential, it will be the one we want.  So the
current code is not buggy, but it is misleading.

By cleaning it up, we can remove the slot argument entirely
from handle_curl_result, making it much more obvious that
slots should not be accessed after they are marked as
finished.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-10-12 09:45:15 -07:00
Jeff King
abf8df869c remote-curl: do not call run_slot repeatedly
Commit b81401c (http: prompt for credentials on failed POST)
taught post_rpc to call run_slot in a loop in order to retry
a request after asking the user for credentials. However,
after a call to run_slot we will have called
finish_active_slot. This means we have released the slot,
and we should no longer look at it.

As it happens, this does not cause any bugs in the current
code, since we know that we are not using curl_multi in this
code path, and therefore nobody will have taken over our
slot in the meantime. However, it is good form to actually
call get_active_slot again. It also future-proofs us against
changes in the http code.

We can do this by jumping back to a retry label at the top
of our function. We just need to reorder a few setup lines
that should not be repeated; everything else within the loop
is either idempotent, needs to be repeated, or in a path we
do not follow (e.g., we do not even try when large_request
is set, because we don't know how much data we might have
streamed from our helper program).
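
In outline (a sketch, using names from the surrounding discussion):

  retry:
          slot = get_active_slot();       /* fresh slot each attempt */
          /* ... idempotent setup of URL, headers, body ... */
          err = run_slot(slot);
          if (err == HTTP_REAUTH && !large_request)
                  goto retry;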

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-10-12 09:45:13 -07:00
Jeff King
188923f0d1 http: fix segfault in handle_curl_result
When we create an http active_request_slot, we can set its
"results" pointer back to local storage. The http code will
fill in the details of how the request went, and we can
access those details even after the slot has been cleaned
up.

Commit 8809703 (http: factor out http error code handling)
switched us from accessing our local results struct directly
to accessing it via the "results" pointer of the slot. That
means we're accessing the slot after it has been marked as
finished, defeating the whole purpose of keeping the results
storage separate.

Most of the time this doesn't matter, as finishing the slot
does not actually clean up the pointer. However, when using
curl's multi interface with the dumb-http revision walker,
we might actually start a new request before handing control
back to the original caller. In that case, we may reuse the
slot, zeroing its results pointer, and leading the original
caller to segfault while looking for its results inside the
slot.

Instead, we need to pass a pointer to our local results
storage to the handle_curl_result function, rather than
relying on the pointer in the slot struct. This matches what
the original code did before the refactoring (which did not
use a separate function, and therefore just accessed the
results struct directly).
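
That is, the call pattern becomes (a sketch):

  struct slot_results results;
  slot = get_active_slot();
  slot->results = &results;            /* http layer fills this in */
  run_active_slot(slot);
  err = handle_curl_result(&results);  /* not ...(slot) */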

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-10-12 09:42:31 -07:00
Junio C Hamano
fb1e4a85bf Merge branch 'jk/smart-http-switch'
Allows users to turn off smart-http when talking to dumb-only
servers.

* jk/smart-http-switch:
  remote-curl: let users turn off smart http
  remote-curl: rename is_http variable
2012-09-29 22:28:25 -07:00
Junio C Hamano
c318040a36 Merge branch 'sp/maint-http-enable-gzip'
Allows a more common 'gzip' Accept-Encoding to be used.

* sp/maint-http-enable-gzip:
  Enable info/refs gzip decompression in HTTP client
2012-09-29 22:28:20 -07:00
Jeff King
02572c2e3a remote-curl: let users turn off smart http
Usually there is no need for users to specify whether an
http remote is smart or dumb; the protocol is designed so
that a single initial request is made, and the client can
determine the server's capability from the response.

However, some misconfigured dumb-only servers may not like
the initial request by a smart client, as it contains a
query string. Until recently, commit 703e6e7 worked around
this by making a second request. However, that commit was
recently reverted due to its side effect of masking the
initial request's error code.

Since git has had that workaround for several years, we
don't know exactly how many such misconfigured servers are
out there. The reversion of 703e6e7 assumes they are rare
enough not to worry about. Still, that reversion leaves
somebody who does run into such a server with no escape
hatch at all. Let's give them an environment variable they
can tweak to perform the "dumb" request.
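
Someone stuck behind such a server can then do something like:

  $ GIT_SMART_HTTP=0 git fetch origin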

This is intentionally not a documented interface. It's
overly simple and is really there for debugging in case
somebody does complain about git not working with their
server. A real user-facing interface would entail a
per-remote or per-URL config variable.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-09-21 10:33:11 -07:00
Jeff King
243c329c1e remote-curl: rename is_http variable
We don't actually care whether the connection is http or
not; what we care about is whether it might be smart http.
Rename the variable to be more accurate, which will make it
easier to later make smart-http optional.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-09-20 10:48:45 -07:00
Shawn O. Pearce
aa90b9697f Enable info/refs gzip decompression in HTTP client
Some HTTP servers try to use gzip compression on the /info/refs
request to save transfer bandwidth. Repositories with many tags
may find the /info/refs response can be gzipped to 50% of the
original size due to the few but often repeated bytes used (hex
SHA-1 and commonly digits in tag names).

For most HTTP requests, enable "Accept-Encoding: gzip", ensuring
the /info/refs payload can use this encoding format.

Only request gzip encoding from servers. Although deflate is
supported by libcurl, most servers have standardized on gzip
encoding for compression as that is what most browsers support.
Asking for deflate increases request sizes by a few bytes, but is
unlikely to ever be used by a server.

Disable the Accept-Encoding header on probe RPCs as response bodies
are supposed to be exactly 4 bytes long, "0000". The HTTP headers
requesting and indicating compression use more space than the data
transferred in the body.
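
With libcurl this boils down to a single option per request (a
sketch; CURLOPT_ENCODING is the older spelling of
CURLOPT_ACCEPT_ENCODING, and "slot" stands for the active request):

  /* advertise gzip and let curl decode the response transparently */
  curl_easy_setopt(slot->curl, CURLOPT_ENCODING, "gzip");

  /* on probe RPCs, send no Accept-Encoding header at all */
  curl_easy_setopt(slot->curl, CURLOPT_ENCODING, NULL);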

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-09-20 10:26:50 -07:00
Shawn O. Pearce
6ac964a627 Revert "retry request without query when info/refs?query fails"
This reverts commit 703e6e76a1.

Retrying without the query parameter was added as a workaround
for a single broken HTTP server at git.debian.org[1]. The server
was misconfigured to route every request with a query parameter
into gitweb.cgi. Admins fixed the server's configuration within
16 hours of the bug report to the Git mailing list, but we still
patched Git with this fallback and have been paying for it since.

Most Git hosting services configure the smart HTTP protocol, and the
retry logic confuses users when there is a transient HTTP error, as
Git drops the real error from the smart HTTP request. Removing the
retry makes root causes easier to identify.

[1] http://thread.gmane.org/gmane.comp.version-control.git/137609

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-09-20 10:25:21 -07:00
Jeff King
b81401c1de http: prompt for credentials on failed POST
All of the smart-http GET requests go through the http_get_*
functions, which will prompt for credentials and retry if we
see an HTTP 401.

POST requests, however, do not go through any central point.
Moreover, it is difficult to retry in the general case; we
cannot assume the request body fits in memory or is even
seekable, and we don't know how much of it was consumed
during the attempt.

Most of the time, this is not a big deal; for both fetching
and pushing, we make a GET request before doing any POSTs,
so typically we figure out the credentials during the first
request, then reuse them during the POST. However, some
servers may allow a client to get the list of refs from
receive-pack without authentication, and then require
authentication when the client actually tries to POST the
pack.

This is not ideal, as the client may do a non-trivial amount
of work to generate the pack (e.g., delta-compressing
objects). However, for a long time it has been the
recommended example configuration in git-http-backend(1) for
setting up a repository with anonymous fetch and
authenticated push. This setup has always been broken
without putting a username into the URL. Prior to commit
986bbc0, it did work with a username in the URL, because git
would prompt for credentials before making any requests at
all. However, post-986bbc0, it is totally broken. Since it
has been advertised in the manpage for some time, we should
make sure it works.

Unfortunately, it is not as easy as simply calling post_rpc
again when it fails, due to the input issue mentioned above.
However, we can still make this specific case work by
retrying in two specific instances:

  1. If the request is large (bigger than LARGE_PACKET_MAX),
     we will first send a probe request with a single flush
     packet. Since this request is static, we can freely
     retry it.

  2. If the request is small and we are not using gzip, then
     we have the whole thing in-core, and we can freely
     retry.

That means we will not retry in some instances, including:

  1. If we are using gzip. However, we only do so when
     calling git-upload-pack, so it does not apply to
     pushes.

  2. If we have a large request, the probe succeeds, but
     then the real POST wants authentication. This is an
     extremely unlikely configuration and not worth worrying
     about.

While it might be nice to cover those instances, doing so
would be significantly more complex for very little
real-world gain. In the long run, we will be much better off
when curl learns to internally handle authentication as a
callback, and we can cleanly handle all cases that way.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-27 10:49:09 -07:00
Junio C Hamano
25047b8896 Merge branch 'jk/maint-push-progress' into maint
"git push" over smart-http lost progress output a few releases ago.

By Jeff King
* jk/maint-push-progress:
  t5541: test more combinations of --progress
  teach send-pack about --[no-]progress
  send-pack: show progress when isatty(2)
2012-05-10 10:08:54 -07:00
Junio C Hamano
cda03b6ad3 Merge branch 'it/fetch-pack-many-refs' into maint
When "git fetch" encounters repositories with too many references, the
command line of "fetch-pack" that is run by a helper (e.g. remote-curl) may
fail to hold all of them. Now such an internal invocation can feed the
references through the standard input of "fetch-pack".

By Ivan Todoroski
* it/fetch-pack-many-refs:
  remote-curl: main test case for the OS command line overflow
  fetch-pack: test cases for the new --stdin option
  remote-curl: send the refs to fetch-pack on stdin
  fetch-pack: new --stdin option to read refs from stdin

Conflicts:
	t/t5500-fetch-pack.sh
2012-05-01 21:12:36 -07:00
Jeff King
391b1f2003 teach send-pack about --[no-]progress
The send_pack function gets a "progress" flag saying "yes,
definitely show progress" or "no, definitely do not show
progress". This gets set properly by transport_push when
send_pack is called directly.

However, when the send-pack command is executed separately
(as it is for the remote-curl helper), there is no way to
tell it "definitely do this". As a result, we do not
properly respect "git push --no-progress" for smart-http
remotes; you will still get progress if stderr is a tty.

This patch teaches send-pack --progress and --no-progress,
and teaches remote-curl to pass the appropriate option to
override send-pack's isatty check. This fixes the
--no-progress case above, and as a bonus, also makes "git
push --progress" work when stderr is not a tty.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-05-01 09:40:30 -07:00
Ivan Todoroski
8150749da1 remote-curl: send the refs to fetch-pack on stdin
Now that we can throw an arbitrary number of refs at fetch-pack using
its --stdin option, we use it in the remote-curl helper to bypass the
OS command line length limit.
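
From the command line the same mechanism looks roughly like this
(an illustrative invocation, not taken from the patch):

  $ echo refs/heads/master | git fetch-pack --stdin "$URL"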

Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-10 14:49:18 -07:00
Junio C Hamano
f2120eb4db Merge branch 'sp/smart-http-failure-to-push' into maint
* sp/smart-http-failure-to-push:
  remote-curl: Fix push status report when all branches fail
2012-02-05 23:58:43 -08:00
Junio C Hamano
2bbf77dde2 Merge branch 'sp/smart-http-failure-to-push'
* sp/smart-http-failure-to-push:
  remote-curl: Fix push status report when all branches fail
2012-01-29 13:18:54 -08:00
Shawn O. Pearce
5238cbf656 remote-curl: Fix push status report when all branches fail
The protocol between transport-helper.c and remote-curl requires
remote-curl to always print a blank line after the push command
has run. If the blank line is omitted, transport-helper kills its
container process (the git push the user started) with exit(128)
and no message indicating a problem, assuming the helper already
printed reasonable error text to the console.

However if the remote rejects all branches with "ng" commands in the
report-status reply, send-pack terminates with non-zero status, and
in turn remote-curl exited with non-zero status before outputting
the blank line after the helper status printed by send-pack. No
error messages reach the user.

This caused users to see the following from git push over HTTP
when the remote side's update hook rejected the branch:

  $ git push http://... master
  Counting objects: 4, done.
  Delta compression using up to 6 threads.
  Compressing objects: 100% (2/2), done.
  Writing objects: 100% (3/3), 301 bytes, done.
  Total 3 (delta 0), reused 0 (delta 0)
  $

Always print a blank line after the send-pack process terminates,
ensuring the helper status report (if it was output) will be
correctly parsed by the calling transport-helper.c. This ensures
the helper doesn't abort before the status report can be shown to
the user.
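
In other words (a sketch; "client" stands for the send-pack child
process):

  rc = finish_command(&client);   /* reap send-pack */
  printf("\n");                   /* always terminate the status report */
  fflush(stdout);
  if (rc)
          exit(rc);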

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-01-20 10:14:32 -08:00
Clemens Buchacher
c207e34f77 fix push --quiet: add 'quiet' capability to receive-pack
Currently, git push --quiet produces some non-error output, e.g.:

 $ git push --quiet
 Unpacking objects: 100% (3/3), done.

This fixes a bug reported for the fedora git package:

 https://bugzilla.redhat.com/show_bug.cgi?id=725593

Reported-by: Jesse Keating <jkeating@redhat.com>
Cc: Todd Zullinger <tmz@pobox.com>

Commit 90a6c7d4 (propagate --quiet to send-pack/receive-pack)
introduced the --quiet option to receive-pack and made send-pack
pass that option. Older versions of receive-pack do not recognize
the option, however, and terminate immediately. The commit was
therefore reverted.

This change instead adds a 'quiet' capability to receive-pack,
which is backwards compatible.

In addition, this fixes push --quiet via http: A verbosity of 0
means quiet for remote helpers.

Reported-by: Tobias Ulmer <tobiasu@tmux.org>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-01-08 14:27:28 -08:00
Junio C Hamano
200888ef3b Merge branch 'jk/http-push-to-empty'
* jk/http-push-to-empty:
  remote-curl: don't pass back fake refs

Conflicts:
	remote-curl.c
2011-12-22 11:27:22 -08:00
Junio C Hamano
1d3a035d6d Merge branch 'jk/maint-push-over-dav'
* jk/maint-push-over-dav:
  http-push: enable "proactive auth"
  t5540: test DAV push with authentication

Conflicts:
	http.c
2011-12-19 16:05:59 -08:00
Jeff King
02f7914734 remote-curl: don't pass back fake refs
When receive-pack advertises its list of refs, it generally hides the
capabilities information after a NUL at the end of the first ref.

However, when we have an empty repository, there are no refs, and
therefore receive-pack writes a fake ref "capabilities^{}" with the
capabilities afterwards.
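
For example, the advertisement from an empty repository looks
roughly like this (capability list abbreviated):

  0000000000000000000000000000000000000000 capabilities^{}\0report-status delete-refs ofs-delta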

On the client side, git reads the result with get_remote_heads(). We pick
the capabilities from the end of the line, and then call check_ref() to
make sure the ref name is valid. We see that it isn't, and don't bother
adding it to our list of refs.

However, the call to check_ref() is enabled by passing the REF_NORMAL flag
to get_remote_heads. For the regular git transport, we pass REF_NORMAL in
get_refs_via_connect() if we are doing a push (since only receive-pack
uses this fake ref).  But in remote-curl, we never use this flag, and we
accept the fake ref as a real one, passing it back from the helper to the
parent git-push.

Most of the time this bug goes unnoticed, as the fake ref won't match our
refspecs. However, if "--mirror" is used, then we see it as remote cruft
to be pruned, and try to pass along a deletion refspec for it. Of course
this refspec has bogus syntax (because of the ^{}), and the helper
complains, aborting the push.

Let's have remote-curl mirror what the builtin get_refs_via_connect() does
(at least for the case of using git protocol; we can leave the dumb
info/refs reader as it is).

This also fixes pushing with --mirror to a smart-http remote that uses
alternates. The fake ".have" refs the server gives to avoid unnecessary
network transfer have a similar bad interaction with the machinery.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-12-19 11:21:29 -08:00
Jeff King
a4ddbc33d7 http-push: enable "proactive auth"
Before commit 986bbc08, git was proactive about asking for
http passwords. It assumed that if you had a username in
your URL, you would also want a password, and asked for it
before making any http requests.

However, this proactive prompt could interfere with the use of
.netrc (see 986bbc08 for details). It was also unnecessary, since
the http fetching code had learned to recognize an HTTP 401 and
prompt the user then.

Unfortunately, the http push-over-DAV code never learned to
recognize HTTP 401, and so was broken by this change. This
patch does a quick fix of re-enabling the "proactive auth"
strategy only for http-push, leaving the dumb http fetch and
smart-http as-is.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-12-13 16:34:44 -08:00
Jeff King
afe7c5ff1f drop "match" parameter from get_remote_heads
The get_remote_heads function reads the list of remote refs
during a git protocol session. It dates all the way back to
def88e9 (Commit first cut at "git-fetch-pack", 2005-07-04).
At that time, the idea was to come up with a list of refs we
were interested in, and then filter the list as we got it
from the remote side.

Later, 1baaae5 (Make maximal use of the remote refs,
2005-10-28) stopped filtering at the get_remote_heads layer,
letting us use the non-matching refs to find common history.

As a result, all callers now simply pass an empty match
list (and any future callers will want to do the same). So
let's drop these now-useless parameters.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-12-13 10:08:24 -08:00
Junio C Hamano
963838402a Merge branch 'jk/http-auth'
* jk/http-auth:
  http_init: accept separate URL parameter
  http: use hostname in credential description
  http: retry authentication failures for all http requests
  remote-curl: don't retry auth failures with dumb protocol
  improve httpd auth tests
  url: decode buffers that are not NUL-terminated
2011-10-17 21:37:15 -07:00
Jeff King
deba49377b http_init: accept separate URL parameter
The http_init function takes a "struct remote". Part of its
initialization procedure is to look at the remote's url and
grab some auth-related parameters. However, using the url
included in the remote is:

  - wrong; the remote-curl helper may have a separate,
    unrelated URL (e.g., from remote.*.pushurl). Looking at
    the remote's configured url is incorrect.

  - incomplete; http-fetch doesn't have a remote, so passes
    NULL. So http_init never gets to see the URL we are
    actually going to use.

  - cumbersome; http-push has a similar problem to
    http-fetch, but actually builds a fake remote just to
    pass in the URL.

Instead, let's just add a separate URL parameter to
http_init, and all three callsites can pass in the
appropriate information.
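
The new entry point is then roughly:

  void http_init(struct remote *remote, const char *url);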

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-10-15 21:18:36 -07:00
Junio C Hamano
7ed72d1813 Merge branch 'sp/smart-http-failure'
* sp/smart-http-failure:
  remote-curl: Fix warning after HTTP failure
2011-10-12 12:34:27 -07:00
Shawn O. Pearce
6cdf0223fe remote-curl: Fix warning after HTTP failure
If the HTTP connection is broken in the middle of a fetch or clone
body, the client presented a useless error message due to part of
the upload-pack->remote-curl pkt-line protocol leaking out of the
helper as the helper's "fetch result":

  error: RPC failed; result=18, HTTP code = 200
  fatal: The remote end hung up unexpectedly
  fatal: early EOF
  fatal: unpack-objects failed
  warning: https unexpectedly said: '0000'

Instead when the HTTP RPC fails discard all remaining data from
upload-pack and report nothing to the transport helper. Errors
were already sent to stderr.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-10-04 19:11:50 -07:00
Junio C Hamano
48f36dcd73 Sync with 1.7.6.2
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-09-06 11:42:12 -07:00
Junio C Hamano
5a277f3ff7 Revert "Merge branch 'cb/maint-quiet-push' into maint"
This reverts commit ffa69e61d3, reversing
changes made to 4a13c4d148.

Adding a new command line option to receive-pack and feed it from
send-pack is not an acceptable way to add features, as there is no
guarantee that your updated send-pack will be talking to updated
receive-pack. New features need to be added via the capability mechanism
negotiated over the protocol.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-09-06 11:10:41 -07:00
Junio C Hamano
ffa69e61d3 Merge branch 'cb/maint-quiet-push' into maint
* cb/maint-quiet-push:
  receive-pack: do not overstep command line argument array
  propagate --quiet to send-pack/receive-pack

Conflicts:
	Documentation/git-receive-pack.txt
	Documentation/git-send-pack.txt
2011-08-23 15:19:45 -07:00
Junio C Hamano
6dd5622f68 Merge branch 'cb/maint-quiet-push'
* cb/maint-quiet-push:
  receive-pack: do not overstep command line argument array
  propagate --quiet to send-pack/receive-pack

Conflicts:
	Documentation/git-receive-pack.txt
	Documentation/git-send-pack.txt
2011-08-17 17:26:05 -07:00
Junio C Hamano
a35d78c0f4 Merge branch 'jc/zlib-wrap' into maint
* jc/zlib-wrap:
  zlib: allow feeding more than 4GB in one go
  zlib: zlib can only process 4GB at a time
  zlib: wrap deflateBound() too
  zlib: wrap deflate side of the API
  zlib: wrap inflateInit2 used to accept only for gzip format
  zlib: wrap remaining calls to direct inflate/inflateEnd
  zlib wrapper: refactor error message formatter
2011-08-16 11:23:26 -07:00
Junio C Hamano
59d9ba869e Merge branch 'sr/transport-helper-fix'
* sr/transport-helper-fix: (21 commits)
  transport-helper: die early on encountering deleted refs
  transport-helper: implement marks location as capability
  transport-helper: Use capname for refspec capability too
  transport-helper: change import semantics
  transport-helper: update ref status after push with export
  transport-helper: use the new done feature where possible
  transport-helper: check status code of finish_command
  transport-helper: factor out push_update_refs_status
  fast-export: support done feature
  fast-import: introduce 'done' command
  git-remote-testgit: fix error handling
  git-remote-testgit: only push for non-local repositories
  remote-curl: accept empty line as terminator
  remote-helpers: export GIT_DIR variable to helpers
  git_remote_helpers: push all refs during a non-local export
  transport-helper: don't feed bogus refs to export push
  git-remote-testgit: import non-HEAD refs
  t5800: document some non-functional parts of remote helpers
  t5800: use skip_all instead of prereq
  t5800: factor out some ref tests
  ...
2011-08-01 15:00:14 -07:00
Clemens Buchacher
90a6c7d443 propagate --quiet to send-pack/receive-pack
Currently, git push --quiet produces some non-error output, e.g.:

 $ git push --quiet
 Unpacking objects: 100% (3/3), done.

Add the --quiet option to send-pack/receive-pack and pass it to
unpack-objects in the receive-pack codepath and to receive-pack in
the push codepath.

This fixes a bug reported for the fedora git package:

 https://bugzilla.redhat.com/show_bug.cgi?id=725593

Reported-by: Jesse Keating <jkeating@redhat.com>
Cc: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-31 18:45:41 -07:00
Junio C Hamano
ed16d0dbf1 Merge branch 'maint'
* maint:
  doc/fast-import: clarify notemodify command
  Documentation: minor grammatical fix in rev-list-options.txt
  Documentation: git-filter-branch honors replacement refs
  remote-curl: Add a format check to parsing of info/refs
  git-config: Remove extra whitespaces
2011-07-22 13:58:46 -07:00
Julian Phillips
6e8e67f307 remote-curl: Add a format check to parsing of info/refs
When parsing info/refs, no checks were applied that the file was in
the required format.  Since the file is read from a remote webserver,
this isn't guaranteed to be true.  Add a check that the file at least
only contains lines that consist of 40 characters followed by a tab
and then the ref name.
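
A sketch of such a check:

  #include <ctype.h>

  /* accept only "<40 hex digits> TAB <refname>" */
  static int valid_info_refs_line(const char *line)
  {
          int i;
          for (i = 0; i < 40; i++)
                  if (!isxdigit((unsigned char)line[i]))
                          return 0;
          return line[40] == '\t' && line[41] != '\0';
  }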

Signed-off-by: Julian Phillips <julian@quantumfyre.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-20 15:51:55 -07:00
Jeff King
28d0c1017a remote-curl: don't retry auth failures with dumb protocol
When fetching an http URL, we first try fetching info/refs
with an extra "service" parameter. This will work for a
smart-http server, or a dumb server which ignores extra
parameters when fetching files. If that fails, we retry
without the extra parameter to remain compatible with dumb
servers which didn't like our first request.

If the server returned a "401 Unauthorized", indicating that
the credentials we provided were not good, there is not much
point in retrying. With the current code, we just waste an
extra round trip to the HTTP server before failing.

But as the http code becomes smarter about throwing away
rejected credentials and re-prompting the user for new ones
(which it will later in this series), this will become more
confusing. At some point we will stop asking for credentials
to retry smart http, and will be asking for credentials to
retry dumb http. So now we're not only wasting an extra HTTP
round trip for something that is unlikely to work, but we're
making the user re-type their password for it.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-20 11:38:35 -07:00
Sverre Rabbelier
1843f0ce4d remote-curl: accept empty line as terminator
This went unnoticed because the transport helper infrastructure did
not check the return value of the helper, nor did the helper print
anything before exiting.

While at it also make sure that the stream doesn't end unexpectedly.

Signed-off-by: Sverre Rabbelier <srabbelier@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-19 11:17:47 -07:00
Junio C Hamano
eb4f4076aa Merge branch 'jc/zlib-wrap'
* jc/zlib-wrap:
  zlib: allow feeding more than 4GB in one go
  zlib: zlib can only process 4GB at a time
  zlib: wrap deflateBound() too
  zlib: wrap deflate side of the API
  zlib: wrap inflateInit2 used to accept only for gzip format
  zlib: wrap remaining calls to direct inflate/inflateEnd
  zlib wrapper: refactor error message formatter

Conflicts:
	sha1_file.c
2011-07-19 09:33:04 -07:00
Jim Meyering
dc4cd76710 plug a few coverity-spotted leaks
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-06-20 14:27:36 -07:00
Junio C Hamano
ef49a7a012 zlib: zlib can only process 4GB at a time
The sizes of objects we read from the repository and of data we try to
put into the repository are represented in "unsigned long", so that on larger
architectures we can handle objects that weigh more than 4GB.

But the interface defined in zlib.h to communicate with inflate/deflate
limits avail_in (how many bytes of input are we calling zlib with) and
avail_out (how many bytes of output from zlib are we ready to accept)
fields effectively to 4GB by defining their type to be uInt.

In many places in our code, we allocate a large buffer (e.g. mmap'ing a
large loose object file) and tell zlib its size by assigning the size to
avail_in field of the stream, but that will truncate the high octets of
the real size. The worst part of this story is that we often pass around
z_stream (the state object used by zlib) to keep track of the number of
used bytes in input/output buffer by inspecting these two fields, which
practically limits our callchain to the same 4GB limit.

Wrap z_stream in another structure git_zstream that can express avail_in
and avail_out in unsigned long. For now, just die() when the caller gives
a size that cannot be given to a single zlib call. In later patches in the
series, we would make git_inflate() and git_deflate() internally loop to
give callers an illusion that our "improved" version of zlib interface can
operate on a buffer larger than 4GB in one go.
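
The wrapper looks roughly like this (the field set shown is
illustrative):

  #include <zlib.h>

  struct git_zstream {
          z_stream z;              /* the real zlib state */
          unsigned long avail_in;  /* full-width in/out sizes */
          unsigned long avail_out;
          unsigned long total_in;
          unsigned long total_out;
          unsigned char *next_in;
          unsigned char *next_out;
  };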

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-06-10 11:52:15 -07:00
Junio C Hamano
225a6f1068 zlib: wrap deflateBound() too
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-06-10 11:18:17 -07:00
Junio C Hamano
55bb5c9147 zlib: wrap deflate side of the API
Wrap deflateInit, deflate, and deflateEnd for everybody, and the sole use
of deflateInit2 in remote-curl.c to tell the library to use gzip header
and trailer in git_deflate_init_gzip().
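
The gzip variant relies on zlib's convention that a windowBits
value biased by 16 requests gzip framing; a sketch:

  #include <zlib.h>

  static int deflate_init_gzip(z_stream *z, int level)
  {
          /* 15 is the maximum window; +16 asks for gzip header/trailer */
          return deflateInit2(z, level, Z_DEFLATED, 15 + 16,
                              8, Z_DEFAULT_STRATEGY);
  }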

There is only one caller that cares about the status from deflateEnd().
Introduce git_deflate_end_gently() to let that sole caller retrieve the
status and act on it (i.e. die) for now, but we would probably want to
make inflate_end/deflate_end die when they ran out of memory and get
rid of the _gently() kind.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-06-10 11:10:29 -07:00
Dan McGee
a04ff3ec32 http: make curl callbacks match contracts from curl header
Yes, these don't match perfectly with the void* first parameter of the
fread/fwrite in the standard library, but they do match the curl
expected method signature. This is needed when a refactor passes a
curl_write_callback around, which would otherwise give incorrect
parameter warnings.
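
That is, a callback destined to be stored as a curl_write_callback
is declared with curl's types (a sketch; the body is elided):

  #include <curl/curl.h>

  static size_t fwrite_buffer(char *ptr, size_t eltsize, size_t nmemb,
                              void *buffer_)
  {
          /* ... append eltsize * nmemb bytes from ptr to buffer_ ... */
          return eltsize * nmemb;
  }

  static curl_write_callback cb = fwrite_buffer;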

Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-05-04 13:30:28 -07:00
Junio C Hamano
a35138af75 Merge branch 'sp/maint-smart-http-sans-100-continue'
* sp/maint-smart-http-sans-100-continue:
  smart-http: Really never use Expect: 100-continue
2011-03-14 11:59:10 -07:00
Shawn O. Pearce
959dfcf42f smart-http: Really never use Expect: 100-continue
libcurl may choose to try and use Expect: 100-continue for
any type of POST, not just one using Transfer-Encoding: chunked.
Force it to disable this feature, as not all proxy servers support
100-continue and leaving it enabled can cause 1-second stalls during
the negotiation phase of fetch-pack/upload-pack.

In 206b099d26 ("smart-http: Don't use Expect: 100-Continue") we
tried to disable this for only large POST bodies, but it should be
disabled for every POST body.
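
With libcurl, an "Expect:" header with an empty value suppresses
the feature on a request (a fragment; "slot" stands for the
request being set up):

  headers = curl_slist_append(headers, "Expect:");
  curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, headers);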

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-14 11:58:51 -07:00
Junio C Hamano
28afcbfe8b Merge branch 'sp/maint-smart-http-sans-100-continue'
* sp/maint-smart-http-sans-100-continue:
  smart-http: Don't use Expect: 100-Continue
2011-02-27 21:58:29 -08:00
Shawn O. Pearce
206b099d26 smart-http: Don't use Expect: 100-Continue
Some HTTP/1.1 servers or proxies don't correctly implement the
100-Continue feature of HTTP/1.1.  It's a difficult feature to
implement right, and isn't commonly used by browsers, so many
developers may not even be aware that their server (or proxy)
doesn't honor it.

Within the smart HTTP protocol for Git we only use this newer
"Expect: 100-Continue" feature to probe for missing authentication
before uploading a large payload like a pack file during push.
If authentication is necessary, we expect the server to send the
401 Not Authorized response before the bulk data transfer starts,
thus saving the client bandwidth during the retry.

A different method to probe for working authentication is to send an
empty command list (that is just "0000") to $URL/git-receive-pack
or $URL/git-upload-pack.  All versions of both receive-pack and
upload-pack since the introduction of smart HTTP in Git 1.6.6
cleanly accept just a flush-pkt under --stateless-rpc mode, and
exit with success.

If HTTP level authentication is successful, the backend will return
an empty response, but with HTTP status code 200.  This enables
the client to continue with the transfer.
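
Such a probe is, roughly (an illustrative request, not a verbatim
capture):

  POST $URL/git-receive-pack HTTP/1.1
  Content-Type: application/x-git-receive-pack-request
  Content-Length: 4

  0000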

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-02-15 11:42:20 -08:00
Junio C Hamano
1c80c9b2cb Merge branch 'sp/fix-smart-http-deadlock-on-error'
* sp/fix-smart-http-deadlock-on-error:
  smart-http: Don't deadlock on server failure
2010-08-12 18:27:01 -07:00
Shawn O. Pearce
b4ee10f60f smart-http: Don't deadlock on server failure
If the remote HTTP server fails (e.g. returns 404 or 500) when we
posted the RPC to it, we won't have sent anything to the background
Git process that is supposed to handle the stream.  Because we
didn't send anything, its waiting for input from remote-curl, and
remote-curl cannot read its response payload because doing so would
lead to a deadlock.

Send the background task EOF on its input before we try to read
its response back, that way it will break out of its read loop
and terminate.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-08-06 15:30:16 -07:00
Junio C Hamano
3cc9caadf7 Merge branch 'rc/maint-curl-helper'
* rc/maint-curl-helper:
  remote-curl: ensure that URLs have a trailing slash
  http: make end_url_with_slash() public
  t5541-http-push: add test for URLs with trailing slash

Conflicts:
	remote-curl.c
2010-05-08 22:37:24 -07:00
Tay Ray Chuan
d8fab07208 remote-curl: ensure that URLs have a trailing slash
Previously, we blindly assumed that URLs passed to the remote-curl
helper did not end with a trailing slash.

Use the convenience function end_url_with_slash() from http.[ch] to
ensure that URLs have a trailing slash on invocation of the remote-curl
helper, and use the URL as one with a trailing slash throughout.

It is possible for users to pass a URL with a trailing slash to
remote-curl, by, say, setting it in remote.<name>.url in their git
config. The resulting requests have an empty path component (//) and may
break implementations of the http git protocol.
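
The helper is essentially (a sketch of the http.[ch] version):

  void end_url_with_slash(struct strbuf *buf, const char *url)
  {
          strbuf_addstr(buf, url);
          if (buf->len && buf->buf[buf->len - 1] != '/')
                  strbuf_addch(buf, '/');
  }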

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-09 21:16:11 -07:00
Scott Chacon
42653c09c8 Prompt for a username when an HTTP request 401s
When an HTTP request returns a 401, Git will currently fail with a
confusing message saying that it got a 401, which is not very
descriptive.

Currently, if a user wants to use Git over HTTP, they have to use one
URL with the username embedded in it (e.g. "http://user@host.com/repo.git")
for write access and another without the username for unauthenticated
read access (unless they want to be prompted for the password each
time). However, since HTTP servers return a 401 when an action
requires authentication, we can prompt for a username and password
when we see one, allowing a single URL to serve both purposes.

This patch changes http_request to prompt for the username and password,
then return HTTP_REAUTH so that http_get_strbuf can try again.  If it
gets a 401 even when a user/pass is supplied, http_request will now
return HTTP_NOAUTH, which remote-curl can then use to display a
clearer, less confusing error message.

Signed-off-by: Scott Chacon <schacon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-01 23:24:59 -07:00
Junio C Hamano
78d909a494 Merge branch 'tc/http-cleanup'
* tc/http-cleanup:
  remote-curl: init walker only when needed
  remote-curl: use http_fetch_ref() instead of walker wrapper
  http: init and cleanup separately from http-walker
  http-walker: cleanup more thoroughly
  http-push: remove "|| 1" to enable verbose check
  t554[01]-http-push: refactor, add non-ff tests
  t5541-http-push: check that ref is unchanged for non-ff test
2010-03-15 00:58:50 -07:00
Junio C Hamano
a886ba2801 Merge branch 'sp/maint-push-sideband' into maint
* sp/maint-push-sideband:
  receive-pack: Send internal errors over side-band #2
  t5401: Use a bare repository for the remote peer
  receive-pack: Send hook output over side band #2
  receive-pack: Wrap status reports inside side-band-64k
  receive-pack: Refactor how capabilities are shown to the client
  send-pack: demultiplex a sideband stream with status data
  run-command: support custom fd-set in async
  run-command: Allow stderr to be a caller supplied pipe

Conflicts:
	builtin-receive-pack.c
	run-command.c
	t/t5401-update-hooks.sh
2010-03-02 22:54:50 -08:00
Tay Ray Chuan
26e1e0b23a remote-curl: init walker only when needed
Invoke get_http_walker() only when fetching with the dumb protocol.
Additionally, add an invocation of walker_free() after we're done
using the walker.
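
The shape of the change, as a runnable standalone illustration
(struct walker here is a dummy; the real code calls Git's
get_http_walker() and walker_free()):

    #include <stdio.h>
    #include <stdlib.h>

    /* Dummy stand-in for Git's struct walker / get_http_walker(). */
    struct walker { int unused; };

    static struct walker *walker;

    static struct walker *get_walker(void)
    {
        if (!walker) {   /* create the helper only on first use */
            walker = calloc(1, sizeof(*walker));
            fprintf(stderr, "walker initialized\n");
        }
        return walker;
    }

    static void fetch_dumb(void)
    {
        get_walker();    /* only the dumb protocol needs a walker */
        /* ... walk commits, fetch loose objects and packs ... */
    }

    int main(void)
    {
        int dumb = 1;    /* demo assumption: dumb-protocol fetch */

        if (dumb)
            fetch_dumb();
        /* the smart-protocol path never touches the walker */

        free(walker);    /* mirrors the added walker_free() call */
        return 0;
    }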

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-02 11:10:36 -08:00
Tay Ray Chuan
aec4975602 remote-curl: use http_fetch_ref() instead of walker wrapper
The http-walker implementation of walker->fetch_ref() doesn't do
anything special compared to http_fetch_ref() anyway.

Remove the init_walker() invocation before fetching the ref, since we
aren't using the walker wrapper and don't need a walker instance
anymore.
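
In outline (http_fetch_ref is a Git helper, but the body below is an
illustrative stand-in, not a compilable excerpt of the patch):

    #include <stdio.h>

    struct ref { const char *name; };

    /* Stand-in: the plain helper we now call directly, instead of
     * constructing a walker just to reach its fetch_ref() wrapper. */
    static int http_fetch_ref(const char *url, struct ref *ref)
    {
        printf("fetching %s from %s\n", ref->name, url);
        return 0;
    }

    int main(void)
    {
        struct ref head = { "HEAD" };
        /* before: init_walker(); walker->fetch_ref(walker, ...); */
        return http_fetch_ref("http://host/repo.git/", &head);
    }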

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-02 11:10:36 -08:00
Tay Ray Chuan
888692b733 http: init and cleanup separately from http-walker
Previously, all our http operations were done with http-walker. With the
new remote-curl helper, we find ourselves using http methods outside of
http-walker - for example, fetching info/refs.

Accommodate this by separating the http_init() and http_cleanup()
invocations from http-walker.
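
Schematically, as a self-contained stand-in (the real http_init()
and http_cleanup() live in http.c and take more arguments; only the
pairing and its placement are the point here):

    #include <stdio.h>

    static void http_init(void)    { puts("http: curl initialized"); }
    static void http_cleanup(void) { puts("http: curl cleaned up"); }

    static void fetch_info_refs(void) { puts("GET info/refs"); } /* no walker */
    static void dumb_fetch(void)      { puts("walker-based fetch"); }

    int main(void)
    {
        http_init();        /* once, when the helper starts */
        fetch_info_refs();  /* http use outside of http-walker */
        dumb_fetch();       /* walker no longer owns init/cleanup */
        http_cleanup();     /* once, before exit */
        return 0;
    }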

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-03-02 11:10:36 -08:00
Junio C Hamano
76d44c8cfd Merge branch 'sp/maint-push-sideband' into sp/push-sideband
* sp/maint-push-sideband:
  receive-pack: Send hook output over side band #2
  receive-pack: Wrap status reports inside side-band-64k
  receive-pack: Refactor how capabilities are shown to the client
  send-pack: demultiplex a sideband stream with status data
  run-command: support custom fd-set in async
  run-command: Allow stderr to be a caller supplied pipe
  Update git fsck --full short description to mention packs

Conflicts:
	run-command.c
2010-02-05 21:08:53 -08:00