#include "git-compat-util.h"
#include "config.h"
#include "environment.h"
#include "git-zlib.h"
#include "hex.h"
#include "path.h"
#include "repository.h"
#include "refs.h"
#include "pkt-line.h"
#include "object.h"
#include "tag.h"
#include "exec-cmd.h"
#include "run-command.h"
#include "string-list.h"
#include "url.h"
#include "strvec.h"
#include "packfile.h"
#include "object-store-ll.h"
#include "protocol.h"
#include "date.h"
#include "write-or-die.h"

static const char content_type[] = "Content-Type";
static const char content_length[] = "Content-Length";
static const char last_modified[] = "Last-Modified";
static int getanyfile = 1;
static unsigned long max_request_buffer = 10 * 1024 * 1024;

static struct string_list *query_params;

struct rpc_service {
	const char *name;
	const char *config_name;
	unsigned buffer_input : 1;
	signed enabled : 2;
};

static struct rpc_service rpc_service[] = {
	{ "upload-pack", "uploadpack", 1, 1 },
	{ "receive-pack", "receivepack", 0, -1 },
};

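/*
 * Parse QUERY_STRING once into a cached list of name/value pairs;
 * when a name is repeated, the last value seen wins.
 */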
static struct string_list *get_parameters(void)
{
	if (!query_params) {
		const char *query = getenv("QUERY_STRING");

		CALLOC_ARRAY(query_params, 1);
		while (query && *query) {
			char *name = url_decode_parameter_name(&query);
			char *value = url_decode_parameter_value(&query);
			struct string_list_item *i;

			i = string_list_lookup(query_params, name);
			if (!i)
				i = string_list_insert(query_params, name);
			else
				free(i->util);
			i->util = value;
		}
	}
	return query_params;
}

static const char *get_parameter(const char *name)
{
	struct string_list_item *i;
	i = string_list_lookup(get_parameters(), name);
	return i ? i->util : NULL;
}

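/*
 * printf-style helper: format into a fixed 1024-byte buffer and write
 * the result to fd, dying if the formatted line would not fit.
 */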
__attribute__((format (printf, 2, 3)))
static void format_write(int fd, const char *fmt, ...)
{
	static char buffer[1024];

	va_list args;
	unsigned n;

	va_start(args, fmt);
	n = vsnprintf(buffer, sizeof(buffer), fmt, args);
	va_end(args);
	if (n >= sizeof(buffer))
		die("protocol error: impossibly long line");

	write_or_die(fd, buffer, n);
}

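/*
 * CGI response headers are accumulated in a strbuf by the hdr_*()
 * helpers below; end_headers() appends the terminating blank line and
 * flushes everything to stdout.
 */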
static void http_status(struct strbuf *hdr, unsigned code, const char *msg)
{
	strbuf_addf(hdr, "Status: %u %s\r\n", code, msg);
}

static void hdr_str(struct strbuf *hdr, const char *name, const char *value)
{
	strbuf_addf(hdr, "%s: %s\r\n", name, value);
}

static void hdr_int(struct strbuf *hdr, const char *name, uintmax_t value)
{
	strbuf_addf(hdr, "%s: %" PRIuMAX "\r\n", name, value);
}

static void hdr_date(struct strbuf *hdr, const char *name, timestamp_t when)
{
convert "enum date_mode" into a struct
In preparation for adding date modes that may carry extra
information beyond the mode itself, this patch converts the
date_mode enum into a struct.
Most of the conversion is fairly straightforward; we pass
the struct as a pointer and dereference the type field where
necessary. Locations that declare a date_mode can use a "{}"
constructor. However, the tricky case is where we use the
enum labels as constants, like:
show_date(t, tz, DATE_NORMAL);
Ideally we could say:
show_date(t, tz, &{ DATE_NORMAL });
but of course C does not allow that. Likewise, we cannot
cast the constant to a struct, because we need to pass an
actual address. Our options are basically:
1. Manually add a "struct date_mode d = { DATE_NORMAL }"
definition to each caller, and pass "&d". This makes
the callers uglier, because they sometimes do not even
have their own scope (e.g., they are inside a switch
statement).
2. Provide a pre-made global "date_normal" struct that can
be passed by address. We'd also need "date_rfc2822",
"date_iso8601", and so forth. But at least the ugliness
is defined in one place.
3. Provide a wrapper that generates the correct struct on
the fly. The big downside is that we end up pointing to
a single global, which makes our wrapper non-reentrant.
But show_date is already not reentrant, so it does not
matter.
This patch implements 3, along with a minor macro to keep
the size of the callers sane.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-06-25 16:55:02 +00:00
	const char *value = show_date(when, 0, DATE_MODE(RFC2822));
	hdr_str(hdr, name, value);
}

static void hdr_nocache(struct strbuf *hdr)
{
	hdr_str(hdr, "Expires", "Fri, 01 Jan 1980 00:00:00 GMT");
	hdr_str(hdr, "Pragma", "no-cache");
	hdr_str(hdr, "Cache-Control", "no-cache, max-age=0, must-revalidate");
}

static void hdr_cache_forever(struct strbuf *hdr)
{
	timestamp_t now = time(NULL);
	hdr_date(hdr, "Date", now);
	hdr_date(hdr, "Expires", now + 31536000);
	hdr_str(hdr, "Cache-Control", "public, max-age=31536000");
}

static void end_headers(struct strbuf *hdr)
{
	strbuf_add(hdr, "\r\n", 2);
	write_or_die(1, hdr->buf, hdr->len);
	strbuf_release(hdr);
}

__attribute__((format (printf, 2, 3)))
static NORETURN void not_found(struct strbuf *hdr, const char *err, ...)
{
	va_list params;

	http_status(hdr, 404, "Not Found");
	hdr_nocache(hdr);
	end_headers(hdr);

	va_start(params, err);
	if (err && *err)
		vfprintf(stderr, err, params);
	va_end(params);
	exit(0);
}

__attribute__((format (printf, 2, 3)))
static NORETURN void forbidden(struct strbuf *hdr, const char *err, ...)
{
	va_list params;

	http_status(hdr, 403, "Forbidden");
	hdr_nocache(hdr);
	end_headers(hdr);

	va_start(params, err);
	if (err && *err)
		vfprintf(stderr, err, params);
	va_end(params);
	exit(0);
}

static void select_getanyfile(struct strbuf *hdr)
{
	if (!getanyfile)
		forbidden(hdr, "Unsupported service: getanyfile");
}

static void send_strbuf(struct strbuf *hdr,
			const char *type, struct strbuf *buf)
{
	hdr_int(hdr, content_length, buf->len);
	hdr_str(hdr, content_type, type);
	end_headers(hdr);
	write_or_die(1, buf->buf, buf->len);
}

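/*
 * Stream a file from the repository on disk, advertising its size,
 * MIME type and mtime in the response headers.
 */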
static void send_local_file(struct strbuf *hdr, const char *the_type,
			    const char *name)
{
	char *p = git_pathdup("%s", name);
	size_t buf_alloc = 8192;
	char *buf = xmalloc(buf_alloc);
	int fd;
	struct stat sb;

	fd = open(p, O_RDONLY);
	if (fd < 0)
		not_found(hdr, "Cannot open '%s': %s", p, strerror(errno));
	if (fstat(fd, &sb) < 0)
		die_errno("Cannot stat '%s'", p);

	hdr_int(hdr, content_length, sb.st_size);
	hdr_str(hdr, content_type, the_type);
	hdr_date(hdr, last_modified, sb.st_mtime);
	end_headers(hdr);

	for (;;) {
		ssize_t n = xread(fd, buf, buf_alloc);
		if (n < 0)
			die_errno("Cannot read '%s'", p);
		if (!n)
			break;
		write_or_die(1, buf, n);
	}
	close(fd);
	free(buf);
	free(p);
}

static void get_text_file(struct strbuf *hdr, char *name)
{
	select_getanyfile(hdr);
	hdr_nocache(hdr);
	send_local_file(hdr, "text/plain", name);
}

static void get_loose_object(struct strbuf *hdr, char *name)
{
	select_getanyfile(hdr);
	hdr_cache_forever(hdr);
	send_local_file(hdr, "application/x-git-loose-object", name);
}

static void get_pack_file(struct strbuf *hdr, char *name)
{
	select_getanyfile(hdr);
	hdr_cache_forever(hdr);
	send_local_file(hdr, "application/x-git-packed-objects", name);
}

static void get_idx_file(struct strbuf *hdr, char *name)
{
	select_getanyfile(hdr);
	hdr_cache_forever(hdr);
	send_local_file(hdr, "application/x-git-packed-objects-toc", name);
}

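/*
 * Read the http.* configuration: http.getanyfile, http.maxrequestbuffer
 * and the per-service enable flags (e.g. http.uploadpack, http.receivepack).
 */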
static void http_config(void)
{
	int i, value = 0;
	struct strbuf var = STRBUF_INIT;

	git_config_get_bool("http.getanyfile", &getanyfile);
	git_config_get_ulong("http.maxrequestbuffer", &max_request_buffer);

	for (i = 0; i < ARRAY_SIZE(rpc_service); i++) {
		struct rpc_service *svc = &rpc_service[i];
		strbuf_addf(&var, "http.%s", svc->config_name);
		if (!git_config_get_bool(var.buf, &value))
			svc->enabled = value;
		strbuf_reset(&var);
	}

	strbuf_release(&var);
}

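/*
 * Map a "service=git-<name>" request to an rpc_service entry and apply
 * the enable rules; receive-pack is only enabled by default when the
 * request is authenticated (REMOTE_USER is set).
 */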
static struct rpc_service *select_service(struct strbuf *hdr, const char *name)
{
	const char *svc_name;
	struct rpc_service *svc = NULL;
	int i;

	if (!skip_prefix(name, "git-", &svc_name))
		forbidden(hdr, "Unsupported service: '%s'", name);

	for (i = 0; i < ARRAY_SIZE(rpc_service); i++) {
		struct rpc_service *s = &rpc_service[i];
		if (!strcmp(s->name, svc_name)) {
			svc = s;
			break;
		}
	}

	if (!svc)
		forbidden(hdr, "Unsupported service: '%s'", name);

	if (svc->enabled < 0) {
		const char *user = getenv("REMOTE_USER");
		svc->enabled = (user && *user) ? 1 : 0;
	}
	if (!svc->enabled)
		forbidden(hdr, "Service not enabled: '%s'", svc->name);
	return svc;
}

static void write_to_child(int out, const unsigned char *buf, ssize_t len, const char *prog_name)
{
	if (write_in_full(out, buf, len) < 0)
		die("unable to write to '%s'", prog_name);
}

/*
 * This is basically strbuf_read(), except that if we
 * hit max_request_buffer we die (we'd rather reject a
 * maliciously large request than chew up infinite memory).
 */
static ssize_t read_request_eof(int fd, unsigned char **out)
{
	size_t len = 0, alloc = 8192;
	unsigned char *buf = xmalloc(alloc);

	if (max_request_buffer < alloc)
		max_request_buffer = alloc;

	while (1) {
		ssize_t cnt;

		cnt = read_in_full(fd, buf + len, alloc - len);
		if (cnt < 0) {
			free(buf);
			return -1;
		}

		/* partial read from read_in_full means we hit EOF */
		len += cnt;
		if (len < alloc) {
			*out = buf;
			return len;
		}

		/* otherwise, grow and try again (if we can) */
		if (alloc == max_request_buffer)
			die("request was larger than our maximum size (%lu);"
			    " try setting GIT_HTTP_MAX_REQUEST_BUFFER",
			    max_request_buffer);

		alloc = alloc_nr(alloc);
		if (alloc > max_request_buffer)
			alloc = max_request_buffer;
		REALLOC_ARRAY(buf, alloc);
	}
}

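/*
 * Read exactly req_len bytes of request body (the CONTENT_LENGTH case),
 * refusing anything larger than max_request_buffer.
 */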
static ssize_t read_request_fixed_len(int fd, ssize_t req_len, unsigned char **out)
{
	unsigned char *buf = NULL;
	ssize_t cnt = 0;

	if (max_request_buffer < req_len) {
		die("request was larger than our maximum size (%lu): "
		    "%" PRIuMAX "; try setting GIT_HTTP_MAX_REQUEST_BUFFER",
		    max_request_buffer, (uintmax_t)req_len);
	}

	buf = xmalloc(req_len);
	cnt = read_in_full(fd, buf, req_len);
	if (cnt < 0) {
		free(buf);
		return -1;
	}
	*out = buf;
	return cnt;
}

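/*
 * Parse the CONTENT_LENGTH environment variable; a return value of -1
 * means no length was given and the body is read until EOF.
 */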
static ssize_t get_content_length(void)
{
	ssize_t val = -1;
	const char *str = getenv("CONTENT_LENGTH");

	if (str && *str && !git_parse_ssize_t(str, &val))
		die("failed to parse CONTENT_LENGTH: %s", str);
	return val;
}

static ssize_t read_request(int fd, unsigned char **out, ssize_t req_len)
{
	if (req_len < 0)
		return read_request_eof(fd, out);
	else
		return read_request_fixed_len(fd, req_len, out);
}

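/*
 * Gzip-inflate the request body read from stdin and feed the inflated
 * bytes to the service over the "out" pipe. With buffer_input set, the
 * whole compressed body is spooled into memory first (bounded by
 * max_request_buffer) rather than streamed, so a half-duplex HTTP front
 * end cannot deadlock against us while the service writes its response.
 */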
static void inflate_request(const char *prog_name, int out, int buffer_input, ssize_t req_len)
{
	git_zstream stream;
	unsigned char *full_request = NULL;
	unsigned char in_buf[8192];
	unsigned char out_buf[8192];
	unsigned long cnt = 0;
	int req_len_defined = req_len >= 0;
	size_t req_remaining_len = req_len;

	memset(&stream, 0, sizeof(stream));
	git_inflate_init_gzip_only(&stream);

	while (1) {
		ssize_t n;

		if (buffer_input) {
			if (full_request)
				n = 0; /* nothing left to read */
			else
				n = read_request(0, &full_request, req_len);
			stream.next_in = full_request;
		} else {
			ssize_t buffer_len;
			if (req_len_defined && req_remaining_len <= sizeof(in_buf))
				buffer_len = req_remaining_len;
			else
				buffer_len = sizeof(in_buf);
			n = xread(0, in_buf, buffer_len);
			stream.next_in = in_buf;
			if (req_len_defined && n > 0)
				req_remaining_len -= n;
		}

		if (n <= 0)
			die("request ended in the middle of the gzip stream");
		stream.avail_in = n;

		while (0 < stream.avail_in) {
			int ret;

			stream.next_out = out_buf;
			stream.avail_out = sizeof(out_buf);

			ret = git_inflate(&stream, Z_NO_FLUSH);
			if (ret != Z_OK && ret != Z_STREAM_END)
				die("zlib error inflating request, result %d", ret);

			n = stream.total_out - cnt;
			write_to_child(out, out_buf, stream.total_out - cnt, prog_name);
			cnt = stream.total_out;

			if (ret == Z_STREAM_END)
				goto done;
		}
	}

done:
	git_inflate_end(&stream);
	close(out);
    free(full_request);
}
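
The bounded-spooling idea described in the commit message above can be sketched in isolation. This is a minimal, illustrative sketch only, under stated assumptions: plain POSIX read(2), a hypothetical spool_stdin() helper, and a hard-coded cap standing in for the configurable 10MB default; the real code path uses read_request() and write_to_child() instead, and the limit is taken from the GIT_HTTP_MAX_REQUEST_BUFFER environment variable read in cmd_main() further down.

/*
 * Illustrative only: drain stdin into a bounded heap buffer so the
 * upstream writer (e.g. the webserver) is released before any output
 * is produced. "max_len" plays the role of the 10MB default limit.
 */
#include <stdlib.h>
#include <unistd.h>

static ssize_t spool_stdin(unsigned char **out, size_t max_len)
{
    unsigned char *buf = malloc(max_len);
    size_t len = 0;
    ssize_t n;

    if (!buf)
        return -1;
    while (len < max_len && (n = read(0, buf + len, max_len - len)) != 0) {
        if (n < 0) {
            free(buf);
            return -1;
        }
        len += n;
    }
    *out = buf;
    return len; /* a real caller would error out if the cap was hit */
}
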

static void copy_request(const char *prog_name, int out, ssize_t req_len)
{
    unsigned char *buf;
    ssize_t n = read_request(0, &buf, req_len);
    if (n < 0)
        die_errno("error reading request body");
    write_to_child(out, buf, n, prog_name);
    close(out);
    free(buf);
}

static void pipe_fixed_length(const char *prog_name, int out, size_t req_len)
{
    unsigned char buf[8192];
    size_t remaining_len = req_len;

    while (remaining_len > 0) {
        size_t chunk_length = remaining_len > sizeof(buf) ? sizeof(buf) : remaining_len;
        ssize_t n = xread(0, buf, chunk_length);
        if (n < 0)
            die_errno("Reading request failed");
        write_to_child(out, buf, n, prog_name);
        remaining_len -= n;
    }

    close(out);
}

static void run_service(const char **argv, int buffer_input)
{
    const char *encoding = getenv("HTTP_CONTENT_ENCODING");
    const char *user = getenv("REMOTE_USER");
    const char *host = getenv("REMOTE_ADDR");
    int gzipped_request = 0;
    struct child_process cld = CHILD_PROCESS_INIT;
    ssize_t req_len = get_content_length();

    if (encoding && (!strcmp(encoding, "gzip") || !strcmp(encoding, "x-gzip")))
        gzipped_request = 1;

    if (!user || !*user)
        user = "anonymous";
    if (!host || !*host)
        host = "(none)";

    if (!getenv("GIT_COMMITTER_NAME"))
        strvec_pushf(&cld.env, "GIT_COMMITTER_NAME=%s", user);
    if (!getenv("GIT_COMMITTER_EMAIL"))
        strvec_pushf(&cld.env,
                     "GIT_COMMITTER_EMAIL=%s@http.%s", user, host);

    strvec_pushv(&cld.args, argv);
    if (buffer_input || gzipped_request || req_len >= 0)
        cld.in = -1;
    cld.git_cmd = 1;
    cld.clean_on_exit = 1;
    cld.wait_after_clean = 1;
    if (start_command(&cld))
        exit(1);

    close(1);
    if (gzipped_request)
        inflate_request(argv[0], cld.in, buffer_input, req_len);
    else if (buffer_input)
        copy_request(argv[0], cld.in, req_len);
    else if (req_len >= 0)
        pipe_fixed_length(argv[0], cld.in, req_len);
    else
        close(0);

    if (finish_command(&cld))
        exit(1);
}

static int show_text_ref(const char *name, const struct object_id *oid,
                         int flag UNUSED, void *cb_data)
{
    const char *name_nons = strip_namespace(name);
    struct strbuf *buf = cb_data;
    struct object *o = parse_object(the_repository, oid);
    if (!o)
        return 0;

    strbuf_addf(buf, "%s\t%s\n", oid_to_hex(oid), name_nons);
    if (o->type == OBJ_TAG) {
        o = deref_tag(the_repository, o, name, 0);
        if (!o)
            return 0;
        strbuf_addf(buf, "%s\t%s^{}\n", oid_to_hex(&o->oid),
                    name_nons);
    }
    return 0;
}

static void get_info_refs(struct strbuf *hdr, char *arg UNUSED)
{
    const char *service_name = get_parameter("service");
    struct strbuf buf = STRBUF_INIT;

    hdr_nocache(hdr);

    if (service_name) {
        const char *argv[] = {NULL /* service name */,
            "--http-backend-info-refs",
            ".", NULL};
        struct rpc_service *svc = select_service(hdr, service_name);

        strbuf_addf(&buf, "application/x-git-%s-advertisement",
                    svc->name);
        hdr_str(hdr, content_type, buf.buf);
        end_headers(hdr);

        if (determine_protocol_version_server() != protocol_v2) {
            packet_write_fmt(1, "# service=git-%s\n", svc->name);
            packet_flush(1);
        }

        argv[0] = svc->name;
        run_service(argv, 0);
    } else {
        select_getanyfile(hdr);
        for_each_namespaced_ref(NULL, show_text_ref, &buf);
        send_strbuf(hdr, "text/plain", &buf);
    }
    strbuf_release(&buf);
}

static int show_head_ref(const char *refname, const struct object_id *oid,
                         int flag, void *cb_data)
{
    struct strbuf *buf = cb_data;

    if (flag & REF_ISSYMREF) {
        const char *target = resolve_ref_unsafe(refname,
                                                RESOLVE_REF_READING,
                                                NULL, NULL);

        if (target)
            strbuf_addf(buf, "ref: %s\n", strip_namespace(target));
    } else {
        strbuf_addf(buf, "%s\n", oid_to_hex(oid));
    }

    return 0;
}

static void get_head(struct strbuf *hdr, char *arg UNUSED)
{
    struct strbuf buf = STRBUF_INIT;

    select_getanyfile(hdr);
    head_ref_namespaced(show_head_ref, &buf);
    send_strbuf(hdr, "text/plain", &buf);
    strbuf_release(&buf);
}

static void get_info_packs(struct strbuf *hdr, char *arg UNUSED)
{
    size_t objdirlen = strlen(get_object_directory());
    struct strbuf buf = STRBUF_INIT;
    struct packed_git *p;
    size_t cnt = 0;

    select_getanyfile(hdr);
    for (p = get_all_packs(the_repository); p; p = p->next) {
        if (p->pack_local)
            cnt++;
    }

    strbuf_grow(&buf, cnt * 53 + 2);
    for (p = get_all_packs(the_repository); p; p = p->next) {
        if (p->pack_local)
            strbuf_addf(&buf, "P %s\n", p->pack_name + objdirlen + 6);
    }
    strbuf_addch(&buf, '\n');

    hdr_nocache(hdr);
    send_strbuf(hdr, "text/plain; charset=utf-8", &buf);
    strbuf_release(&buf);
}

static void check_content_type(struct strbuf *hdr, const char *accepted_type)
{
    const char *actual_type = getenv("CONTENT_TYPE");

    if (!actual_type)
        actual_type = "";

    if (strcmp(actual_type, accepted_type)) {
        http_status(hdr, 415, "Unsupported Media Type");
        hdr_nocache(hdr);
        end_headers(hdr);
        format_write(1,
                     "Expected POST with Content-Type '%s',"
                     " but received '%s' instead.\n",
                     accepted_type, actual_type);
        exit(0);
    }
}

static void service_rpc(struct strbuf *hdr, char *service_name)
{
    const char *argv[] = {NULL, "--stateless-rpc", ".", NULL};
    struct rpc_service *svc = select_service(hdr, service_name);
    struct strbuf buf = STRBUF_INIT;

    strbuf_reset(&buf);
    strbuf_addf(&buf, "application/x-git-%s-request", svc->name);
    check_content_type(hdr, buf.buf);

    hdr_nocache(hdr);

    strbuf_reset(&buf);
    strbuf_addf(&buf, "application/x-git-%s-result", svc->name);
    hdr_str(hdr, content_type, buf.buf);

    end_headers(hdr);

    argv[0] = svc->name;
    run_service(argv, svc->buffer_input);
    strbuf_release(&buf);
}

http-backend: fix die recursion with custom handler
When we die() in http-backend, we call a custom handler that
writes an HTTP 500 response to stdout, then reports the
error to stderr. Our routines for writing out the HTTP
response may themselves die, leading to us entering die()
again.
When it was originally written, that was OK; our custom
handler keeps a variable to notice this and does not
recurse. However, since cd163d4 (usage.c: detect recursion
in die routines and bail out immediately, 2012-11-14), the
main die() implementation detects recursion before we even
get to our custom handler, and bails without printing
anything useful.
We can handle this case by doing two things:
1. Installing a custom die_is_recursing handler that
allows us to enter up to one level of recursion. Only
the first call to our custom handler will try to write
out the error response. So if we die again, that is OK.
If we end up dying more than that, it is a sign that we
are in an infinite recursion.
2. Reporting the error to stderr before trying to write
out the HTTP response. In the current code, if we do
die() trying to write out the response, we'll exit
immediately from this second die(), and never get a
chance to output the original error (which is almost
certainly the more interesting one; the second die is
just going to be along the lines of "I tried to write
to stdout but it was closed").
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
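
A minimal, standalone sketch of the guard described above, under assumed demo names (dead_demo, is_recursing_demo, die_demo): the real implementation is die_webcgi() and die_webcgi_recursing() just below, installed through set_die_routine() and set_die_is_recursing_routine() in cmd_main(). The point is simply that one level of re-entry is tolerated, so the first handler invocation can still emit its 500 response even if writing that response dies.

/*
 * Illustrative only: allow exactly one re-entry into the death
 * handler. The first failure reports and tries to answer the client;
 * a failure while answering skips the response; anything deeper is
 * treated as infinite recursion.
 */
#include <stdio.h>
#include <stdlib.h>

static int dead_demo;

static int is_recursing_demo(void)
{
    return dead_demo++ > 1; /* third entry and beyond is fatal */
}

static void die_demo(const char *msg)
{
    if (is_recursing_demo())
        abort();
    fprintf(stderr, "fatal: %s\n", msg);            /* report to the log first */
    if (dead_demo <= 1)
        puts("Status: 500 Internal Server Error");  /* then try the client */
    exit(1);
}
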

static int dead;
static NORETURN void die_webcgi(const char *err, va_list params)
{
    if (dead <= 1) {
        struct strbuf hdr = STRBUF_INIT;
        report_fn die_message_fn = get_die_message_routine();

        die_message_fn(err, params);

        http_status(&hdr, 500, "Internal Server Error");
        hdr_nocache(&hdr);
        end_headers(&hdr);
    }
    exit(0); /* we successfully reported a failure ;-) */
}

static int die_webcgi_recursing(void)
{
    return dead++ > 1;
}

static char* getdir(void)
{
    struct strbuf buf = STRBUF_INIT;
    char *pathinfo = getenv("PATH_INFO");
    char *root = getenv("GIT_PROJECT_ROOT");
    char *path = getenv("PATH_TRANSLATED");

    if (root && *root) {
        if (!pathinfo || !*pathinfo)
            die("GIT_PROJECT_ROOT is set but PATH_INFO is not");
        if (daemon_avoid_alias(pathinfo))
            die("'%s': aliased", pathinfo);
        end_url_with_slash(&buf, root);
        if (pathinfo[0] == '/')
            pathinfo++;
        strbuf_addstr(&buf, pathinfo);
        return strbuf_detach(&buf, NULL);
    } else if (path && *path) {
        return xstrdup(path);
    } else
        die("No GIT_PROJECT_ROOT or PATH_TRANSLATED from server");
    return NULL;
}

static struct service_cmd {
    const char *method;
    const char *pattern;
    void (*imp)(struct strbuf *, char *);
} services[] = {
    {"GET", "/HEAD$", get_head},
    {"GET", "/info/refs$", get_info_refs},
    {"GET", "/objects/info/alternates$", get_text_file},
    {"GET", "/objects/info/http-alternates$", get_text_file},
    {"GET", "/objects/info/packs$", get_info_packs},
    {"GET", "/objects/[0-9a-f]{2}/[0-9a-f]{38}$", get_loose_object},
    {"GET", "/objects/[0-9a-f]{2}/[0-9a-f]{62}$", get_loose_object},
    {"GET", "/objects/pack/pack-[0-9a-f]{40}\\.pack$", get_pack_file},
    {"GET", "/objects/pack/pack-[0-9a-f]{64}\\.pack$", get_pack_file},
    {"GET", "/objects/pack/pack-[0-9a-f]{40}\\.idx$", get_idx_file},
    {"GET", "/objects/pack/pack-[0-9a-f]{64}\\.idx$", get_idx_file},

    {"POST", "/git-upload-pack$", service_rpc},
    {"POST", "/git-receive-pack$", service_rpc}
};

static int bad_request(struct strbuf *hdr, const struct service_cmd *c)
{
    const char *proto = getenv("SERVER_PROTOCOL");

    if (proto && !strcmp(proto, "HTTP/1.1")) {
        http_status(hdr, 405, "Method Not Allowed");
        hdr_str(hdr, "Allow",
                !strcmp(c->method, "GET") ? "GET, HEAD" : c->method);
    } else
        http_status(hdr, 400, "Bad Request");
    hdr_nocache(hdr);
    end_headers(hdr);
    return 0;
}

int cmd_main(int argc UNUSED, const char **argv UNUSED)
{
    char *method = getenv("REQUEST_METHOD");
    const char *proto_header;
    char *dir;
    struct service_cmd *cmd = NULL;
    char *cmd_arg = NULL;
    int i;
    struct strbuf hdr = STRBUF_INIT;

    set_die_routine(die_webcgi);
    set_die_is_recursing_routine(die_webcgi_recursing);

    if (!method)
        die("No REQUEST_METHOD from server");
    if (!strcmp(method, "HEAD"))
        method = "GET";
    dir = getdir();

    for (i = 0; i < ARRAY_SIZE(services); i++) {
        struct service_cmd *c = &services[i];
        regex_t re;
        regmatch_t out[1];
        int ret;

        if (regcomp(&re, c->pattern, REG_EXTENDED))
            die("Bogus regex in service table: %s", c->pattern);
        ret = regexec(&re, dir, 1, out, 0);
        regfree(&re);

        if (!ret) {
            size_t n;

            if (strcmp(method, c->method))
                return bad_request(&hdr, c);

            cmd = c;
            n = out[0].rm_eo - out[0].rm_so;
            cmd_arg = xmemdupz(dir + out[0].rm_so + 1, n - 1);
            dir[out[0].rm_so] = 0;
            break;
        }
    }

    if (!cmd)
        not_found(&hdr, "Request not supported: '%s'", dir);

    setup_path();
    if (!enter_repo(dir, 0))
        not_found(&hdr, "Not a git repository: '%s'", dir);
    if (!getenv("GIT_HTTP_EXPORT_ALL") &&
        access("git-daemon-export-ok", F_OK))
        not_found(&hdr, "Repository not exported: '%s'", dir);
    free(dir);

    http_config();
    max_request_buffer = git_env_ulong("GIT_HTTP_MAX_REQUEST_BUFFER",
                                       max_request_buffer);
    proto_header = getenv("HTTP_GIT_PROTOCOL");
    if (proto_header)
        setenv(GIT_PROTOCOL_ENVIRONMENT, proto_header, 0);

    cmd->imp(&hdr, cmd_arg);
    free(cmd_arg);
    return 0;
}