ServerName dummy
PidFile httpd.pid
DocumentRoot www
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog access.log common
ErrorLog error.log

<IfModule !mod_log_config.c>
	LoadModule log_config_module modules/mod_log_config.so
</IfModule>
<IfModule !mod_alias.c>
	LoadModule alias_module modules/mod_alias.so
</IfModule>
<IfModule !mod_cgi.c>
	LoadModule cgi_module modules/mod_cgi.so
</IfModule>
<IfModule !mod_env.c>
	LoadModule env_module modules/mod_env.so
</IfModule>
<IfModule !mod_rewrite.c>
	LoadModule rewrite_module modules/mod_rewrite.so
</IfModule>
<IfModule !mod_version.c>
	LoadModule version_module modules/mod_version.so
</IfModule>
<IfModule !mod_headers.c>
	LoadModule headers_module modules/mod_headers.so
</IfModule>
<IfModule !mod_setenvif.c>
	LoadModule setenvif_module modules/mod_setenvif.so
</IfModule>
t: run t5551 tests with both HTTP and HTTP/2

We have occasionally seen bugs that affect Git running only against an
HTTP/2 web server, not an HTTP one. For instance, b66c77a64e (http:
match headers case-insensitively when redacting, 2021-09-22). But since
we have no test coverage using HTTP/2, we only uncover these bugs in the
wild.

That commit gives a recipe for converting our Apache setup to support
HTTP/2, but:

  - it's not necessarily portable

  - we don't want to just test HTTP/2; we really want to do a variety of
    basic tests for _both_ protocols

This patch handles both problems by running a duplicate of t5551
(labeled as t5559 here) with an alternate-universe setup that enables
HTTP/2. So we'll continue to run t5551 as before, but run the same
battery of tests again with HTTP/2. If HTTP/2 isn't supported on a given
platform, then t5559 should bail during the webserver setup, and
gracefully skip all tests (unless GIT_TEST_HTTPD has been changed from
"auto" to "yes", where the point is to complain when webserver setup
fails).

In theory other http-related test scripts could benefit from the same
duplication, but doing t5551 should give us a reasonable check of basic
functionality, and would have caught both bugs we've seen in the wild
with HTTP/2.

A few notes on the implementation:

  - a script enables the server side config by calling enable_http2
    before starting the webserver. This avoids even trying to load any
    HTTP/2 config for t5551 (which is what lets it keep working with
    regular HTTP even on systems that don't support it). This also sets
    a prereq which can be used by individual tests.

  - As discussed in b66c77a64e, the http2 module isn't compatible with
    the "prefork" mpm, so we need to pick something else. I chose
    "event" here, which works on my Debian system, but it's possible
    there are platforms which would prefer something else. We can adjust
    that later if somebody finds such a platform.

  - The test "large fetch-pack requests can be sent using chunked
    encoding" makes sure we use a chunked transfer-encoding by looking
    for that header in the trace. But since HTTP/2 has its own streaming
    mechanisms, we won't find such a header. We could skip the test
    entirely by marking it with !HTTP2. But there's some value in making
    sure that the fetch itself succeeded. So instead, we'll confirm that
    either we're using HTTP2 _or_ we saw the expected chunked header.

  - the redaction tests fail under HTTP/2 with recent versions of curl.
    This is a bug! I've marked them with !HTTP2 here to skip them under
    t5559 for the moment. Using test_expect_failure would be more
    appropriate, but would require a bunch of boilerplate. Since we'll
    be fixing them momentarily, let's just skip them for now to keep the
    test suite bisectable, and we can re-enable them in the commit that
    fixes the bug.

  - one alternative layout would be to push most of t5551 into a
    lib-t5551.sh script, then source it from both t5551 and t5559.
    Keeping t5551 intact seemed a little simpler, as it's one less level
    of indirection for people fixing bugs/regressions in the non-HTTP/2
    tests.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
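The enable_http2-plus-prereq pattern described above can be sketched in
plain shell. This is a hypothetical illustration, not the actual
test-lib code: the function and variable names (HTTPD_PARA,
TEST_PREREQS, have_prereq) are made up for the example.

```shell
# Hypothetical sketch of the enable_http2 pattern; names are
# illustrative, not the real test-lib implementation.
HTTPD_PARA=""
TEST_PREREQS=""

enable_http2 () {
	# pass "-D HTTP2" so <IfDefine HTTP2> sections of apache.conf apply
	HTTPD_PARA="$HTTPD_PARA -D HTTP2"
	# record a prereq that individual tests can check
	TEST_PREREQS="$TEST_PREREQS HTTP2"
}

have_prereq () {
	case " $TEST_PREREQS " in
	*" $1 "*) return 0 ;;
	*) return 1 ;;
	esac
}

enable_http2
have_prereq HTTP2 && echo "HTTP2 prereq set"
```

A test guarded this way runs under t5559 (where enable_http2 was
called) and is skipped under plain t5551.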
<IfDefine HTTP2>
	LoadModule http2_module modules/mod_http2.so
	Protocols h2 h2c
</IfDefine>
<IfModule !mod_auth_basic.c>
	LoadModule auth_basic_module modules/mod_auth_basic.so
</IfModule>
<IfModule !mod_authn_file.c>
	LoadModule authn_file_module modules/mod_authn_file.so
</IfModule>
<IfModule !mod_authz_user.c>
	LoadModule authz_user_module modules/mod_authz_user.so
</IfModule>
<IfModule !mod_authz_host.c>
	LoadModule authz_host_module modules/mod_authz_host.so
</IfModule>
add basic http proxy tests

We do not test our http proxy functionality at all in the test suite, so
this is a pretty big blind spot. Let's at least add a basic check that
we can go through an authenticating proxy to perform a clone.

A few notes on the implementation:

  - I'm using a single apache instance to proxy to itself. This seems to
    work fine in practice, and we can check with a test that this rather
    unusual setup is doing what we expect.

  - I've put the proxy tests into their own script, and it's the only
    one which loads the apache proxy config. If any platform can't
    handle this (e.g., doesn't have the right modules), the start_httpd
    step should fail and gracefully skip the rest of the script (but all
    the other http tests in existing scripts will continue to run).

  - I used a separate passwd file to make sure we don't ever get
    confused between proxy and regular auth credentials. It's using the
    antiquated crypt() format. This is a terrible choice security-wise
    in the modern age, but it's what our existing passwd file uses, and
    should be portable. It would probably be reasonable to switch both
    of these to bcrypt, but we can do that in a separate patch.

  - On the client side, we test two situations with credentials: when
    they are present in the url, and when the username is present but we
    prompt for the password. I think we should be able to handle the
    case that _neither_ is present, but an HTTP 407 causes us to prompt
    for them. However, this doesn't seem to work. That's either a bug,
    or at the very least an opportunity for a feature, but I punted on
    it for now. The point of this patch is just getting basic coverage,
    and we can explore possible deficiencies later.

  - this doesn't work with LIB_HTTPD_SSL. This probably would be
    valuable to have, as https over an http proxy is totally different
    (it uses CONNECT to tunnel the session). But adding in
    mod_proxy_connect and some basic config didn't seem to work for me,
    so I punted for now. Much of the rest of the test suite does not
    currently work with LIB_HTTPD_SSL either, so we shouldn't be making
    anything much worse here.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
<IfDefine PROXY>
	<IfModule !mod_proxy.c>
		LoadModule proxy_module modules/mod_proxy.so
	</IfModule>
	<IfModule !mod_proxy_http.c>
		LoadModule proxy_http_module modules/mod_proxy_http.so
	</IfModule>
	ProxyRequests On
	<Proxy "*">
		AuthType Basic
		AuthName "proxy-auth"
		AuthUserFile proxy-passwd
		Require valid-user
	</Proxy>
</IfDefine>
<IfModule !mod_authn_core.c>
	LoadModule authn_core_module modules/mod_authn_core.so
</IfModule>
<IfModule !mod_authz_core.c>
	LoadModule authz_core_module modules/mod_authz_core.so
</IfModule>
<IfModule !mod_access_compat.c>
	LoadModule access_compat_module modules/mod_access_compat.so
</IfModule>
<IfModule !mod_unixd.c>
	LoadModule unixd_module modules/mod_unixd.so
</IfModule>
<IfDefine HTTP2>
	<IfModule !mod_mpm_event.c>
		LoadModule mpm_event_module modules/mod_mpm_event.so
	</IfModule>
</IfDefine>
<IfDefine !HTTP2>
	<IfModule !mod_mpm_prefork.c>
		LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
	</IfModule>
</IfDefine>
PassEnv GIT_VALGRIND
PassEnv GIT_VALGRIND_OPTIONS
PassEnv GNUPGHOME
t: support clang/gcc AddressSanitizer

When git is compiled with "-fsanitize=address" (using clang or gcc >=
4.8), all invocations of git will check for buffer overflows. This is
similar to running with valgrind, except that it is more thorough
(because of the compiler support, function-local buffers can be checked,
too) and runs much faster (making it much less painful to run the whole
test suite with the checks turned on).

Unlike valgrind, the magic happens at compile-time, so we don't need the
same infrastructure in the test suite that we did to support --valgrind.
But there are two things we can help with:

  1. On some platforms, the leak-detector is on by default, and causes
     every invocation of "git init" (and thus every test script) to
     fail. Since running git with the leak detector is pointless, let's
     shut it off automatically in the tests, unless the user has already
     configured it.

  2. When apache runs a CGI, it clears the environment of unknown
     variables. This means that the $ASAN_OPTIONS config doesn't make it
     to git-http-backend, and it dies due to the leak detector. Let's
     mark the variable as OK for apache to pass.

With these two changes, running

    make CC=clang CFLAGS=-fsanitize=address test

works out of the box.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
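Point (1) above boils down to a default-only assignment of ASAN_OPTIONS.
A minimal shell sketch (no compiler or apache involved; the unset at the
top only makes the demonstration deterministic and is not part of the
described behavior):

```shell
# Sketch of shutting off ASan's leak detector unless the user has
# already configured their own ASAN_OPTIONS (mirrors point 1 above).
unset ASAN_OPTIONS          # for a deterministic demo only
: "${ASAN_OPTIONS=detect_leaks=0}"   # default-only: keeps any user value
export ASAN_OPTIONS
echo "$ASAN_OPTIONS"
```

The `: "${VAR=default}"` idiom assigns only when the variable is unset,
which is exactly the "unless the user has already configured it"
semantics the message describes. Point (2) is the `PassEnv
ASAN_OPTIONS` line in the config below.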
PassEnv ASAN_OPTIONS
PassEnv LSAN_OPTIONS
PassEnv GIT_TRACE
PassEnv GIT_CONFIG_NOSYSTEM
PassEnv GIT_TEST_SIDEBAND_ALL
PassEnv LANG
PassEnv LC_ALL
Alias /dumb/ www/
Alias /auth/dumb/ www/auth/dumb/
<LocationMatch /smart/>
	SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
	SetEnv GIT_HTTP_EXPORT_ALL
</LocationMatch>
<LocationMatch /smart_noexport/>
	SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
</LocationMatch>
<LocationMatch /smart_custom_env/>
	SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
	SetEnv GIT_HTTP_EXPORT_ALL
	SetEnv GIT_COMMITTER_NAME "Custom User"
	SetEnv GIT_COMMITTER_EMAIL custom@example.com
</LocationMatch>
<LocationMatch /smart_namespace/>
	SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
	SetEnv GIT_HTTP_EXPORT_ALL
	SetEnv GIT_NAMESPACE ns
</LocationMatch>
<LocationMatch /smart_cookies/>
	SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
	SetEnv GIT_HTTP_EXPORT_ALL
	Header set Set-Cookie name=value
</LocationMatch>
<LocationMatch /smart_headers/>
t/lib-httpd: bump required apache version to 2.4

Apache 2.4 has been out since early 2012, almost 11 years. And its
predecessor, 2.2, has been out of support since its last release in
2017, over 5 years ago. The last mention on the mailing list was from
around the same time, in this thread:

  https://lore.kernel.org/git/20171231023234.21215-1-tmz@pobox.com/

We can probably assume that 2.4 is available everywhere. And the stakes
are fairly low, as the worst case is that such a platform would skip the
http tests.

This lets us clean up a few minor version checks in the config file, but
also revert f1f2b45be0 (tests: adjust the configuration for Apache 2.2,
2016-05-09). Its technique isn't _too_ bad, but certainly required a bit
more explanation than the 2.4 version it replaced. I manually confirmed
that the test in t5551 still behaves as expected (if you replace
"cadabra" with "foo", the server correctly rejects the request).

It will also help future patches which will no longer have to deal with
conditional config for this old version.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
	<RequireAll>
		Require expr %{HTTP:x-magic-one} == 'abra'
		Require expr %{HTTP:x-magic-two} == 'cadabra'
	</RequireAll>
	SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
	SetEnv GIT_HTTP_EXPORT_ALL
</LocationMatch>
t/lib-httpd: avoid using macOS' sed

Among other differences relative to GNU sed, macOS' sed always ends its
output with a trailing newline, even if the input did not have such a
trailing newline.

Surprisingly, this makes three httpd-based tests fail on macOS: t5616,
t5702 and t5703. ("Surprisingly" because those tests have been around
for some time, but apparently nobody runs them on macOS with a working
Apache2 setup.)

The reason is that we use `sed` in those tests to filter the response of
the web server. Apart from the fact that we use GNU constructs (such as
using a space after the `c` command instead of a backslash and a
newline), we have another problem: macOS' sed emits LF-only newlines
while webservers are supposed to use CR/LF ones.

Even worse, t5616 uses `sed` to replace a binary part of the response
with a new binary part (kind of hoping that the replaced binary part
does not contain a 0x0a byte which would be interpreted as a newline).
To that end, it calls on Perl to read the binary pack file and
hex-encode it, then calls on `sed` to prefix every hex digit pair with a
`\x` in order to construct the text that the `c` statement of the `sed`
invocation is supposed to insert. So we call Perl and sed to construct a
sed statement. The final nail in the coffin is that macOS' sed does not
even interpret those `\x<hex>` constructs.

Let's just replace all of that by Perl snippets. With Perl, at least, we
do not have to deal with GNU vs macOS semantics, we do not have to worry
about unwanted trailing newlines, and we do not have to spawn commands
to construct arguments for other commands to be spawned (i.e. we can
avoid a whole lot of shell scripting complexity).

The upshot is that this fixes t5616, t5702 and t5703 on macOS with
Apache2.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
<LocationMatch /one_time_perl/>
	SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
	SetEnv GIT_HTTP_EXPORT_ALL
</LocationMatch>
<LocationMatch /smart_v0/>
	SetEnv GIT_EXEC_PATH ${GIT_EXEC_PATH}
	SetEnv GIT_HTTP_EXPORT_ALL
	SetEnv GIT_PROTOCOL
</LocationMatch>
ScriptAlias /smart/incomplete_length/git-upload-pack incomplete-length-upload-pack-v2-http.sh/
ScriptAlias /smart/incomplete_body/git-upload-pack incomplete-body-upload-pack-v2-http.sh/
send-pack: complain about "expecting report" with --helper-status

When pushing to a server which erroneously omits the final ref-status
report, the client side should complain about the refs for which we
didn't receive the status (because we can't just assume they were
updated). This works over most transports like ssh, but for http we'll
print a very misleading "Everything up-to-date".

It works for ssh because send-pack internally sets the status of each
ref to REF_STATUS_EXPECTING_REPORT, and then if the server doesn't tell
us about a particular ref, it will stay at that value. When we print the
final status table, we'll see that we're still on EXPECTING_REPORT and
complain then.

But for http, we go through remote-curl, which invokes send-pack with
"--stateless-rpc --helper-status". The latter option causes send-pack to
return a machine-readable list of ref statuses to the remote helper. But
ever since its inception in de1a2fdd38 (Smart push over HTTP: client
side, 2009-10-30), the send-pack code has simply omitted mention of any
ref which ended up in EXPECTING_REPORT.

In the remote helper, we then take the absence of any status report from
send-pack to mean that the ref was not even something we tried to send,
and thus it prints "Everything up-to-date". Fortunately it does detect
the eventual non-zero exit from send-pack, and propagates that in its
own non-zero exit code. So at least a careful script invoking "git push"
would notice the failure. But sending the misleading message on stderr
is certainly confusing for humans (not to mention the machine-readable
"push --porcelain" output, though again, any careful script should be
checking the exit code from push, too).

Nobody seems to have noticed because the server in this instance has to
be misbehaving: it has promised to support the ref-status capability
(otherwise the client will not set EXPECTING_REPORT at all), but didn't
send us any. If the connection were simply cut, then send-pack would
complain about getting EOF while trying to read the status. But if the
server actually sends a flush packet (i.e., saying "now you have all of
the ref statuses" without actually sending any), then the client ends up
in this confused situation.

The fix is simple: we should return an error message from "send-pack
--helper-status", just like we would for any other per-ref error
condition (in the test I included, the server simply omits all ref
status responses, but a more insidious version of this would skip only
some of them).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
ScriptAlias /smart/no_report/git-receive-pack error-no-report.sh/
ScriptAliasMatch /error_git_upload_pack/(.*)/git-upload-pack error.sh/
ScriptAliasMatch /smart_*[^/]*/(.*) ${GIT_EXEC_PATH}/git-http-backend/$1
ScriptAlias /broken_smart/ broken-smart-http.sh/
ScriptAlias /error_smart/ error-smart-http.sh/
ScriptAlias /error/ error.sh/
ScriptAliasMatch /one_time_perl/(.*) apply-one-time-perl.sh/$1
<Directory ${GIT_EXEC_PATH}>
	Options FollowSymlinks
</Directory>
<Files incomplete-length-upload-pack-v2-http.sh>
	Options ExecCGI
</Files>
<Files incomplete-body-upload-pack-v2-http.sh>
	Options ExecCGI
</Files>
<Files error-no-report.sh>
	Options ExecCGI
</Files>
<Files broken-smart-http.sh>
	Options ExecCGI
</Files>
<Files error-smart-http.sh>
	Options ExecCGI
</Files>
<Files error.sh>
	Options ExecCGI
</Files>
t/lib-httpd: avoid using macOS' sed
Among other differences relative to GNU sed, macOS' sed always ends its
output with a trailing newline, even if the input did not have such a
trailing newline.
Surprisingly, this makes three httpd-based tests fail on macOS: t5616,
t5702 and t5703. ("Surprisingly" because those tests have been around
for some time, but apparently nobody runs them on macOS with a working
Apache2 setup.)
The reason is that we use `sed` in those tests to filter the response of
the web server. Apart from the fact that we use GNU constructs (such as
using a space after the `c` command instead of a backslash and a
newline), we have another problem: macOS' sed LF-only newlines while
webservers are supposed to use CR/LF ones.
Even worse, t5616 uses `sed` to replace a binary part of the response
with a new binary part (kind of hoping that the replaced binary part
does not contain a 0x0a byte which would be interpreted as a newline).
To that end, it calls on Perl to read the binary pack file and
hex-encode it, then calls on `sed` to prefix every hex digit pair with a
`\x` in order to construct the text that the `c` statement of the `sed`
invocation is supposed to insert. So we call Perl and sed to construct a
sed statement. The final nail in the coffin is that macOS' sed does not
even interpret those `\x<hex>` constructs.
Let's just replace all of that by Perl snippets. With Perl, at least, we
do not have to deal with GNU vs macOS semantics, we do not have to worry
about unwanted trailing newlines, and we do not have to spawn commands
to construct arguments for other commands to be spawned (i.e. we can
avoid a whole lot of shell scripting complexity).
The upshot is that this fixes t5616, t5702 and t5703 on macOS with
Apache2.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
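A byte-exact filter along these lines avoids both sed pitfalls at once — a minimal Python sketch (hypothetical helper, not the actual Perl snippets used in t/lib-httpd):

```python
def filter_response(data, old, new):
    # Operate on raw bytes: CR/LF line endings and binary pack data pass
    # through untouched, and no trailing newline is ever appended.
    return data.replace(old, new)
```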
2020-02-27 13:23:11 +00:00
|
|
|
<Files apply-one-time-perl.sh>
|
2018-06-27 22:30:18 +00:00
|
|
|
Options ExecCGI
|
|
|
|
</Files>
|
2009-10-31 00:47:47 +00:00
|
|
|
<Files ${GIT_EXEC_PATH}/git-http-backend>
|
|
|
|
Options ExecCGI
|
|
|
|
</Files>
|
|
|
|
|
2010-09-25 04:20:35 +00:00
|
|
|
RewriteEngine on
|
http: make redirects more obvious
We instruct curl to always follow HTTP redirects. This is
convenient, but it creates opportunities for malicious
servers to create confusing situations. For instance,
imagine Alice is a git user with access to a private
repository on Bob's server. Mallory runs her own server and
wants to access objects from Bob's repository.
Mallory may try a few tricks that involve asking Alice to
clone from her, build on top, and then push the result:
1. Mallory may simply redirect all fetch requests to Bob's
server. Git will transparently follow those redirects
and fetch Bob's history, which Alice may believe she
got from Mallory. The subsequent push seems like it is
just feeding Mallory back her own objects, but is
actually leaking Bob's objects. There is nothing in
git's output to indicate that Bob's repository was
involved at all.
The downside (for Mallory) of this attack is that Alice
will have received Bob's entire repository, and is
likely to notice that when building on top of it.
2. If Mallory happens to know the sha1 of some object X in
Bob's repository, she can instead build her own history
that references that object. She then runs a dumb http
server, and Alice's client will fetch each object
individually. When it asks for X, Mallory redirects her
to Bob's server. The end result is that Alice obtains
objects from Bob, but they may be buried deep in
history. Alice is less likely to notice.
Both of these attacks are fairly hard to pull off. There's a
social component in getting Mallory to convince Alice to
work with her. Alice may be prompted for credentials in
accessing Bob's repository (but not always, if she is using
a credential helper that caches). Attack (1) requires a
certain amount of obliviousness on Alice's part while making
a new commit. Attack (2) requires that Mallory knows a sha1
in Bob's repository, that Bob's server supports dumb http,
and that the object in question is loose on Bob's server.
But we can probably make things a bit more obvious without
any loss of functionality. This patch does two things to
that end.
First, when we encounter a whole-repo redirect during the
initial ref discovery, we now inform the user on stderr,
making attack (1) much more obvious.
Second, the decision to follow redirects is now
configurable. The truly paranoid can set the new
http.followRedirects to false to avoid any redirection
entirely. But for a more practical default, we will disallow
redirects only after the initial ref discovery. This is
enough to thwart attacks similar to (2), while still
allowing the common use of redirects at the repository
level. Since c93c92f30 (http: update base URLs when we see
redirects, 2013-09-28) we re-root all further requests from
the redirect destination, which should generally mean that
no further redirection is necessary.
As an escape hatch, in case there really is a server that
needs to redirect individual requests, the user can set
http.followRedirects to "true" (and this can be done on a
per-server basis via http.*.followRedirects config).
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
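The resulting policy can be modeled roughly as follows — a Python sketch of the three http.followRedirects settings (the real logic lives in git's http.c):

```python
def allow_redirect(phase, follow_config="initial"):
    # follow_config is one of "true", "false", or "initial" (the default):
    # "initial" permits redirects only during the first contact with the
    # server, i.e. the ref-discovery request.
    if follow_config == "true":
        return True
    if follow_config == "false":
        return False
    return phase == "ref-discovery"
```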
2016-12-06 18:24:41 +00:00
|
|
|
RewriteRule ^/dumb-redir/(.*)$ /dumb/$1 [R=301]
|
2010-09-25 04:20:35 +00:00
|
|
|
RewriteRule ^/smart-redir-perm/(.*)$ /smart/$1 [R=301]
|
|
|
|
RewriteRule ^/smart-redir-temp/(.*)$ /smart/$1 [R=302]
|
remote-curl: rewrite base url from info/refs redirects
For efficiency and security reasons, an earlier commit in
this series taught http_get_* to re-write the base url based
on redirections we saw while making a specific request.
This commit wires that option into the info/refs request,
meaning that a redirect from
http://example.com/foo.git/info/refs
to
https://example.com/bar.git/info/refs
will behave as if "https://example.com/bar.git" had been
provided to git in the first place.
The tests bear some explanation. We introduce two new
hierarchies into the httpd test config:
1. Requests to /smart-redir-limited will work only for the
initial info/refs request, but not any subsequent
requests. As a result, we can confirm whether the
client is re-rooting its requests after the initial
contact, since otherwise it will fail (it will ask for
"repo.git/git-upload-pack", which is not redirected).
2. Requests to smart-redir-auth will redirect, and require
auth after the redirection. Since we are using the
redirected base for further requests, we also update
the credential struct, in order not to mislead the user
(or credential helpers) about which credential is
needed. We can therefore check the GIT_ASKPASS prompts
to make sure we are prompting for the new location.
Because we have neither multiple servers nor https
support in our test setup, we can only redirect between
paths, meaning we need to turn on
credential.useHttpPath to see the difference.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
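The re-rooting rule can be sketched like this (a Python model with a hypothetical helper name; the real code also has to account for the query string):

```python
def reroot_base(redirect_url, tail="/info/refs"):
    # If the redirect preserved the git-specific tail of the request, the
    # new base is everything before it; otherwise we cannot safely re-root
    # and fall back to the original URL (signalled here by None).
    if redirect_url.endswith(tail):
        return redirect_url[:-len(tail)]
    return None
```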
2013-09-28 08:35:35 +00:00
|
|
|
RewriteRule ^/smart-redir-auth/(.*)$ /auth/smart/$1 [R=301]
|
|
|
|
RewriteRule ^/smart-redir-limited/(.*)/info/refs$ /smart/$1/info/refs [R=301]
|
http: limit redirection to protocol-whitelist
Previously, libcurl would follow redirection to any protocol
it was compiled with support for. This is desirable to allow
redirection from HTTP to HTTPS. However, it would even
successfully allow redirection from HTTP to SFTP, a protocol
that git does not otherwise support at all. Furthermore
git's new protocol-whitelisting could be bypassed by
following a redirect within the remote helper, as it was
only enforced at transport selection time.
This patch limits redirects within libcurl to HTTP, HTTPS,
FTP and FTPS. If there is a protocol-whitelist present, this
list is limited to those also allowed by the whitelist. As
redirection happens from within libcurl, it is impossible
for an HTTP redirect to escape to a protocol implemented within
another remote helper.
When the curl version git was compiled with is too old to
support restrictions on protocol redirection, we warn the
user if GIT_ALLOW_PROTOCOL restrictions were requested. This
is a little inaccurate, as even without that variable in the
environment, we would still restrict SFTP, etc, and we do
not warn in that case. But anything else means we would
literally warn every time git accesses an http remote.
This commit includes a test, but it is not as robust as we
would hope. It redirects an http request to ftp, and checks
that curl complained about the protocol, which means that we
are relying on curl's specific error message to know what
happened. Ideally we would redirect to a working ftp server
and confirm that we can clone without protocol restrictions,
and not with them. But we do not have a portable way of
providing an ftp server, nor any other protocol that curl
supports (https is the closest, but we would have to deal
with certificates).
[jk: added test and version warning]
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
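The allowed set of redirect targets is simply an intersection — a small Python sketch (hypothetical function name) of the rule described above:

```python
def redirect_protocols(whitelist=None):
    # libcurl redirects are limited to these four protocols; an explicit
    # whitelist (e.g. from GIT_ALLOW_PROTOCOL) narrows the set further.
    curl_ok = {"http", "https", "ftp", "ftps"}
    if whitelist is None:
        return curl_ok
    return curl_ok & set(whitelist)
```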
2015-09-22 22:06:04 +00:00
|
|
|
RewriteRule ^/ftp-redir/(.*)$ ftp://localhost:1000/$1 [R=302]
|
2010-09-25 04:20:35 +00:00
|
|
|
|
2015-09-22 22:06:20 +00:00
|
|
|
RewriteRule ^/loop-redir/x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-(.*) /$1 [R=302]
|
|
|
|
RewriteRule ^/loop-redir/(.*)$ /loop-redir/x-$1 [R=302]
|
|
|
|
|
http: attempt updating base URL only if no error
http.c supports HTTP redirects of the form
http://foo/info/refs?service=git-upload-pack
-> http://anything
-> http://bar/info/refs?service=git-upload-pack
(that is to say, as long as the Git part of the path and the query
string is preserved in the final redirect destination, the intermediate
steps can have any URL). However, if one of the intermediate steps
results in an HTTP exception, a confusing "unable to update url base
from redirection" message is printed instead of a Curl error message
with the HTTP exception code.
This was introduced by 2 commits. Commit c93c92f ("http: update base
URLs when we see redirects", 2013-09-28) introduced a best-effort
optimization that required checking if only the "base" part of the URL
differed between the initial request and the final redirect destination,
but it performed the check before any HTTP status checking was done. If
something went wrong, the normal code path was still followed, so this
did not cause any confusing error messages until commit 6628eb4 ("http:
always update the base URL for redirects", 2016-12-06), which taught
http to die if the non-"base" part of the URL differed.
Therefore, teach http to check the HTTP status before attempting to
check if only the "base" part of the URL differed. This commit teaches
http_request_reauth to return early without updating options->base_url
upon an error; the only invoker of this function that passes a non-NULL
"options" is remote-curl.c (through "http_get_strbuf"), which only uses
options->base_url for an informational message in the situations that
this commit cares about (that is, when the return value is not HTTP_OK).
The included test checks that the redirect scheme at the beginning of
this commit message works, and that returning a 502 in the middle of the
redirect scheme produces the correct result. Note that this is different
from the test in commit 6628eb4 ("http: always update the base URL for
redirects", 2016-12-06) in that this commit tests that a Git-shaped URL
(http://.../info/refs?service=git-upload-pack) works, whereas commit
6628eb4 tests that a non-Git-shaped URL
(http://.../info/refs/foo?service=git-upload-pack) does not work (even
though Git is processing that URL) and is an error that is fatal, not
silently swallowed.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-28 02:53:11 +00:00
|
|
|
# redir-to/502/x?y -> really-redir-to?path=502/x&qs=y which returns 502
|
|
|
|
# redir-to/x?y -> really-redir-to?path=x&qs=y -> x?y
|
|
|
|
RewriteCond %{QUERY_STRING} ^(.*)$
|
|
|
|
RewriteRule ^/redir-to/(.*)$ /really-redir-to?path=$1&qs=%1 [R=302]
|
|
|
|
RewriteCond %{QUERY_STRING} ^path=502/(.*)&qs=(.*)$
|
|
|
|
RewriteRule ^/really-redir-to$ - [R=502,L]
|
|
|
|
RewriteCond %{QUERY_STRING} ^path=(.*)&qs=(.*)$
|
|
|
|
RewriteRule ^/really-redir-to$ /%1?%2 [R=302]
|
|
|
|
|
http: always update the base URL for redirects
If a malicious server redirects the initial ref
advertisement, it may be able to leak sha1s from other,
unrelated servers that the client has access to. For
example, imagine that Alice is a git user, she has access to
a private repository on a server hosted by Bob, and Mallory
runs a malicious server and wants to find out about Bob's
private repository.
Mallory asks Alice to clone an unrelated repository from her
over HTTP. When Alice's client contacts Mallory's server for
the initial ref advertisement, the server issues an HTTP
redirect for Bob's server. Alice contacts Bob's server and
gets the ref advertisement for the private repository. If
there is anything to fetch, she then follows up by asking
the server for one or more sha1 objects. But who is the
server?
If it is still Mallory's server, then Alice will leak the
existence of those sha1s to her.
Since commit c93c92f30 (http: update base URLs when we see
redirects, 2013-09-28), the client usually rewrites the base
URL such that all further requests will go to Bob's server.
But this is done by textually matching the URL. If we were
originally looking for "http://mallory/repo.git/info/refs",
and we got pointed at "http://bob/other.git/info/refs", then
we know that the right root is "http://bob/other.git".
If the redirect appears to change more than just the root,
we punt and continue to use the original server. E.g.,
imagine the redirect adds a URL component that Bob's server
will ignore, like "http://bob/other.git/info/refs?dummy=1".
We can solve this by aborting in this case rather than
silently continuing to use Mallory's server. In addition to
protecting from sha1 leakage, it's arguably safer and more
sane to refuse a confusing redirect like that in general.
For example, part of the motivation in c93c92f30 is
avoiding accidentally sending credentials over clear http,
just to get a response that says "try again over https". So
even in a non-malicious case, we'd prefer to err on the side
of caution.
The downside is that it's possible this will break a
legitimate but complicated server-side redirection scheme.
The setup given in the newly added test does work, but it's
convoluted enough that we don't need to care about it. A
more plausible case would be a server which redirects a
request for "info/refs?service=git-upload-pack" to just
"info/refs" (because it does not do smart HTTP, and for some
reason really dislikes query parameters). Right now we
would transparently downgrade to dumb-http, but with this
patch, we'd complain (and the user would have to set
GIT_SMART_HTTP=0 to fetch).
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
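The stricter behavior can be modeled like so (a Python sketch with a hypothetical name; git's actual message and URL handling differ in detail):

```python
def reroot_or_die(effective_url, tail="/info/refs?service=git-upload-pack"):
    # Strict behavior: if the redirect changed anything beyond the leading
    # root of the URL, refuse outright instead of silently continuing to
    # talk to the original (possibly malicious) server.
    if not effective_url.endswith(tail):
        raise ValueError("unable to update url base from redirection")
    return effective_url[:-len(tail)]
```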
2016-12-06 18:24:35 +00:00
|
|
|
# The first rule issues a client-side redirect to something
|
|
|
|
# that _doesn't_ look like a git repo. The second rule is a
|
|
|
|
# server-side rewrite, so that it turns out the odd-looking
|
|
|
|
# thing _is_ a git repo. The "[PT]" tells Apache to match
|
|
|
|
# the usual ScriptAlias rules for /smart.
|
|
|
|
RewriteRule ^/insane-redir/(.*)$ /intern-redir/$1/foo [R=301]
|
|
|
|
RewriteRule ^/intern-redir/(.*)/foo$ /smart/$1 [PT]
|
|
|
|
|
2016-12-06 18:24:41 +00:00
|
|
|
# Serve info/refs internally without redirecting, but
|
|
|
|
# issue a redirect for any object requests.
|
|
|
|
RewriteRule ^/redir-objects/(.*/info/refs)$ /dumb/$1 [PT]
|
|
|
|
RewriteRule ^/redir-objects/(.*/objects/.*)$ /dumb/$1 [R=301]
|
|
|
|
|
2008-02-27 19:28:45 +00:00
|
|
|
<IfDefine SSL>
|
|
|
|
LoadModule ssl_module modules/mod_ssl.so
|
|
|
|
|
|
|
|
SSLCertificateFile httpd.pem
|
|
|
|
SSLCertificateKeyFile httpd.pem
|
|
|
|
SSLRandomSeed startup file:/dev/urandom 512
|
|
|
|
SSLRandomSeed connect file:/dev/urandom 512
|
|
|
|
SSLSessionCache none
|
|
|
|
SSLEngine On
|
|
|
|
</IfDefine>
|
|
|
|
|
2010-11-14 01:51:14 +00:00
|
|
|
<Location /auth/>
|
|
|
|
AuthType Basic
|
|
|
|
AuthName "git-auth"
|
|
|
|
AuthUserFile passwd
|
|
|
|
Require valid-user
|
|
|
|
</Location>
|
|
|
|
|
2012-08-27 13:25:53 +00:00
|
|
|
<LocationMatch "^/auth-push/.*/git-receive-pack$">
|
|
|
|
AuthType Basic
|
|
|
|
AuthName "git-auth"
|
|
|
|
AuthUserFile passwd
|
|
|
|
Require valid-user
|
|
|
|
</LocationMatch>
|
|
|
|
|
remote-curl: retry failed requests for auth even with gzip
Commit b81401c taught the post_rpc function to retry the
http request after prompting for credentials. However, it
did not handle two cases:
1. If we have a large request, we do not retry. That's OK,
since we would have sent a probe (with retry) already.
2. If we are gzipping the request, we do not retry. That
was considered OK, because the intended use was for
push (e.g., listing refs is OK, but actually pushing
objects is not), and we never gzip on push.
This patch teaches post_rpc to retry even a gzipped request.
This has two advantages:
1. It is possible to configure a "half-auth" state for
fetching, where the set of refs and their sha1s are
advertised, but one cannot actually fetch objects.
This is not a recommended configuration, as it leaks
some information about what is in the repository (e.g.,
an attacker can try brute-forcing possible content in
your repository and checking whether it matches your
branch sha1). However, it can be slightly more
convenient, since a no-op fetch will not require a
password at all.
2. It future-proofs us should we decide to ever gzip more
requests.
Signed-off-by: Jeff King <peff@peff.net>
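The retry loop sketched below (a simplified Python model, not the C post_rpc in remote-curl.c) shows the key point: the compressed payload is rebuilt for the retry, so gzipped requests are no longer excluded.

```python
import gzip

def post_rpc(send, body, use_gzip, get_credentials):
    # On a 401, prompt for credentials and retry, re-compressing the
    # payload so that gzipped requests can be retried as well.
    payload = gzip.compress(body) if use_gzip else body
    status = send(payload)
    if status == 401:
        get_credentials()
        payload = gzip.compress(body) if use_gzip else body
        status = send(payload)
    return status
```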
2012-10-31 11:29:16 +00:00
|
|
|
<LocationMatch "^/auth-fetch/.*/git-upload-pack$">
|
|
|
|
AuthType Basic
|
|
|
|
AuthName "git-auth"
|
|
|
|
AuthUserFile passwd
|
|
|
|
Require valid-user
|
|
|
|
</LocationMatch>
|
|
|
|
|
2013-04-13 03:33:36 +00:00
|
|
|
RewriteCond %{QUERY_STRING} service=git-receive-pack [OR]
|
|
|
|
RewriteCond %{REQUEST_URI} /git-receive-pack$
|
|
|
|
RewriteRule ^/half-auth-complete/ - [E=AUTHREQUIRED:yes]
|
|
|
|
|
|
|
|
<Location /half-auth-complete/>
|
|
|
|
Order Deny,Allow
|
|
|
|
Deny from env=AUTHREQUIRED
|
|
|
|
|
|
|
|
AuthType Basic
|
|
|
|
AuthName "Git Access"
|
|
|
|
AuthUserFile passwd
|
|
|
|
Require valid-user
|
|
|
|
Satisfy Any
|
|
|
|
</Location>
|
|
|
|
|
2008-02-27 19:28:45 +00:00
|
|
|
<IfDefine DAV>
|
|
|
|
LoadModule dav_module modules/mod_dav.so
|
|
|
|
LoadModule dav_fs_module modules/mod_dav_fs.so
|
|
|
|
|
|
|
|
DAVLockDB DAVLock
|
2009-10-31 00:47:46 +00:00
|
|
|
<Location /dumb/>
|
2008-02-27 19:28:45 +00:00
|
|
|
Dav on
|
|
|
|
</Location>
|
2011-12-13 20:17:04 +00:00
|
|
|
<Location /auth/dumb>
|
|
|
|
Dav on
|
|
|
|
</Location>
|
2008-02-27 19:28:45 +00:00
|
|
|
</IfDefine>
|
|
|
|
|
|
|
|
<IfDefine SVN>
|
|
|
|
LoadModule dav_svn_module modules/mod_dav_svn.so
|
|
|
|
|
2016-07-23 04:26:08 +00:00
|
|
|
<Location /${LIB_HTTPD_SVN}>
|
2008-02-27 19:28:45 +00:00
|
|
|
DAV svn
|
2016-07-23 04:26:08 +00:00
|
|
|
SVNPath "${LIB_HTTPD_SVNPATH}"
|
2008-02-27 19:28:45 +00:00
|
|
|
</Location>
|
|
|
|
</IfDefine>
|