Convert most instances where the sha1 member of struct object is
dereferenced to use get_object_hash. Instances that are passed to
functions that have versions taking struct object_id (such as
get_sha1_hex/get_oid_hex), or that can be trivially converted to use
struct object_id directly, are mostly left unconverted.
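For illustration, a sketch of the conversion pattern (assuming
get_object_hash(obj) yields the hash bytes of obj's embedded
object_id):
	/* before: poking directly at the sha1 member */
	puts(sha1_to_hex(obj->sha1));
	/* after: going through the accessor */
	puts(sha1_to_hex(get_object_hash(*obj)));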
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
Convert ref_newer and its caller to use struct object_id instead of
unsigned char *.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
Use struct object_id in three fields in struct ref and convert all the
necessary places that use it.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
In some cases where we strcpy() the result of sha1_to_hex(),
there's no need; the result goes directly into a printf
statement, and we can simply pass the return value from
sha1_to_hex() directly.
When this code was originally written, sha1_to_hex used a
single buffer, and it was not safe to use it twice within a
single expression. That changed as of dcb3450 (sha1_to_hex()
usage cleanup, 2006-05-03), but this code was never updated.
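The pattern, roughly (a sketch, not a verbatim hunk):
	/* before: copy into a temporary just to print it */
	char hex[41];
	strcpy(hex, sha1_to_hex(sha1));
	printf("%s\n", hex);
	/* after: safe since sha1_to_hex rotates through a ring of buffers */
	printf("%s\n", sha1_to_hex(sha1));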
History-dug-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This drops the magic number for the fixed-size argv arrays,
so we do not have to wonder whether we are overflowing them. We can
also drop some confusing sha1_to_hex memory allocation
(which seems to predate the ring of buffers allowing
multiple calls), and get rid of an unchecked sprintf call.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The http-push code defines an fwrite_buffer function for use
as a curl callback; it just writes to a strbuf. There's no
reason we need to use it ourselves, as we know we have a
strbuf. This lets us format directly into it, rather than
dealing with an extra temporary buffer (which required
manual length computation).
While we're here, let's also remove the literal tabs from
the source in favor of "\t", which is more visually obvious.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We account for these strcats in our initial allocation, but
the code is confusing to follow and verify. Let's remember
our original allocation length, and then xsnprintf can
verify that we don't exceed it.
Note that we can't just use xstrfmt here (which would be
even cleaner) because the code tries to grow the buffer only
when necessary.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We generally use 32-byte buffers to format git's "type size"
header fields. These should not overflow unless you can produce
some truly gigantic objects (and our types come from our internal
array of constant strings). But it is a good idea to use xsnprintf
to make sure this is the case.
Note that we slightly modify the interface to
write_sha1_file_prepare, which now uses "hdrlen" as an "in"
parameter as well as an "out" (on the way in it stores the
allocated size of the header, and on the way out it returns
the ultimate size of the header).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We sometimes sprintf into fixed-size buffers when we know
that the buffer is large enough to fit the input (either
because it's a constant, or because it's numeric input that
is bounded in size). Likewise with strcpy of constant
strings.
However, these sites make it hard to audit sprintf and
strcpy calls for buffer overflows, as a reader has to
cross-reference the size of the array with the input. Let's
use xsnprintf instead, which communicates to a reader that
we don't expect this to overflow (and catches the mistake in
case we do).
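A sketch with a hypothetical call site (buf, type, and size are
illustrative, not the exact names in the patch):
	char buf[32];
	/* before: the reader must cross-check the size by hand */
	sprintf(buf, "%s %lu", type, size);
	/* after: dies loudly if the output would not fit */
	xsnprintf(buf, sizeof(buf), "%s %lu", type, size);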
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* sb/leaks:
http: release the memory of a http pack request as well
read-cache: fix memleak
add_to_index(): free unused cache-entry
commit.c: fix a memory leak
http-push: remove unneeded cleanup
merge-recursive: fix memleaks
merge-blobs.c: fix a memleak
builtin/apply.c: fix a memleak
update-index: fix a memleak
read-cache: free cache entry in add_to_index in case of early return
preq is NULL here, as the condition on the line before dictates. And
the cleanup function release_http_pack_request is not NULL-pointer
safe.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Clear the git_zstream variable at the start of git_deflate_init() etc.
so that callers don't have to do that.
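A caller can now be as simple as this sketch:
	git_zstream stream;
	/* no memset(&stream, 0, sizeof(stream)) needed first */
	git_deflate_init(&stream, zlib_compression_level);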
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we fetch a symbolic ref file from the remote, we get
the whole string "ref: refs/heads/master\n", recognize it by
skipping past the "ref: ", and store the rest. We should
chomp the trailing newline.
This bug was introduced in ae021d8 (use skip_prefix to avoid
magic numbers, 2014-06-18), which did not notice that the
length computation fed to xmemdupz was quietly tweaked by 1
to account for this.
We can solve it by explicitly trimming the newline, which is
more obvious. Note that we use strbuf_rtrim here, which will
actually cut off any trailing whitespace, not just a single
newline. This is a good thing, though, as it makes our
parsing more liberal (and spaces are not valid in refnames
anyway).
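The shape of the fix, as a sketch (assuming the response sits in a
strbuf named buffer):
	const char *name;
	strbuf_rtrim(&buffer);	/* drops "\n" and any trailing spaces */
	if (skip_prefix(buffer.buf, "ref: ", &name))
		ref->symref = xstrdup(name);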
Signed-off-by: Jeff King <peff@peff.net>
Tested-by: Kyle J. McKay <mackyle@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
An ancient rewrite passed a wrong pointer to a curl library
function in a rarely used code path.
* ah/fix-http-push:
http-push.c: make CURLOPT_IOCTLDATA a usable pointer
Fixes a small bug affecting push to remotes which use some sort of
multi-pass authentication. In particular the bug affected SabreDAV as
configured by Box.com [1].
It must be a weird server configuration for the bug to have survived
this long. Someone should write a test for it.
[1] http://marc.info/?l=git&m=140460482604482
Signed-off-by: Abbaad Haider <abbaad@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jk/xstrfmt:
setup_git_env(): introduce git_path_from_env() helper
unique_path: fix unlikely heap overflow
walker_fetch: fix minor memory leak
merge: use argv_array when spawning merge strategy
sequencer: use argv_array_pushf
setup_git_env: use git_pathdup instead of xmalloc + sprintf
use xstrfmt to replace xmalloc + strcpy/strcat
use xstrfmt to replace xmalloc + sprintf
use xstrdup instead of xmalloc + strcpy
use xstrfmt in favor of manual size calculations
strbuf: add xstrfmt helper
We get loose object names like "objects/??/..." from the
remote side, and need to convert them to their hex
representation.
The code to do so is rather hard to follow, as it uses some
calculated lengths whose origins are hard to understand and
verify (e.g., the path must be exactly 49 characters long;
why? Why doesn't the strcpy overflow obj_hex, which is the
same length as path?).
We can simplify this a bit by using skip_prefix, using standard
40- and 20-character buffers for hex and binary sha1s, and
adding some comments.
We also drop a totally bogus comment that claims strlcpy
cannot be used because "path" is not NUL-terminated. Right
between a call to strlen(path) and strcpy(path).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It's a common idiom to match a prefix and then skip past it
with a magic number, like:
	if (starts_with(foo, "bar"))
		foo += 3;
This is easy to get wrong, since you have to count the
prefix string yourself, and there's no compiler check if the
string changes. We can use skip_prefix to avoid the magic
numbers here.
Note that some of these conversions could be much shorter.
For example:
	if (starts_with(arg, "--foo=")) {
		bar = arg + 6;
		continue;
	}
could become:
	if (skip_prefix(arg, "--foo=", &bar))
		continue;
However, I have left it as:
	if (skip_prefix(arg, "--foo=", &v)) {
		bar = v;
		continue;
	}
to visually match nearby cases which need to actually
process the string. Like:
	if (skip_prefix(arg, "--foo=", &v)) {
		bar = atoi(v);
		continue;
	}
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is one line shorter, and makes sure the length in the
malloc and sprintf steps match.
These conversions are very straightforward; we can drop the
malloc entirely, and replace the sprintf with xstrfmt.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is one line shorter, and makes sure the length in the
malloc and copy steps match.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
xcalloc() takes two arguments: the number of elements and their size.
http-push passes the arguments in reverse order, passing the size
of a repo, followed by the number to allocate.
Rearrange them so they are in the correct order.
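That is:
	/* before: element size where the count belongs */
	repo = xcalloc(sizeof(*repo), 1);
	/* after: count first, then element size */
	repo = xcalloc(1, sizeof(*repo));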
Signed-off-by: Brian Gesiak <modocache@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While the field "flags" is mainly used by the revision walker, it is
also used in many other places. Centralize the whole flag allocation to
one place for a better overview (and to make it easier to move
flags if we have to).
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Leaving only the function definitions and declarations so that any
new topic in flight can still make use of the old functions, replace
existing uses of the prefixcmp() and suffixcmp() with new API
functions.
The change can be recreated by mechanically applying this:
	$ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
	  grep -v strbuf\\.c |
	  xargs perl -pi -e '
	    s|!prefixcmp\(|starts_with\(|g;
	    s|prefixcmp\(|!starts_with\(|g;
	    s|!suffixcmp\(|ends_with\(|g;
	    s|suffixcmp\(|!ends_with\(|g;
	  '
on the result of preparatory changes in this series.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Better handle the case where the http transport gets redirected
during the authorization request.
* jk/http-auth-redirects:
http.c: Spell the null pointer as NULL
remote-curl: rewrite base url from info/refs redirects
remote-curl: store url as a strbuf
remote-curl: make refs_url a strbuf
http: update base URLs when we see redirects
http: provide effective url to callers
http: hoist credential request out of handle_curl_result
http: refactor options to http_get_*
http_request: factor out curlinfo_strbuf
http_get_file: style fixes
Over time, the http_get_strbuf function has grown several
optional parameters. We now have a bitfield with multiple
boolean options, as well as an optional strbuf for returning
the content-type of the response. And a future patch in this
series is going to add another strbuf option.
Treating these as separate arguments has a few downsides:
1. Most call sites need to add extra NULLs and 0s for the
options they aren't interested in.
2. The http_get_* functions are actually wrappers around
2 layers of low-level implementation functions. We have
to pass these options through individually.
3. The http_get_strbuf wrapper learned these options, but
nobody bothered to do so for http_get_file, even though
it is backed by the same function that does understand
the options.
Let's consolidate the options into a single struct. For the
common case of the default options, we'll allow callers to
simply pass a NULL for the options struct.
The resulting code is often a few lines longer, but it ends
up being easier to read (and to change as we add new
options, since we do not need to update each call site).
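Roughly, as a sketch (not the verbatim header):
	struct http_get_options {
		unsigned no_cache:1;
		/* if non-NULL, filled in with the response content-type */
		struct strbuf *content_type;
	};
	/* passing options=NULL gives the default behavior */
	int http_get_strbuf(const char *url, struct strbuf *result,
			    struct http_get_options *options);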
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
When there is no sufficient overlap between old and new history
during a fetch into a shallow repository, we unnecessarily sent
objects the sending side knows the receiving end has.
* nd/fetch-into-shallow:
Add testcase for needless objects during a shallow fetch
list-objects: mark more commits as edges in mark_edges_uninteresting
list-objects: reduce one argument in mark_edges_uninteresting
upload-pack: delegate rev walking in shallow fetch to pack-objects
shallow: add setup_temporary_shallow()
shallow: only add shallow graft points to new shallow file
move setup_alternate_shallow and write_shallow_commits to shallow.c
mark_edges_uninteresting() is always called in this form:
	mark_edges_uninteresting(revs->commits, revs, ...);
Remove the first argument and let mark_edges_uninteresting figure that
out by itself. It helps answer the question "are this commit list and
revs related in any way?" when looking at mark_edges_uninteresting
implementation.
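That is, the signature changes along these lines (sketch):
	/* before */
	void mark_edges_uninteresting(struct commit_list *list,
				      struct rev_info *revs,
				      show_edge_fn show_edge);
	/* after: the list is taken from revs->commits internally */
	void mark_edges_uninteresting(struct rev_info *revs,
				      show_edge_fn show_edge);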
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
That pointer will be assigned new memory via
	request = xmalloc(sizeof(*request));
20 lines later, unconditionally, so it is safe not to assign it an
arbitrary value in the meantime.
Signed-off-by: Stefan Beller <stefanbeller@googlemail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many code paths will free a tree object's buffer and set it
to NULL after finishing with it in order to keep memory
usage down during a traversal. However, out of 8 sites that
do this, only one actually unsets the "parsed" flag back.
Those sites that don't are setting a trap for later users of
the tree object; even after calling parse_tree, the buffer
will remain NULL, causing potential segfaults.
It is not known whether this is triggerable in the current
code. Most commands do not do an in-memory traversal
followed by actually using the objects again. However, it
does not hurt to be safe for future callers.
In most cases, we can abstract this out to a
"free_tree_buffer" helper. However, there are two
exceptions:
1. The fsck code relies on the parsed flag to know that we
were able to parse the object at one point. We can
switch this to using a flag in the "flags" field.
2. The index-pack code sets the buffer to NULL but does
not free it (it is freed by a caller). We should still
unset the parsed flag here, but we cannot use our
helper, as we do not want to free the buffer.
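The free_tree_buffer helper amounts to something like this sketch:
	void free_tree_buffer(struct tree *tree)
	{
		free(tree->buffer);
		tree->buffer = NULL;
		tree->size = 0;
		tree->object.parsed = 0;
	}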
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This function is a single-liner and is only called from one
place. Just inline it, which makes the code more obvious.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This helper function should really be a one-liner that
prints an error message, but it has ended up unnecessarily
complicated:
1. We call error() directly when we fail to start the curl
request, so we must later avoid printing a duplicate
error in http_error().
It would be much simpler in this case to just stuff the
error message into our usual curl_errorstr buffer
rather than printing it ourselves. This means that
http_error does not even have to care about curl's exit
value (the interesting part is in the errorstr buffer
already).
2. We return the "ret" value passed in to us, but none of
the callers actually cares about our return value. We
can just drop this entirely.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
expat 1.1 and 1.2 provide xmlparse.h instead of expat.h. Include the
former on systems that define the EXPAT_NEEDS_XMLPARSE_H variable and
define that variable on QNX systems, which ship with expat 1.1.
Signed-off-by: Matt Kraai <matt.kraai@amo.abbott.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The smart HTTP clients forgot to verify the content-type that comes
back from the server side to make sure that the request is being
handled properly.
* sp/smart-http-content-type-check:
http_request: reset "type" strbuf before adding
t5551: fix expected error output
Verify Content-Type from smart HTTP servers
Before parsing a suspected smart-HTTP response, verify that the
returned Content-Type matches the standard. This protects a client
from attempting to process a payload that merely smells like a
smart-HTTP server response but is not one.
JGit has been doing this check on all responses since the dawn of
time. I mistakenly failed to include it in git-core when smart HTTP
was introduced. At the time I didn't know how to get the Content-Type
from libcurl. I punted, meant to circle back and fix this, and just
plain forgot about it.
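The check amounts to roughly this sketch (for the ref advertisement;
service holds the requested service name):
	struct strbuf exp = STRBUF_INIT;
	strbuf_addf(&exp, "application/x-%s-advertisement", service);
	if (strcmp(content_type.buf, exp.buf))
		die("invalid content-type %s", content_type.buf);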
Signed-off-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Optimise the "merge-base" computation a bit, and also update its
users that do not need the full merge-base information to call a
cheaper subset.
* jc/merge-bases:
reduce_heads(): reimplement on top of remove_redundant()
merge-base: "--is-ancestor A B"
get_merge_bases_many(): walk from many tips in parallel
in_merge_bases(): use paint_down_to_common()
merge_bases_many(): split out the logic to paint history
in_merge_bases(): omit unnecessary redundant common ancestor reduction
http-push: use in_merge_bases() for fast-forward check
receive-pack: use in_merge_bases() for fast-forward check
in_merge_bases(): support only one "other" commit
The original code computed the merge bases between HEAD and the
remote ref, and checked if the remote ref is one of those merge
bases, in order to make sure that we are fast-forwarding.
Instead, call in_merge_bases(remote, HEAD) which does the same.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
By calling the ident_default_email accessor, we can be sure
that the default value is actually filled-in.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The error handling routines add a newline. Remove
the duplicate ones in error messages.
Signed-off-by: Pete Wyckoff <pw@padd.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Before commit 986bbc08, git was proactive about asking for
http passwords. It assumed that if you had a username in
your URL, you would also want a password, and asked for it
before making any http requests.
However, this could interfere with the use of .netrc (see
986bbc08 for details). And it was also unnecessary, since
the http fetching code had learned to recognize an HTTP 401
and prompt the user then.
Unfortunately, the http push-over-DAV code never learned to
recognize HTTP 401, and so was broken by this change. This
patch does a quick fix of re-enabling the "proactive auth"
strategy only for http-push, leaving the dumb http fetch and
smart-http as-is.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Change the skeleton implementation of i18n in Git to one that can show
localized strings to users for our C, Shell and Perl programs using
either GNU libintl or the Solaris gettext implementation.
This new internationalization support is enabled by default. If
gettext isn't available, or if Git is compiled with
NO_GETTEXT=YesPlease, Git falls back on its current behavior of
showing interface messages in English. When using the autoconf script
we'll auto-detect if the gettext libraries are installed and act
appropriately.
This change is somewhat large because as well as adding a C, Shell and
Perl i18n interface we're adding a lot of tests for them, and for
those tests to work we need a skeleton PO file to actually test
translations. A minimal Icelandic translation is included for this
purpose. Icelandic includes multi-byte characters which makes it easy
to test various edge cases, and it's a language I happen to
understand.
The rest of the commit message goes into detail about various
sub-parts of this commit.
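For orientation, the C side boils down to the gettext-style _()
macro (a sketch):
	#include "gettext.h"
	/* marked for translation; plain English under NO_GETTEXT */
	printf(_("Hello, world\n"));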
= Installation
Gettext .mo files will be installed and looked for in the standard
$(prefix)/share/locale path. GIT_TEXTDOMAINDIR can also be set to
override that, but that's only intended to be used to test Git itself.
= Perl
Perl code that's to be localized should use the new Git::I18N
module. It imports a __ function into the caller's package by default.
Instead of using the high level Locale::TextDomain interface I've
opted to use the low-level (equivalent to the C interface)
Locale::Messages module, which Locale::TextDomain itself uses.
Locale::TextDomain does a lot of redundant work we don't need, and
some of it would potentially introduce bugs. It tries to set the
$TEXTDOMAIN based on the package of the caller, and has its own
hardcoded paths where it'll search for messages.
I found it easier just to completely avoid it rather than try to
circumvent its behavior. In any case, this is an issue wholly
internal to Git::I18N. Its guts can be changed later if that's deemed
necessary.
See <AANLkTilYD_NyIZMyj9dHtVk-ylVBfvyxpCC7982LWnVd@mail.gmail.com> for
a further elaboration on this topic.
= Shell
Shell code that's to be localized should use the git-sh-i18n
library. It's basically just a wrapper for the system's gettext.sh.
If gettext.sh isn't available we'll fall back on gettext(1) if it's
available. The latter is available without the former on Solaris,
which has its own non-GNU gettext implementation. We also need to
emulate eval_gettext() there.
If neither are present we'll use a dumb printf(1) fall-through
wrapper.
= About libcharset.h and langinfo.h
We use libcharset to query the character set of the current locale if
it's available. I.e. we'll use it instead of nl_langinfo if
HAVE_LIBCHARSET_H is set.
The GNU gettext manual recommends using langinfo.h's
nl_langinfo(CODESET) to acquire the current character set, but on
systems that have libcharset.h's locale_charset() using the latter is
either saner, or the only option on those systems.
GNU and Solaris have a nl_langinfo(CODESET), FreeBSD can use either,
but MinGW and some others need to use libcharset.h's locale_charset()
instead.
= Credits
This patch is based on work by Jeff Epler <jepler@unpythonic.net> who
did the initial Makefile / C work, and a lot of comments from the Git
mailing list, including Jonathan Nieder, Jakub Narebski, Johannes
Sixt, Erik Faye-Lund, Peter Krefting, Junio C Hamano, Thomas Rast and
others.
[jc: squashed a small Makefile fix from Ramsay]
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jk/http-auth:
http_init: accept separate URL parameter
http: use hostname in credential description
http: retry authentication failures for all http requests
remote-curl: don't retry auth failures with dumb protocol
improve httpd auth tests
url: decode buffers that are not NUL-terminated
The http_init function takes a "struct remote". Part of its
initialization procedure is to look at the remote's url and
grab some auth-related parameters. However, using the url
included in the remote is:
- wrong; the remote-curl helper may have a separate,
unrelated URL (e.g., from remote.*.pushurl). Looking at
the remote's configured url is incorrect.
- incomplete; http-fetch doesn't have a remote, so passes
NULL. So http_init never gets to see the URL we are
actually going to use.
- cumbersome; http-push has a similar problem to
http-fetch, but actually builds a fake remote just to
pass in the URL.
Instead, let's just add a separate URL parameter to
http_init, and all three callsites can pass in the
appropriate information.
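That is, roughly (a sketch of the signature change):
	/* before */
	void http_init(struct remote *remote);
	/* after: the URL actually being used is passed explicitly */
	void http_init(struct remote *remote, const char *url);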
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* nd/maint-autofix-tag-in-head:
Accept tags in HEAD or MERGE_HEAD
merge: remove global variable head[]
merge: use return value of resolve_ref() to determine if HEAD is invalid
merge: keep stash[] a local variable
Conflicts:
builtin/merge.c
HEAD and MERGE_HEAD (among other branch tips) should never hold a
tag. That can only be caused by broken tools and is cumbersome to fix
by an end user with:
	$ git update-ref HEAD $(git rev-parse HEAD^{commit})
which may look like magic to a new person.
Be easy, warn users (so broken tools can be fixed if they bother to
report) and move on.
Be robust: if the given SHA-1 cannot be resolved to a commit object,
die (therefore the return value is always valid).
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Yes, there is a warning in big red letters in front of this function
saying that it is only used by push, but it didn't say a more
important thing it should have said: what the function is for and
what it does. Rename it and document it to avoid future confusion.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* jc/zlib-wrap:
zlib: allow feeding more than 4GB in one go
zlib: zlib can only process 4GB at a time
zlib: wrap deflateBound() too
zlib: wrap deflate side of the API
zlib: wrap inflateInit2 used to accept only for gzip format
zlib: wrap remaining calls to direct inflate/inflateEnd
zlib wrapper: refactor error message formatter
Conflicts:
sha1_file.c
The size of objects we read from the repository and data we try to put
into the repository are represented in "unsigned long", so that on larger
architectures we can handle objects that weigh more than 4GB.
But the interface defined in zlib.h to communicate with inflate/deflate
limits avail_in (how many bytes of input are we calling zlib with) and
avail_out (how many bytes of output from zlib are we ready to accept)
fields effectively to 4GB by defining their type to be uInt.
In many places in our code, we allocate a large buffer (e.g. mmap'ing a
large loose object file) and tell zlib its size by assigning the size to
avail_in field of the stream, but that will truncate the high octets of
the real size. The worst part of this story is that we often pass around
z_stream (the state object used by zlib) to keep track of the number of
used bytes in input/output buffer by inspecting these two fields, which
practically limits our callchain to the same 4GB limit.
Wrap z_stream in another structure git_zstream that can express avail_in
and avail_out in unsigned long. For now, just die() when the caller gives
a size that cannot be given to a single zlib call. In later patches in the
series, we would make git_inflate() and git_deflate() internally loop to
give callers an illusion that our "improved" version of zlib interface can
operate on a buffer larger than 4GB in one go.
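The wrapper looks roughly like this sketch (the real struct may
carry extra bookkeeping):
	typedef struct git_zstream {
		z_stream z;		/* real zlib state, uInt-limited */
		unsigned long avail_in;	/* full-width counters for callers */
		unsigned long avail_out;
		unsigned long total_in;
		unsigned long total_out;
		unsigned char *next_in;
		unsigned char *next_out;
	} git_zstream;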
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Wrap deflateInit, deflate, and deflateEnd for everybody, and the sole use
of deflateInit2 in remote-curl.c to tell the library to use gzip header
and trailer in git_deflate_init_gzip().
There is only one caller that cares about the status from deflateEnd().
Introduce git_deflate_end_gently() to let that sole caller retrieve the
status and act on it (i.e. die) for now, but we would probably want to
make inflate_end/deflate_end die when they run out of memory and get
rid of the _gently() kind.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We were doing (nearly) the same thing all over the place, in slightly
different orders, different variable names, etc. Refactor most calls
into two helper functions, one for GET and one for everything else, that
do the heavy lifting leaving most callsites a lot cleaner in the
process.
Note that the setting of CURLOPT_PUT at the callsites of
curl_setup_http() which previously didn't do it (eg.
locking_available(), remote_ls()) is safe, since that
option is deprecated in libcurl in favor of, and has the same effect as,
CURLOPT_UPLOAD.
Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In struct definitions, unlike functions, the prevailing style is for
the opening brace to go on the same line as the struct name, like so:
	struct foo {
		int bar;
		char *baz;
	};
Indeed, grepping for 'struct [a-z_]* {$' yields about 5 times as many
matches as 'struct [a-z_]*$'.
Linus sayeth:
Heretic people all over the world have claimed that this inconsistency
is ... well ... inconsistent, but all right-thinking people know that
(a) K&R are _right_ and (b) K&R are right.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
That way, we don't have to update repo->path and repo->path_len again
after adding the trailing slash.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We use path_len to skip the base url/path, but we do not know for sure
if path does indeed contain the base url/path. Check if this is so.
Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Fix a bug when pushing to WebDAV servers which do not use a trailing
slash for collection names. The previous implementation fails to see
that the requested resource "refs/" is the same resource as "refs"
and loads every reference twice (once for refs/ and once for refs).
This implementation normalises every collection name by appending a
trailing slash if necessary.
This can be tested with old versions of Apache (such as the WebDAV
server of GMX, Apache 2.0.63).
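The normalisation is essentially (a sketch, assuming the collection
name is held in a strbuf):
	if (path.len && path.buf[path.len - 1] != '/')
		strbuf_addch(&path, '/');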
Based-on-patch-by: Gabriel Corona <gabriel.corona@enst-bretagne.fr>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Without this patch, at least IBM VisualAge C 5.0 (I have 5.0.2) on
AIX 5.1 fails to compile git.
enum style is inconsistent already, with some enums declared on one
line, some over 3 lines with the enum values all on the middle line,
sometimes with 1 enum value per line... and independently of that the
trailing comma is sometimes present and other times absent, often
mixing with/without trailing comma styles in a single file, and
sometimes in consecutive enum declarations.
Clearly, omitting the comma is the more portable style, and this patch
changes all enum declarations to use the portable omitted dangling
comma style consistently.
Signed-off-by: Gary V. Vaughan <gary@thewrittenword.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This works around a bug in curl versions up to 7.19.4, where disabling the
CURLOPT_NOBODY option sets the internal state incorrectly when
CURLOPT_PUT was enabled earlier.
The bug is discussed at http://curl.haxx.se/bug/view.cgi?id=2727981 and is
corrected in the latest version of curl in CVS.
This bug usually has no impact on git, but may surface if using multi-pass
authentication methods.
Signed-off-by: Martin Storsjo <martin@martin.st>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* sp/smart-http: (37 commits)
http-backend: Let gcc check the format of more printf-type functions.
http-backend: Fix access beyond end of string.
http-backend: Fix bad treatment of uintmax_t in Content-Length
t5551-http-fetch: Work around broken Accept header in libcurl
t5551-http-fetch: Work around some libcurl versions
http-backend: Protect GIT_PROJECT_ROOT from /../ requests
Git-aware CGI to provide dumb HTTP transport
http-backend: Test configuration options
http-backend: Use http.getanyfile to disable dumb HTTP serving
test smart http fetch and push
http tests: use /dumb/ URL prefix
set httpd port before sourcing lib-httpd
t5540-http-push: remove redundant fetches
Smart HTTP fetch: gzip requests
Smart fetch over HTTP: client side
Smart push over HTTP: client side
Discover refs via smart HTTP server when available
http-backend: more explict LocationMatch
http-backend: add example for gitweb on same URL
http-backend: use mod_alias instead of mod_rewrite
...
Conflicts:
.gitignore
remote-curl.c
http-push already knows how to dump usage if it is given no options, but
it interprets '-h' as the URL to a remote repository:
	$ git http-push -h
	error: Cannot access URL -h/, return code 6
Dump usage instead. Humans wanting to pass the URL -h/ to curl for some
reason can use 'git http-push -h/' explicitly. Scripts expecting to
access an HTTP repository at URL '-h' will break, though.
Also delay finding a git directory until after option parsing, so
"http-push -h" can be used outside any git repository.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Acked-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The remote helper interface now supports the push capability,
which can be used to ask the implementation to push one or more
specs to the remote repository. For remote-curl we implement this
by calling the existing WebDAV based git-http-push executable.
Internally the helper interface uses the push_refs transport hook
so that the complexity of the refspec parsing and matching can be
reused between remote implementations. When possible however the
helper protocol uses source ref name rather than the source SHA-1,
thereby allowing the helper to access this name if it is useful.
From Clemens Buchacher <drizzd@aon.at>:
update http tests according to remote-curl capabilities
o Pushing packed refs is now fixed.
o The transport helper fails if refs are already up-to-date. Add
a test for that.
o The transport helper will notice if refs are already
up-to-date. We therefore need to update server info in the
unpacked-refs test.
o The transport helper will purge deleted branches automatically.
o Use a variable ($ORIG_HEAD) instead of full SHA-1 name.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
CC: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Check that http.c::finish_http_pack_request() returns 0 (for success).
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
xml_entities() in http-push.c did not properly stop at the end of the
string being examined, which would occasionally cause nonsense to be
appended to escaped URL strings and result in failed DAV XML queries.
Signed-off-by: Seth Hunter <hunter@ll.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A few remain, but this fixes the trivial ones. It boils
down to two main issues that sparse complains about:
- warning: Using plain integer as NULL pointer
Sparse doesn't like you using '0' instead of 'NULL'. For various good
reasons, not the least of which is just the visual confusion. A NULL
pointer is not an integer, and that whole "0 works as NULL" is a
historical accident and not very pretty.
A few of these remain: zlib is a total mess, and Z_NULL is just a 0.
I didn't touch those.
- warning: symbol 'xyz' was not declared. Should it be static?
Sparse wants to see declarations for any functions you export. A lack
of a declaration tends to mean that you should either add one, or you
should mark the function 'static' to show that it's in file scope.
A few of these remain: I only did the ones that should obviously just
be made static.
That 'wt_status_submodule_summary' one is debatable. It has a few related
flags (like 'wt_status_use_color') which _are_ declared, and are used by
builtin-commit.c. So maybe we'd like to export it at some point, but it's
not declared now, and not used outside of that file, so 'static' it is in
this patch.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* rc/http-push: (22 commits)
http*: add helper methods for fetching objects (loose)
http*: add helper methods for fetching packs
http: use new http API in fetch_index()
http*: add http_get_info_packs
http-push.c::fetch_symref(): use the new http API
http-push.c::remote_exists(): use the new http API
http.c::http_fetch_ref(): use the new http API
transport.c::get_refs_via_curl(): use the new http API
http.c: new functions for the http API
http: create function end_url_with_slash
http*: move common variables and macros to http.[ch]
transport.c::get_refs_via_curl(): do not leak refs_url
Don't expect verify_pack() callers to set pack_size
http-push: do not SEGV after fetching a bad pack idx file
http*: copy string returned by sha1_to_hex
http-walker: verify remote packs
http-push, http-walker: style fixes
t5550-http-fetch: test fetching of packed objects
http-push: fix missing "#ifdef USE_CURL_MULTI" around "is_running_queue"
http-push: send out fetch requests on queue
...
The code handling the fetching of loose objects in http-push.c and
http-walker.c has been refactored into new methods and a new struct
(http_object_request) in http.c. They are not meant to be invoked
elsewhere.
The new methods in http.c are
- new_http_object_request
- process_http_object_request
- finish_http_object_request
- abort_http_object_request
- release_http_object_request
and the new struct is http_object_request.
RANGE_HEADER_SIZE and no_pragma_header are no longer made available
outside of http.c, since after the above changes, there are no other
instances of usage outside of http.c.
Remove members of the transfer_request struct in http-push.c and
http-walker.c, including filename, real_sha1 and zret, as they are
no longer used.
Move the methods append_remote_object_url() and get_remote_object_url()
from http-push.c to http.c. Additionally, get_remote_object_url() is no
longer defined only when USE_CURL_MULTI is defined, since
non-USE_CURL_MULTI code in http.c uses it (namely, in
new_http_object_request()).
Refactor code from http-push.c::start_fetch_loose() and
http-walker.c::start_object_fetch_request() that deals with the details
of coming up with the filename to store the retrieved object, resuming
a previously aborted request, and making a new curl request, into a new
function, new_http_object_request().
Refactor code from http-walker.c::process_object_request() into the
function, process_http_object_request().
Refactor code from http-push.c::finish_request() and
http-walker.c::finish_object_request() into a new function,
finish_http_object_request(). It returns the result of the
move_temp_to_file() invocation.
Add a function, release_http_object_request(), which cleans up object
request data. http-push.c and http-walker.c invoke this function
separately; http-push.c::release_request() and
http-walker.c::release_object_request() do not invoke this function.
Add a function, abort_http_object_request(), which unlink()s the object
file and invokes release_http_object_request(). Update
http-walker.c::abort_object_request() to use this.
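Pieced together from the above, a plausible blocking call sequence
looks like this (a sketch, not verbatim caller code):
	struct http_object_request *req;
	req = new_http_object_request(base_url, sha1);
	if (!req)
		return -1;
	process_http_object_request(req);
	if (finish_http_object_request(req)) {
		/* unlink()s the object file, then releases */
		abort_http_object_request(req);
		return -1;
	}
	release_http_object_request(req);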
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The code handling the fetching of packs in http-push.c and
http-walker.c has been refactored into new methods and a new struct
(http_pack_request) in http.c. They are not meant to be invoked
elsewhere.
The new methods in http.c are
- new_http_pack_request
- finish_http_pack_request
- release_http_pack_request
and the new struct is http_pack_request.
Add a function, new_http_pack_request(), that deals with the details of
coming up with the filename to store the retrieved packfile, resuming a
previously aborted request, and making a new curl request. Update
http-push.c::start_fetch_packed() and http-walker.c::fetch_pack() to
use this.
Add a function, finish_http_pack_request(), that deals with renaming
the pack, advancing the pack list, and installing the pack. Update
http-push.c::finish_request() and http-walker.c::fetch_pack to use
this.
Update release_request() in http-push.c and http-walker.c to invoke
release_http_pack_request() to clean up pack request helper data.
The local_stream member of the transfer_request struct in http-push.c
has been removed, as the packfile pointer will be managed in the struct
http_pack_request.
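By analogy, the intended lifecycle is roughly (a sketch, not
verbatim caller code):
	struct http_pack_request *preq;
	int err;
	preq = new_http_pack_request(target, base_url);
	if (!preq)
		return -1;
	/* ... start and run the request's slot ... */
	err = finish_http_pack_request(preq);	/* rename, advance, install */
	release_http_pack_request(preq);	/* clean up helper data */
	if (err)
		return -1;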
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
http-push.c and http-walker.c no longer have to use fetch_index or
setup_index; they simply need to use http_get_info_packs, a new http
method, in their fetch_indices implementations.
Move fetch_index() and rename to fetch_pack_index() in http.c; this
method is not meant to be used outside of http.c. It invokes
end_url_with_slash with base_url; apart from that change, the code is
identical.
Move setup_index() and rename to fetch_and_setup_pack_index() in
http.c; this method is not meant to be used outside of http.c.
Do not immediately set ret to 0 in http-walker.c::fetch_indices();
instead do it in the HTTP_MISSING_TARGET case, to make it clear that
the HTTP_OK and HTTP_MISSING_TARGET cases both return 0.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Move RANGE_HEADER_SIZE to http.h.
Create no_pragma_header, the curl header list containing the header
"Pragma:" in http.[ch]. It is allocated in http_init, and freed in
http_cleanup. This replaces the no_pragma_header in http-push.c, and
the no_pragma_header member in walker_data in http-walker.c.
Create http_is_verbose. It is to be used by methods in http.c, and is
modified at the entry points of http.c's users, namely http-push.c
(when parsing options) and http-walker.c (in get_http_walker).
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In a70c232 ("http-fetch: do not SEGV after fetching a bad pack idx
file"), changes were made to the setup_index method in http-fetch.c
(known in its present form as http-walker.c after 30ae764 ("Modularize
commit-walker")). Since http-push.c has similar similar code for
processing index files, these changes should apply to http-push.c's
implementation of setup_index as well.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the fetch_index implementations in http-push.c and http-walker.c,
the string returned by sha1_to_hex is assumed to stay immutable.
This patch ensures that hex stays immutable by copying the string
returned by sha1_to_hex (via xstrdup) and freeing it subsequently. It
also refactors free()'s and fclose()'s with labels.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
- Use tabs to indent, instead of spaces.
- Do not use curly-braces around a single statement body in
  if/while statements;
- Do not start a multi-line comment with its description on the
  first line after "/*", i.e.
	/*
	 * We prefer this over...
	 */
	/* comments like
	 * this (notice the first line)
	 */
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add the missing "#ifdef USE_CURL_MULTI" around "is_running_queue",
as its absence breaks the build when USE_CURL_MULTI is not defined.
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Previously, requests for remote files were simply added to the queue
(pointed to by request_queue_head) and no transfer actually took
place (the fill function add_fill_function() is not added until line
2441), even though code that followed might rely on these remote
files being present (eg. the setup_revisions invocation).
The code that sends out the requests on the request queue is refactored
into the method run_request_queue.
After the get_dav_remote_heads invocation (ie. after fetch requests are
added to the queue), the requests on the queue are sent out through an
invocation to run_request_queue.
This invocation of run_request_queue entails adding a fill function
before the pushing checks take place, which previously could lead to
accidental, unwanted pushes.
The flag is_running_queue is introduced to prevent this from occurring.
fill_active_slot is made to check the flag is_running_queue before
the sending of the requests proceeds.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Set slot->local to NULL after doing an fclose() on the file it
points to. This prevents a FILE* pointer to an fclose()'d file from
being passed to ftell() in http.c::run_active_slot().
This issue was raised by Clemens Buchacher on 30th May 2009:
http://www.spinics.net/lists/git/msg104623.html
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Avoid code duplication by moving list tail search to match_refs().
This does not change the semantics, except for http-push, which now inserts
to the front of the ref list in order to get rid of the global remote_tail.
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
* ar/unlink-err:
print unlink(2) errno in copy_or_link_directory
replace direct calls to unlink(2) with unlink_or_warn
Introduce an unlink(2) wrapper which gives warning if unlink failed
Noticed and reported by Serhat Şevki Dinçer.
Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Acked-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
It is convention that argv should be terminated with NULL, even if
argc is used to specify the size of argv. setup_revisions() requires
this and may segfault otherwise.
This patch makes sure that all argv (that I can find) is NULL terminated.
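For example (a sketch of the convention):
	/* the NULL sentinel is required even though argc is passed */
	const char *argv[] = { "rev-list", "--objects", ref, NULL };
	int argc = ARRAY_SIZE(argv) - 1;	/* sentinel not counted */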
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This helps to notice when something's going wrong, especially on
systems which lock open files.
I used the following criteria when selecting the code for replacement:
- it was already printing a warning for the unlink failures
- it is in a function which is already printing something or is
called from such a function
- it is in a static function, returning void and the function is only
called from a builtin main function (cmd_)
- it is in a function which handles emergency exit (signal handlers)
- it is in a function which is obviously cleaning up the lockfiles
Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If the repo url or the user email contain XML special characters, the
remote DAV server is likely to reject the LOCK requests because the XML
is then malformed.
Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>