Since the tempfile code recently relaxed the rule that
tempfile structs (and thus locks) need to hang around
forever, we no longer have to leak our lock_file structs.
In fact, we don't even need to heap-allocate them anymore,
since their lifetime can just match that of the surrounding
ref_lock (and if we forget to delete a lock, the effect is
the same as before: it will eventually go away at program
exit).
Note that there is a check in unlock_ref() to roll back a lock
file only if it has been allocated. We don't need that
check anymore; we zero the ref_lock (and thus the
lock_file), so at worst we pass a NULL pointer to
delete_tempfile(), which considers that a noop.
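To illustrate the shape of the change (the field names here are
illustrative; the real struct lives in the refs code), the lock_file
is now embedded rather than pointed to:

    struct ref_lock {
            char *ref_name;
            struct lock_file lk;    /* was "struct lock_file *lk", xcalloc'd and leaked */
            struct object_id old_oid;
    };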
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that the tempfile system we rely on has loosened the
lifetime requirements for storage, we can adjust our
documentation to match.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The previous commit taught the tempfile code to give up
ownership over tempfiles that have been renamed or deleted.
That makes it possible to use a stack variable like this:
    struct tempfile t;

    create_tempfile(&t, ...);
    ...
    if (!err)
            rename_tempfile(&t, ...);
    else
            delete_tempfile(&t);
But doing it this way has a high potential for creating
memory errors. The tempfile we pass to create_tempfile()
ends up on a global linked list, and it's not safe for it to
go out of scope until we've called one of those two
deactivation functions.
Imagine that we add an early return from the function that
forgets to call delete_tempfile(). With a static or heap
tempfile variable, the worst case is that the tempfile hangs
around until the program exits (and some functions like
setup_temporary_shallow() rely on this intentionally, creating
a tempfile and then leaving it for later cleanup).
But with a stack variable as above, this is a serious memory
error: the variable goes out of scope and may be filled with
garbage by the time the tempfile code looks at it. Let's
see if we can make it harder to get this wrong.
Since many callers need to allocate arbitrary numbers of
tempfiles, we can't rely on static storage as a general
solution. So we need to turn to the heap. We could just ask
all callers to pass us a heap variable, but that puts the
burden on them to call free() at the right time.
Instead, let's have the tempfile code handle the heap
allocation _and_ the deallocation (when the tempfile is
deactivated and removed from the list).
This changes the return value of all of the creation
functions. For the cleanup functions (delete and rename),
we'll add one extra bit of safety: instead of taking a
tempfile pointer, we'll take a pointer-to-pointer and set it
to NULL after freeing the object. This makes it safe to
double-call functions like delete_tempfile(), as the second
call treats the NULL input as a noop. Several callsites
follow this pattern.
The resulting patch does have a fair bit of noise, as each
caller needs to be converted to handle:
1. Storing a pointer instead of the struct itself.
2. Passing the pointer instead of taking the struct
address.
3. Handling a "struct tempfile *" return instead of a file
descriptor.
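As an illustration (a hypothetical caller; the exact creation
function and its arguments vary by call site, and I'm assuming here
that creation returns NULL on error), a converted call site ends up
looking something like:

    struct tempfile *tempfile = create_tempfile(path);

    if (!tempfile)
            return error_errno("unable to create temporary file");
    ...
    if (err)
            delete_tempfile(&tempfile);     /* frees and sets our pointer to NULL */
    else if (rename_tempfile(&tempfile, destination) < 0)
            return error_errno("unable to rename temporary file");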
We could play games to make this less noisy. For example, by
defining the tempfile like this:
    struct tempfile {
            struct heap_allocated_part_of_tempfile {
                    int fd;
                    ...etc
            } *actual_data;
    }
Callers would continue to have a "struct tempfile", and it
would be "active" only when the inner pointer was non-NULL.
But that just makes things more awkward in the long run.
There aren't that many callers, so we can simply bite
the bullet and adjust all of them. And the compiler makes it
easy for us to find them all.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Once a "struct tempfile" is added to the global cleanup
list, it is never removed. This means that its storage must
remain valid for the lifetime of the program. For single-use
tempfiles and locks, this isn't a big deal: we just declare
the struct static. But for library code which may take
multiple simultaneous locks (like the ref code), they're
forced to allocate a struct on the heap and leak it.
This is mostly OK in practice. The size of the leak is
bounded by the number of refs, and most programs exit after
operating on a fixed number of refs (and allocate
simultaneous memory proportional to the number of ref
updates in the first place). But:
1. It isn't hard to imagine a real leak: a program which
runs for a long time taking a series of ref update
instructions and fulfilling them one by one. I don't
think we have such a program now, but it's certainly
plausible.
2. The leaked entries appear as false positives to
tools like valgrind.
Let's relax this rule by keeping only "active" tempfiles on
the list. We can do this easily by moving the list-add
operation from prepare_tempfile_object to activate_tempfile,
and adding a deletion in deactivate_tempfile.
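In sketch form (simplified; the list helpers here stand in for
whatever list implementation tempfile.c actually uses):

    static void activate_tempfile(struct tempfile *tempfile)
    {
            /* only now does the object become visible to the cleanup code */
            add_to_tempfile_list(tempfile);
            tempfile->active = 1;
    }

    static void deactivate_tempfile(struct tempfile *tempfile)
    {
            tempfile->active = 0;
            /* drop it from the list, so its storage may be freed or reused */
            remove_from_tempfile_list(tempfile);
    }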
Existing callers do not need to be updated immediately.
They'll continue to leak any tempfile objects they may have
allocated, but that's no different than the status quo. We
can clean them up individually.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The tempfile API keeps to-be-cleaned tempfiles in a
singly-linked list and never removes items from the list. A
future patch would like to start removing items, but removal
from a singly linked list is O(n), as we have to walk the
list to find the predecessor element. This means that a
process which takes "n" simultaneous lockfiles (for example,
an atomic transaction on "n" refs) may end up quadratic in
"n".
Before we start allowing items to be removed, it would be
nice to have a way to cover this case in linear time.
The simplest solution is to make an assumption about the
order in which tempfiles are added and removed from the
list. If both operations iterate over the tempfiles in the
same order, then by putting new items at the end of the list
our removal search will always find its items at the
beginning of the list. And indeed, that would work for the
case of refs. But it creates a hidden dependency between
unrelated parts of the code. If anybody changes the ref code
(or if we add a new caller that opens multiple simultaneous
tempfiles) they may unknowingly introduce a performance
regression.
Another solution is to use a better data structure. A
doubly-linked list works fine, and we already have an
implementation in list.h. But there's one snag: the elements
of "struct tempfile" are all marked as "volatile", since a
signal handler may interrupt us and iterate over the list at
any moment (even if we were in the middle of adding a new
entry).
We can declare a "volatile struct list_head", but we can't
actually use it with the normal list functions. The compiler
complains about passing a pointer-to-volatile via a regular
pointer argument. And rightfully so, as the sub-function
would potentially need different code to deal with the
volatile case.
That leaves us with a few options:
1. Drop the "volatile" modifier for the list items.
This is probably a bad idea. I checked the assembly
output from "gcc -O2", and the "volatile" really does
impact the order in which it updates memory.
2. Use macros instead of inline functions. The irony here
is that list.h is entirely implemented as trivial
inline functions. So we basically are already
generating custom code for each call. But sadly there's no
way in C to declare the inline function to take a more
generic type.
We could do so by switching the inline functions to
macros, but it does make the end result harder to read.
And it doesn't fully solve the problem (for instance,
the declaration of list_head needs to change so that
its "prev" and "next" pointers point to other volatile
structs).
3. Don't use list.h, and just make our own ad-hoc
doubly-linked list. It's not that much code to
implement the basics that we need here. But if we're
going to do so, why not add the few extra lines
required to model it after the actual list.h interface?
We can even reuse a few of the macro helpers.
So this patch takes option 3, but actually implements a
parallel "volatile list" interface in list.h, where it could
potentially be reused by other code. This implements just
enough for tempfile.c's use, though we could easily port
other functions later if need be.
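For reference, the core of such a parallel interface can look
roughly like this (a sketch; the names and details are not
necessarily exactly what ended up in list.h):

    struct volatile_list_head {
            volatile struct volatile_list_head *next, *prev;
    };

    static inline void volatile_list_add(volatile struct volatile_list_head *newp,
                                         volatile struct volatile_list_head *head)
    {
            newp->next = head->next;
            newp->prev = head;
            head->next->prev = newp;
            head->next = newp;
    }

    static inline void volatile_list_del(volatile struct volatile_list_head *entry)
    {
            entry->prev->next = entry->next;
            entry->next->prev = entry->prev;
            entry->next = entry->prev = NULL;
    }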
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When a tempfile is deactivated, we reset its strbuf to the
empty string, which means we hold onto the memory for later
reuse.
Since we'd like to move to a system where tempfile structs
can actually be freed, deactivating one should drop all
resources it is currently using. And thus "release" rather
than "reset" is the appropriate function to call.
In theory the reset may have saved a malloc() when a
tempfile (or a lockfile) is reused multiple times. But in
practice this happened rarely. Most of our tempfiles are
single-use, since in cases where we might actually use many
(like ref locking) we xcalloc() a fresh one for each ref. In
fact, we leak those locks (to appease the rule that tempfile
storage can never be freed). Which means that using reset is
actively hurting us: instead of leaking just the tempfile
struct, we're leaking the extra heap chunk for the filename,
too.
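Concretely, the deactivation path changes along these lines
(sketch):

    static void deactivate_tempfile(struct tempfile *tempfile)
    {
            tempfile->active = 0;
            strbuf_release(&tempfile->filename);    /* was strbuf_reset() */
    }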
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We may call remove_tempfiles() from an atexit handler, or
from a signal handler. In the latter case we must take care
to avoid functions which may deadlock if the process is in
an unknown state, including looking at any stdio handles
(which may be in the middle of doing I/O and locked) or
calling malloc() or free().
The current implementation calls delete_tempfile(). We unset
the tempfile's stdio handle (if any) to avoid deadlocking
there. But delete_tempfile() still calls unlink_or_warn(),
which can deadlock writing to stderr if the unlink fails.
Since delete_tempfile() isn't very long, let's just
open-code our own simple conservative version of the same
thing. Notably:
1. The "skip_fclose" flag is now called "in_signal_handler",
because it should inform more decisions than just the
fclose handling.
2. We can replace close_tempfile() with just close(fd).
That skips the fclose() question altogether. This is
fine for the atexit() case, too; there's no point
flushing data to a file which we're about to delete
anyway.
3. We can choose between unlink/unlink_or_warn based on
whether it's safe to use stderr.
4. We can replace the deactivate_tempfile() call with a
simple setting of the active flag. There's no need to
do any further cleanup since we know the program is
exiting. And even though the current deactivation code
is safe in a signal handler, this frees us up in future
patches to make non-signal deactivation more
complicated (e.g., by freeing resources).
5. There's no need to remove items from the tempfile_list.
The "active" flag is the ultimate answer to whether an
entry has been handled or not. Manipulating the list
just introduces more chance of recursive signals
stomping on each other, and the whole list will go away
when the program exits anyway. Less is more.
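Put together, the conservative version ends up roughly like this
(heavily simplified; the real code also checks which process owns
each tempfile and iterates the actual list structure):

    static void remove_tempfiles(int in_signal_handler)
    {
            struct tempfile *p;

            for (p = tempfile_list; p; p = p->next) {
                    if (!p->active)
                            continue;
                    if (p->fd >= 0)
                            close(p->fd);
                    if (in_signal_handler)
                            unlink(p->filename.buf);
                    else
                            unlink_or_warn(p->filename.buf);
                    p->active = 0;
            }
    }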
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When we deactivate a tempfile, we also have to clean up the
"filename" strbuf. Let's pull this out into its own function
to keep the logic in one place (which will become more
important when a future patch makes it more complicated).
Note that we can use the same function when deactivating an
object that _isn't_ actually active yet (like when we hit an
error creating a tempfile). These callsites don't currently
reset the "active" flag to 0, but it's OK to do so (it's
just a noop for these cases).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
There are a few steps required to "activate" a tempfile
struct. Let's pull these out into a function. That saves a
few repeated lines now, but more importantly will make it
easier to change the activation scheme later.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Compared to die(), using BUG() triggers abort(). That may
give us an actual coredump, which should make it easier to
get a stack trace. And since the programming error for these
assertions is not in the functions themselves but in their
callers, such a stack trace is needed to actually find the
source of the bug.
In addition, abort() raises SIGABRT, which is more likely to
be caught by our test suite.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The tempfile functions all take pointers to tempfile
objects, but do not check whether the argument is NULL.
This isn't a big deal in practice, since the lifetime of any
tempfile object is defined to last for the whole program. So
even if we try to call delete_tempfile() on an
already-deleted tempfile, our "active" check will tell us
that it's a noop.
In preparation for transitioning to a new system that
loosens the "tempfile objects can never be freed" rule,
let's tighten up our active checks:
1. A NULL pointer is now defined as "inactive" (so it will
BUG for most functions, but works as a silent noop for
things like delete_tempfile).
2. Functions should always do the "active" check before
looking at any of the struct fields.
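In code, that boils down to something like (sketch):

    static inline int is_tempfile_active(struct tempfile *tempfile)
    {
            return tempfile && tempfile->active;
    }

    int get_tempfile_fd(struct tempfile *tempfile)
    {
            if (!is_tempfile_active(tempfile))
                    BUG("get_tempfile_fd() called for inactive object");
            return tempfile->fd;
    }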
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The tempfile code keeps an "active" flag, and we have a
number of assertions to make sure that the objects are being
used in the right order. Most of these directly check
"active" rather than using the is_tempfile_active()
accessor.
Let's prefer using the accessor, in preparation for it
growing more complicated logic (like checking for NULL).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Since the lockfile code is based on the tempfile code, it
has some of the same problems, including that close_lock_file()
erases the tempfile's filename buf, making it hard for the
caller to write a good error message.
In practice this comes up less for lockfiles than for
straight tempfiles, since we usually just report the
refname. But there is at least one buggy case in
write_ref_to_lockfile(). Besides, given the coupling between
the lockfile and tempfile modules, it's less confusing if
their close() functions have the same semantics.
Just as the previous commit did for close_tempfile(), let's
teach close_lock_file() and its wrapper close_ref() not to roll
back on error. And just as before, we'll give them new
"gently" names to catch any new callers that are added.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When close_tempfile() fails, we delete the tempfile and
reset the fields of the tempfile struct. This makes it
easier for callers to return without cleaning up, but it
also makes this common pattern:
    if (close_tempfile(tempfile))
            return error_errno("error closing %s", tempfile->filename.buf);
wrong, because the "filename" field has been reset after the
failed close. And it's not easy to fix, as in many cases we
don't have another copy of the filename (e.g., if it was
created via one of the mks_tempfile functions, and we just
have the original template string).
Let's drop the feature that a failed close automatically
deletes the file. This puts the burden on the caller to do
the deletion themselves, but this isn't that big a deal.
Callers which do:
    if (write(...) || close_tempfile(...)) {
            delete_tempfile(...);
            return -1;
    }
already had to call delete when the write() failed, and so
aren't affected. Likewise, any caller which just calls die()
in the error path is OK; we'll delete the tempfile during
the atexit handler.
Because this patch changes the semantics of close_tempfile()
without changing its signature, all callers need to be
manually checked and converted to the new scheme. This patch
covers all in-tree callers, but there may be others for
not-yet-merged topics. To catch these, we rename the
function to close_tempfile_gently(), which will attract
compile-time attention to new callers. (Technically the
original could be considered "gentle" already in that it
didn't die() on errors, but this one is even more so).
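Under the new name and semantics, such a caller ends up looking
something like this (illustrative, following the elided style
above); note that the filename is still valid for the error
message, and the caller does the deletion itself:

    if (write(...) < 0 || close_tempfile_gently(...) < 0) {
            error_errno("unable to write %s", get_tempfile_path(...));
            delete_tempfile(...);
            return -1;
    }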
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If close_tempfile() encounters an error, then it deletes the
tempfile and resets the "struct tempfile". But many code
paths ignore the return value and continue to use the
tempfile. Instead, we should generally treat this the same
as a write() error.
Note that in the postimage of some of these cases our error
message will be bogus after a failed close because we look
at tempfile->filename (either directly or via get_tempfile_path).
But after the failed close resets the tempfile object, this
is guaranteed to be the empty string. That will be addressed
in a future patch (because there are many more cases of the
same problem than just these instances).
Note also in the hunk in gpg-interface.c that it's fine to
call delete_tempfile() in the error path, even if
close_tempfile() failed and already deleted the file. The
tempfile code is smart enough to know the second deletion is
a noop.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We do a manual close() on the descriptor provided to us by
mks_tempfile. But this runs contrary to the advice in
tempfile.h, which notes that you should always use
close_tempfile(). Otherwise the descriptor may be reused
without the tempfile object knowing it, and the later call
to delete_tempfile() could close a random descriptor.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The setup_temporary_shallow() function creates a temporary
file, but we never access the tempfile struct outside of the
function. This is OK, since it means we'll just clean up the
tempfile on exit. But we can simplify the code a bit by
moving the global tempfile struct to the only function in
which it's used.
Note that it must remain "static" due to tempfile.c's
requirement that tempfile storage never goes away until
program exit.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When there are no shallow entries to write, we skip creating
the tempfile entirely and try to return the empty string.
But we do so by calling get_tempfile_path() on the inactive
tempfile object. This will trigger an assertion that kills
the program. The bug was introduced by 6e122b449b
(setup_temporary_shallow(): use tempfile module,
2015-08-10). But nobody seems to have noticed since then
because we do not end up calling this function at all when
there are no shallow items. In other words, this code path
is completely unexercised.
Since the tempfile object is a static global, it _is_
possible that we call the function twice, writing out
shallow info the first time and then "reusing" our tempfile
object the second time. But:
1. It seems unlikely that this was the intent, as hitting
this code path would imply somebody clearing the
shallow_info list between calls.
And if somebody _did_ call the function multiple times
without clearing the shallow_info list, we'd hit a
different BUG for trying to reuse an already-active
tempfile.
2. I verified by code inspection that the function is only
called once per program. And also replacing this code
with a BUG() and running the test suite demonstrates
that it is not triggered there.
So we could probably just replace this with an assertion and
confirm that it's never called. However, the original intent
does seem to be that you _could_ call it when the
shallow_info is empty. And that's easy enough to do; since
the return value doesn't need to point to a writable buffer,
we can just return a string literal.
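So the fix is essentially (sketch; arguments elided, and the
non-empty case of course still writes the tempfile as before):

    const char *setup_temporary_shallow(...)
    {
            struct strbuf sb = STRBUF_INIT;

            if (write_shallow_commits(&sb, 0, ...)) {
                    ... create the tempfile, write sb to it, return its path ...
            }
            /*
             * No shallow info to write: the return value need not be
             * writable, so a string literal is fine.
             */
            return "";
    }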
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If we failed to write our new index file, we rollback our
lockfile to remove the temporary index. But if we fail
before we even get to the write step (because reading the
old index failed), we leave the lockfile in place, which
makes no sense.
In practice this hasn't been a big deal because failing at
write_index_as_tree() typically results in the whole program
exiting (and thus the tempfile handler kicking in and
cleaning up the files). But this function should
consistently take responsibility for the resources it
allocates.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git archive" did not work well with pathspecs and the
export-ignore attribute.
* rs/archive-excluded-directory:
archive: don't queue excluded directories
archive: factor out helper functions for handling attributes
t5001: add tests for export-ignore attributes and exclude pathspecs
Conversion from uchar[20] to struct object_id continues; this is to
ensure that we do not assume sizeof(struct object_id) is the same
as the length of SHA-1 hash (or length of longest hash we support).
* po/read-graft-line:
commit: rewrite read_graft_line
commit: allocate array using object_id size
commit: replace the raw buffer with strbuf in read_graft_line
sha1_file: fix definition of null_sha1
"branch --set-upstream" that has been deprecated in Git 1.8 has
finally been retired.
* ks/branch-set-upstream:
branch: quote branch/ref names to improve readability
builtin/branch: stop supporting the "--set-upstream" option
t3200: cleanup cruft of a test
Killing "git merge --edit" before the editor returns control left
the repository in a state with MERGE_MSG but without MERGE_HEAD,
which incorrectly tells the subsequent "git commit" that there was
a squash merge in progress. This has been fixed.
* mg/killed-merge:
merge: save merge state earlier
merge: split write_merge_state in two
merge: clarify call chain
Documentation/git-merge: explain --continue
The code to acquire a lock on a reference (e.g. while accepting a
push from a client) used to immediately fail when the reference is
already locked---now it waits for a very short while and retries,
which can make it succeed if the lock holder was holding it during
a read-only operation.
* mh/ref-lock-entry:
refs: retry acquiring reference locks for 100ms
"[gc] rerereResolved = 5.days" used to be invalid, as the variable
is defined to take an integer counting the number of days. It is
now allowed.
* jc/cutoff-config:
rerere: allow approxidate in gc.rerereResolved/gc.rerereUnresolved
rerere: represent time duration in timestamp_t internally
t4200: parameterize "rerere gc" custom expiry test
t4200: gather "rerere gc" together
t4200: make "rerere gc" test more robust
t4200: give us a clean slate after "rerere gc" tests
We used to spend more cycles than necessary allocating and freeing
pieces of memory while writing each index entry out. This has been
optimized.
* kw/write-index-reduce-alloc:
read-cache: avoid allocating every ondisk entry when writing
read-cache: fix memory leak in do_write_index
perf: add test for writing the index
Code clean-up to avoid mixing values read from the .gitmodules file
and values read from the .git/config file.
* bw/submodule-config-cleanup:
submodule: remove gitmodules_config
unpack-trees: improve loading of .gitmodules
submodule-config: lazy-load a repository's .gitmodules file
submodule-config: move submodule-config functions to submodule-config.c
submodule-config: remove support for overlaying repository config
diff: stop allowing diff to have submodules configured in .git/config
submodule: remove submodule_config callback routine
unpack-trees: don't respect submodule.update
submodule: don't rely on overlayed config when setting diffopts
fetch: don't overlay config with submodule-config
submodule--helper: don't overlay config in update-clone
submodule--helper: don't overlay config in remote_submodule_branch
add, reset: ensure submodules can be added or reset
submodule: don't use submodule_from_name
t7411: check configuration parsing errors
"gitweb" shows a link to visit the 'raw' contents of blbos in the
history overview page.
* js/gitweb-raw-blob-link-in-history:
gitweb: add 'raw' blob_plain link in history overview
"git apply" that is used as a better "patch -p1" failed to apply a
taken from a file with CRLF line endings to a file with CRLF line
endings. The root cause was because it misused convert_to_git()
that tried to do "safe-crlf" processing by looking at the index
entry at the same path, which is a nonsense---in that mode, "apply"
is not working on the data in (or derived from) the index at all.
This has been fixed.
* tb/apply-with-crlf:
apply: file commited with CRLF should roundtrip diff and apply
convert: add SAFE_CRLF_KEEP_CRLF
Test update to improve coverage for "git stash" operations.
* jt/stash-tests:
stash: add a test for stashing in a detached state
stash: add a test for when apply fails during stash branch
stash: add a test for stash create with no files
"git interpret-trailers" has been taught a "--parse" and a few
other options to make it easier for scripts to grab existing
trailer lines from a commit log message.
* jk/trailers-parse:
doc/interpret-trailers: fix "the this" typo
pretty: support normalization options for %(trailers)
t4205: refactor %(trailers) tests
pretty: move trailer formatting to trailer.c
interpret-trailers: add --parse convenience option
interpret-trailers: add an option to unfold values
interpret-trailers: add an option to show only existing trailers
interpret-trailers: add an option to show only the trailers
trailer: put process_trailers() options into a struct
"git interpret-trailers" learned to take the trailer specifications
from the command line that overrides the configured values.
* pb/trailers-from-command-line:
interpret-trailers: fix documentation typo
interpret-trailers: add options for actions
trailers: introduce struct new_trailer_item
trailers: export action enums and corresponding lookup functions
A handful of bugfixes and an improvement to "diff --color-moved".
* jt/diff-color-move-fix:
diff: define block by number of alphanumeric chars
diff: respect MIN_BLOCK_LENGTH for last block
diff: avoid redundantly clearing a flag
"git diff" has been taught to optionally paint new lines that are
the same as deleted lines elsewhere differently from genuinely new
lines.
* sb/diff-color-move: (25 commits)
diff: document the new --color-moved setting
diff.c: add dimming to moved line detection
diff.c: color moved lines differently, plain mode
diff.c: color moved lines differently
diff.c: buffer all output if asked to
diff.c: emit_diff_symbol learns about DIFF_SYMBOL_SUMMARY
diff.c: emit_diff_symbol learns about DIFF_SYMBOL_STAT_SEP
diff.c: convert word diffing to use emit_diff_symbol
diff.c: convert show_stats to use emit_diff_symbol
diff.c: convert emit_binary_diff_body to use emit_diff_symbol
submodule.c: migrate diff output to use emit_diff_symbol
diff.c: emit_diff_symbol learns DIFF_SYMBOL_REWRITE_DIFF
diff.c: emit_diff_symbol learns about DIFF_SYMBOL_BINARY_FILES
diff.c: emit_diff_symbol learns DIFF_SYMBOL_HEADER
diff.c: emit_diff_symbol learns DIFF_SYMBOL_FILEPAIR_{PLUS, MINUS}
diff.c: emit_diff_symbol learns DIFF_SYMBOL_CONTEXT_INCOMPLETE
diff.c: emit_diff_symbol learns DIFF_SYMBOL_WORDS[_PORCELAIN]
diff.c: migrate emit_line_checked to use emit_diff_symbol
diff.c: emit_diff_symbol learns DIFF_SYMBOL_NO_LF_EOF
diff.c: emit_diff_symbol learns DIFF_SYMBOL_CONTEXT_FRAGINFO
...
Recent updates to the HTTP layer unconditionally used features of
libCurl without checking for their existence, causing compilation
errors; this has been fixed. The code has also been migrated to
check feature macros, not version numbers, to cope better with
libCurl builds to which vendors have backported features.
* tc/curl-with-backports:
http: use a feature check to enable GSSAPI delegation control
http: fix handling of missing CURLPROTO_*
When the handshake with a subprocess filter notices that the process
asked for an unknown capability, Git did not report what program
the offending subprocess was running. This has been corrected.
* cc/subprocess-handshake-missing-capabilities:
sub-process: print the cmd when a capability is unsupported