In preparation for implementing narrow/partial clone, the object
walking machinery can now be told to "filter" some objects out of
the enumeration.
* jh/object-filtering:
rev-list: support --no-filter argument
list-objects-filter-options: support --no-filter
list-objects-filter-options: fix 'keword' typo in comment
pack-objects: add list-objects filtering
rev-list: add list-objects filtering support
list-objects: filter objects in traverse_commit_list
oidset: add iterator methods to oidset
oidmap: add oidmap iterator methods
dir: allow exclusions from blob in addition to file
The way "git worktree add" determines what branch to create from
where and checkout in the new worktree has been updated a bit.
* tg/worktree-create-tracking:
add worktree.guessRemote config option
worktree: add --guess-remote flag to add subcommand
worktree: make add <path> <branch> dwim
worktree: add --[no-]track option to the add subcommand
worktree: add can be created from any commit-ish
checkout: factor out functions to new lib file
A new mechanism to upgrade the wire protocol in place has been
proposed and demonstrated to work with older versions of Git
without harming them.
* bw/protocol-v1:
Documentation: document Extra Parameters
ssh: introduce a 'simple' ssh variant
i5700: add interop test for protocol transition
http: tell server that the client understands v1
connect: tell server that the client understands v1
connect: teach client to recognize v1 server response
upload-pack, receive-pack: introduce protocol version 1
daemon: recognize hidden request arguments
protocol: introduce protocol extension mechanisms
pkt-line: add packet_write function
connect: in ref advertisement, shallows are last
Factor the functions out so they can be re-used from other places.
In particular, these functions will be re-used in builtin/worktree.c
to make "git worktree add" dwim more.
While there, add some documentation to the functions.
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Create traverse_commit_list_filtered() and add filtering
interface to allow certain objects to be omitted from the
traversal.
Update traverse_commit_list() to be a wrapper for the above
with a null filter to minimize the number of callers that
needed to be changed.
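A minimal sketch of the wrapper pattern (the parameter lists here are
illustrative; the real prototypes live in list-objects.h):
    void traverse_commit_list(struct rev_info *revs,
                              show_commit_fn show_commit,
                              show_object_fn show_object,
                              void *show_data)
    {
        /* a NULL filter keeps the old "show every object" behavior */
        traverse_commit_list_filtered(NULL, revs,
                                      show_commit, show_object,
                                      show_data, NULL);
    }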
Object filtering will be used in a future commit by rev-list
and pack-objects for partial clone and fetch to omit unwanted
objects from the result.
traverse_bitmap_commit_list() does not work with filtering.
If a packfile bitmap is present, it will not be used. It
should be possible to extend such support in the future (at
least to simple filters that do not require object pathnames),
but that is beyond the scope of this patch series.
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Reviewed-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We learned to talk to watchman to speed up "git status" and other
operations that need to see which paths have been modified.
* bp/fsmonitor:
fsmonitor: preserve utf8 filenames in fsmonitor-watchman log
fsmonitor: read entirety of watchman output
fsmonitor: MINGW support for watchman integration
fsmonitor: add a performance test
fsmonitor: add a sample integration script for Watchman
fsmonitor: add test cases for fsmonitor extension
split-index: disable the fsmonitor extension when running the split index test
fsmonitor: add a test tool to dump the index extension
update-index: add fsmonitor support to update-index
ls-files: Add support in ls-files to display the fsmonitor valid bit
fsmonitor: add documentation for the fsmonitor extension.
fsmonitor: teach git to optionally utilize a file system monitor to speed up detecting new or changed files.
update-index: add a new --force-write-index option
preload-index: add override to enable testing preload-index
bswap: add 64 bit endianness helper get_be64
Create protocol.{c,h} and provide functions which future servers and
clients can use to determine which protocol to use or is being used.
Also introduce the 'GIT_PROTOCOL' environment variable which will be
used to communicate a colon-separated list of keys with optional values
to a server. Unknown keys and values must be tolerated. This mechanism
is used to communicate which version of the wire protocol a client would
like to use with a server.
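For example (illustrative value), a client that wants to use protocol
version 1 would advertise that by sending:
    GIT_PROTOCOL="version=1"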
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Operations that do not touch (the majority of) packed refs have been
optimized by making accesses to the packed-refs file lazy; we no
longer pre-parse everything, and an access to a single ref in
packed-refs no longer touches the majority of irrelevant refs either.
* mh/mmap-packed-refs: (21 commits)
packed-backend.c: rename a bunch of things and update comments
mmapped_ref_iterator: inline into `packed_ref_iterator`
ref_cache: remove support for storing peeled values
packed_ref_store: get rid of the `ref_cache` entirely
ref_store: implement `refs_peel_ref()` generically
packed_read_raw_ref(): read the reference from the mmapped buffer
packed_ref_iterator_begin(): iterate using `mmapped_ref_iterator`
read_packed_refs(): ensure that references are ordered when read
packed_ref_cache: keep the `packed-refs` file mmapped if possible
packed-backend.c: reorder some definitions
mmapped_ref_iterator_advance(): no peeled value for broken refs
mmapped_ref_iterator: add iterator over a packed-refs file
packed_ref_cache: remember the file-wide peeling state
read_packed_refs(): read references with minimal copying
read_packed_refs(): make parsing of the header line more robust
read_packed_refs(): only check for a header at the top of the file
read_packed_refs(): use mmap to read the `packed-refs` file
die_unterminated_line(), die_invalid_line(): new functions
packed_ref_cache: add a backlink to the associated `packed_ref_store`
prefix_ref_iterator: break when we leave the prefix
...
Add a test utility (test-drop-caches) that flushes all changes to disk
and then drops the file system cache on Windows, Linux, and OSX.
Add a perf test (p7519-fsmonitor.sh) for fsmonitor.
By default, the performance test will utilize the Watchman file system
monitor if it is installed. If Watchman is not installed, it will use a
dummy integration script that does not report any new or modified files.
The dummy script has very little overhead, which provides optimistic results.
The performance test will also use the untracked cache feature if it is
available, as fsmonitor uses it to speed up scanning for untracked files.
There are 4 environment variables that can be used to alter the default
behavior of the performance test:
GIT_PERF_7519_UNTRACKED_CACHE: used to configure core.untrackedCache
GIT_PERF_7519_SPLIT_INDEX: used to configure core.splitIndex
GIT_PERF_7519_FSMONITOR: used to configure core.fsmonitor
GIT_PERF_7519_DROP_CACHE: if set, the OS caches are dropped between tests
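An illustrative invocation from within t/perf (the values here are
just examples; "./run" is the perf suite's usual driver script):
    GIT_PERF_7519_DROP_CACHE=1 ./run p7519-fsmonitor.sh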
The big win for using fsmonitor is the elimination of the need to scan the
working directory looking for changed and untracked files. If the file
information is all cached in RAM, the benefits are reduced.
Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add a test utility (test-dump-fsmonitor) that will dump the fsmonitor
index extension.
Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When the index is read from disk, the fsmonitor index extension is used
to flag the last known potentially dirty index entries. The registered
core.fsmonitor command is called with the time the index was last
updated and returns the list of files changed since that time. This list
is used to flag any additional dirty cache entries and untracked cache
directories.
We can then use this valid state to speed up preload_index(),
ie_match_stat(), and refresh_cache_ent() as they do not need to lstat()
files to detect potential changes for those entries marked
CE_FSMONITOR_VALID.
In addition, if the untracked cache is turned on, valid_cached_dir() can
skip checking directories for new or changed files as fsmonitor will
invalidate the cache only for those directories that have been
identified as having potential changes.
To keep the CE_FSMONITOR_VALID state accurate during git operations:
when git updates a cache entry to match the current state on disk,
it will now set the CE_FSMONITOR_VALID bit.
Inversely, anytime git changes a cache entry, the CE_FSMONITOR_VALID bit
is cleared and the corresponding untracked cache directory is marked
invalid.
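A minimal sketch of that bookkeeping (the helper names and exact call
sites here are illustrative, not necessarily what the patch uses):
    static void mark_fsmonitor_valid(struct cache_entry *ce)
    {
        /* the entry now matches what is on disk */
        ce->ce_flags |= CE_FSMONITOR_VALID;
    }
    static void mark_fsmonitor_invalid(struct index_state *istate,
                                       struct cache_entry *ce)
    {
        /* git itself changed the entry */
        ce->ce_flags &= ~CE_FSMONITOR_VALID;
        untracked_cache_invalidate_path(istate, ce->name);
    }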
Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This is similar to using the hashmap in hashmap.c, but with an
easier-to-use API. In particular, custom entry comparisons no longer
need to be written, and lookups can be done without constructing a
temporary entry structure.
This is implemented as a thin wrapper over the hashmap API. In
particular, this means that there is an additional 4-byte overhead due
to the fact that the first 4 bytes of the hash are redundantly stored.
For now, I'm taking the simpler approach, but if need be, we can
reimplement oidmap without affecting the callers significantly.
oidset has been updated to use oidmap.
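Typical usage then looks roughly like this (a sketch; see oidmap.h for
the authoritative declarations): the caller embeds a struct
oidmap_entry as the first member of its own entry type and stores and
looks up entries by object id.
    struct my_entry {
        struct oidmap_entry entry; /* must be the first member */
        int count;
    };
    static struct my_entry *find_or_insert(struct oidmap *map,
                                           const struct object_id *oid)
    {
        struct my_entry *e = oidmap_get(map, oid);
        if (!e) {
            e = xcalloc(1, sizeof(*e));
            oidcpy(&e->entry.oid, oid);
            oidmap_put(map, e);
        }
        return e;
    }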
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Keep a copy of the `packed-refs` file contents in memory for as long
as a `packed_ref_cache` object is in use:
* If the system allows it, keep the `packed-refs` file mmapped.
* If not (either because the system doesn't support `mmap()` at all,
or because a file that is currently mmapped cannot be replaced via
`rename()`), then make a copy of the file's contents in
heap-allocated space, and keep that around instead.
We base the choice of behavior on a new build-time switch,
`MMAP_PREVENTS_DELETE`. By default, this switch is set for Windows
variants.
After this commit, `MMAP_NONE` and `MMAP_TEMPORARY` are still handled
identically. But the next commit will introduce a difference.
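A sketch of how the build-time switch might select among the
strategies (MMAP_OK is an illustrative name for the "keep it mapped"
case; the real definitions live in refs/packed-backend.c):
    enum mmap_strategy {
        MMAP_NONE,      /* never mmap(); always work from a heap copy */
        MMAP_TEMPORARY, /* mmap() is usable, but the file must not stay
                         * mapped while it could be replaced */
        MMAP_OK         /* keep the file mmapped while it is in use */
    };
    #if defined(NO_MMAP)
    static enum mmap_strategy mmap_strategy = MMAP_NONE;
    #elif defined(MMAP_PREVENTS_DELETE)
    static enum mmap_strategy mmap_strategy = MMAP_TEMPORARY;
    #else
    static enum mmap_strategy mmap_strategy = MMAP_OK;
    #endif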
This whole change is still pointless, because we only read the
`packed-refs` file contents immediately after instantiating the
`packed_ref_cache`. But that will soon change.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Many of our programs consider that it is OK to release dynamic
storage that is used throughout the life of the program by simply
exiting, but this makes it harder for leak detection tools to avoid
reporting false positives. Plug many existing leaks and introduce
a mechanism for developers to mark that the region of memory
pointed at by a pointer is not lost/leaking, to help these tools.
* jk/leak-checkers:
add UNLEAK annotation for reducing leak false positives
set_git_dir: handle feeding gitdir to itself
repository: free fields before overwriting them
reset: free allocated tree buffers
reset: make tree counting less confusing
config: plug user_config leak
update-index: fix cache entry leak in add_one_file()
add: free leaked pathspec after add_files_to_cache()
test-lib: set LSAN_OPTIONS to abort by default
test-lib: --valgrind should not override --verbose-log
Platforms that ship with a separate sha1 with collision detection
library can link to it instead of using the copy we ship as part of
our source tree.
* ti/external-sha1dc:
sha1dc: allow building with the external sha1dc library
sha1dc: build git plumbing code more explicitly
It's a common pattern in git commands to allocate some
memory that should last for the lifetime of the program and
then not bother to free it, relying on the OS to throw it
away.
This keeps the code simple, and it's fast (we don't waste
time traversing structures or calling free at the end of the
program). But it also triggers warnings from memory-leak
checkers like valgrind or LSAN. They know that the memory
was still allocated at program exit, but they don't know
_when_ the leaked memory stopped being useful. If it was
early in the program, then it's probably a real and
important leak. But if it was used right up until program
exit, it's not an interesting leak and we'd like to suppress
it so that we can see the real leaks.
This patch introduces an UNLEAK() macro that lets us do so.
To understand its design, let's first look at some of the
alternatives.
Unfortunately the suppression systems offered by
leak-checking tools don't quite do what we want. A
leak-checker basically knows two things:
1. Which blocks were allocated via malloc, and the
callstack during the allocation.
2. Which blocks were left un-freed at the end of the
program (and which are unreachable, but more on that
later).
Their suppressions work by mentioning the function or
callstack of a particular allocation, and marking it as OK
to leak. So imagine you have code like this:
    int cmd_foo(...)
    {
        /* this allocates some memory */
        char *p = some_function();
        printf("%s", p);
        return 0;
    }
You can say "ignore allocations from some_function(),
they're not leaks". But that's not right. That function may
be called elsewhere, too, and we would potentially want to
know about those leaks.
So you can say "ignore the callstack when main calls
some_function". That works, but your annotations are
brittle. In this case it's only two functions, but you can
imagine that the actual allocation is much deeper. If any of
the intermediate code changes, you have to update the
suppression.
What we _really_ want to say is that "the value assigned to
p at the end of the function is not a real leak". But
leak-checkers can't understand that; they don't know about
"p" in the first place.
However, we can do something a little bit tricky if we make
some assumptions about how leak-checkers work. They
generally don't just report all un-freed blocks. That would
report even globals which are still accessible when the
leak-check is run. Instead they take some set of memory
(like BSS) as a root and mark it as "reachable". Then they
scan the reachable blocks for anything that looks like a
pointer to a malloc'd block, and consider that block
reachable. And then they scan those blocks, and so on,
transitively marking anything reachable from a global as
"not leaked" (or at least leaked in a different category).
So we can mark the value of "p" as reachable by putting it
into a variable with program lifetime. One way to do that is
to just mark "p" as static. But that actually affects the
run-time behavior if the function is called twice (you
aren't likely to call main() twice, but some of our cmd_*()
functions are called from other commands).
Instead, we can trick the leak-checker by putting the value
into _any_ reachable bytes. This patch keeps a global
linked-list of bytes copied from "unleaked" variables. That
list is reachable even at program exit, which confers
recursive reachability on whatever values we unleak.
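A minimal sketch of that trick (simplified; the real helper is only
compiled in when leak checking is enabled):
    static struct suppressed_leak {
        struct suppressed_leak *next;
        char data[1];   /* holds the copied bytes */
    } *suppressed_leaks;
    static void unleak_memory(const void *ptr, size_t len)
    {
        /* copy the bytes somewhere with program lifetime */
        struct suppressed_leak *root = xmalloc(sizeof(*root) + len);
        memcpy(root->data, ptr, len);
        root->next = suppressed_leaks;
        suppressed_leaks = root;
    }
    #define UNLEAK(var) unleak_memory(&(var), sizeof(var))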
In other words, you can do:
    int cmd_foo(...)
    {
        char *p = some_function();
        printf("%s", p);
        UNLEAK(p);
        return 0;
    }
to annotate "p" and suppress the leak report.
But wait, couldn't we just say "free(p)"? In this toy
example, yes. But UNLEAK()'s byte-copying strategy has
several advantages over actually freeing the memory:
1. It's recursive across structures. In many cases our "p"
is not just a pointer, but a complex struct whose
fields may have been allocated by a sub-function. And
in some cases (e.g., dir_struct) we don't even have a
function which knows how to free all of the struct
members.
By marking the struct itself as reachable, we confer
reachability on any pointers it contains (including those
found in embedded structs, or reachable by walking
heap blocks recursively).
2. It works on cases where we're not sure if the value is
allocated or not. For example:
    char *p = argc > 1 ? argv[1] : some_function();
It's safe to use UNLEAK(p) here, because it's not
freeing any memory. In the case that we're pointing to
argv here, the reachability checker will just ignore
our bytes.
3. Likewise, it works even if the variable has _already_
been freed. We're just copying the pointer bytes. If
the block has been freed, the leak-checker will skip
over those bytes as uninteresting.
4. Because it's not actually freeing memory, you can
UNLEAK() before we are finished accessing the variable.
This is helpful in cases like this:
    char *p = some_function();
    return another_function(p);
Writing this with free() requires:
    int ret;
    char *p = some_function();
    ret = another_function(p);
    free(p);
    return ret;
But with unleak we can just write:
    char *p = some_function();
    UNLEAK(p);
    return another_function(p);
This patch adds the UNLEAK() macro and enables it
automatically when Git is compiled with SANITIZE=leak. In
normal builds it's a noop, so we pay no runtime cost.
It also adds some UNLEAK() annotations to show off how the
feature works. On top of other recent leak fixes, these are
enough to get t0000 and t0001 to pass when compiled with
LSAN.
Note the case in commit.c which actually converts a
strbuf_release() into an UNLEAK. This code was already
non-leaky, but the free didn't do anything useful, since
we're exiting. Converting it to an annotation means that
non-leak-checking builds pay no runtime cost. The cost is
minimal enough that it's probably not worth going on a
crusade to convert these kinds of frees to UNLEAKs. I did it
here for consistency with the "sb" leak (though it would
have been equally correct to go the other way, and turn them
both into strbuf_release() calls).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We used to spend more cycles than necessary allocating and freeing
a piece of memory while writing each index entry out. This has been
optimized.
* kw/write-index-reduce-alloc:
read-cache: avoid allocating every ondisk entry when writing
read-cache: fix memory leak in do_write_index
perf: add test for writing the index
Currently, sha1_file.c and cache.h contain many functions, both related
to and unrelated to packfiles. This makes both files very large and
causes an unclear separation of concerns.
Create a new file, packfile.c, to hold all packfile-related functions
currently in sha1_file.c. It has a corresponding header packfile.h.
In this commit, the pack name-related functions are moved. Subsequent
commits will move the other functions.
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
These used to be for manipulating the in-memory repo_tree structure,
but nowadays they are convenience wrappers to handle a few git-vs-svn
mismatches:
1. Git does not track empty directories but Subversion does. When
looking up a path in git that Subversion thinks exists and finding
nothing, we can safely assume that the path represents a
directory. This is needed when a later Subversion revision
modifies that directory.
2. Subversion allows deleting a file by copying. In Git fast-import
we have to handle that more explicitly as a deletion.
These are details of the tool's interaction with git fast-import.
Move them to fast_export.c, where other such details are handled.
This way the function names do not start with a repo_ prefix that
would clash with the repository object introduced in
v2.14.0-rc0~38^2~16 (repository: introduce the repository object,
2017-06-22) or an svn_ prefix that would clash with libsvn (in case
someone wants to link this code with libsvn some day).
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "ref-store" code reorganization continues.
* mh/packed-ref-store: (32 commits)
files-backend: cheapen refname_available check when locking refs
packed_ref_store: handle a packed-refs file that is a symlink
read_packed_refs(): die if `packed-refs` contains bogus data
t3210: add some tests of bogus packed-refs file contents
repack_without_refs(): don't lock or unlock the packed refs
commit_packed_refs(): remove call to `packed_refs_unlock()`
clear_packed_ref_cache(): don't protest if the lock is held
packed_refs_unlock(), packed_refs_is_locked(): new functions
packed_refs_lock(): report errors via a `struct strbuf *err`
packed_refs_lock(): function renamed from lock_packed_refs()
commit_packed_refs(): use a staging file separate from the lockfile
commit_packed_refs(): report errors rather than dying
packed_ref_store: make class into a subclass of `ref_store`
packed-backend: new module for handling packed references
packed_read_raw_ref(): new function, replacing `resolve_packed_ref()`
packed_ref_store: support iteration
packed_peel_ref(): new function, extracted from `files_peel_ref()`
repack_without_refs(): take a `packed_ref_store *` parameter
get_packed_ref(): take a `packed_ref_store *` parameter
rollback_packed_refs(): take a `packed_ref_store *` parameter
...
Add a performance test for writing the index, to be able to
determine whether changes to how the ondisk structure is allocated
help.
Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Some distros provide the SHA1 collision-detection code as a shared
library. It's the same code as we have in the git tree (but possibly
with a different init default for the hash), and git can link with
it as well; at least, it may make maintenance easier, according to
our security guys.
This patch allows the user to build git linking with the external
sha1dc library instead of the built-in code. The user needs to define
DC_SHA1_EXTERNAL explicitly. By default, without it, the built-in
sha1dc code is used as before.
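For example (the knob name comes from this patch; any extra CFLAGS or
LIBS needed to locate the installed library are distro-specific and
not shown):
    make DC_SHA1_EXTERNAL=YesPlease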
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The plumbing code between sha1dc and git is defined in
sha1dc_git.[ch], but these files aren't compiled or included
directly, only via indirect inclusion from the sha1dc code. This is
slightly confusing when you try to trace the build flow.
This patch brings the following changes for simplification:
- Make sha1dc_git.c stand-alone and build it from the Makefile
- Make sha1dc_git.h the common header that includes the appropriate
sha1.h, depending on the build condition
- Move the comments for the plumbing code from the header to the
definitions
This is also meant as preliminary work for further plumbing with an
external sha1dc shared library.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add the 'style' build rule which will run git-clang-format on the diff
between HEAD and the current worktree. The result is a diff of
suggested changes.
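Typical use (assuming clang-format and git-clang-format are available
in PATH):
    make style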
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We started using "%" PRItime, imitating "%" PRIuMAX and friends, as
a way to format the internal timestamp value, but this does not
play well with the gettext(1) i18n framework, and causes "make pot"
that is run by the l10n coordinator to create a broken po/git.pot
file. This is a possible workaround for that problem.
* jc/po-pritime-fix:
Makefile: help gettext tools to cope with our custom PRItime format
A recent update made it easier to use the "-fsanitize=" option while
compiling but supported only one sanitize option. Allow more than
one to be combined, joined with a comma, like "make SANITIZE=foo,bar".
* jk/build-with-asan:
Makefile: allow combining UBSan with other sanitizers
We started using our own timestamp_t type and PRItime format
specifier to go along with it, so that we can later change the
underlying type and output format more easily, but this does not
play well with gettext tools.
Because gettext tools need to keep the *.po file portable across
platforms, they have to special-case the format specifiers like
PRIuMAX that are known types in inttypes.h, instead of letting CPP
handle strings like
"%" PRIuMAX " seconds ago"
as an ordinary string concatenation. They fundamentally cannot do
the same for our own custom type/format.
Given that po/git.pot needs to be generated only once every release
and by only one person, i.e. the l10n coordinator, let's update the
Makefile rule to generate po/git.pot so that gettext tools are run
on a munged set of sources in which all mentions of PRItime are
replaced with PRIuMAX, which is what we happen to use right now.
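Roughly speaking, this amounts to running something like the
following on a copy of each source file before xgettext sees it
(an illustrative command and paths, not the literal Makefile rule):
    sed -e 's/PRItime/PRIuMAX/g' builtin/foo.c >po-temp/builtin/foo.c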
This way, developers do not have to care that PRItime does not play
well with gettext, and translators do not have to care that we use
our own PRItime.
The credit for the idea to munge the source files goes to Dscho.
Possible bugs are mine.
Helped-by: Jiang Xin <worldhello.net@gmail.com>
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Multiple sanitizers can be specified as a comma-separated list. Set
the flag NO_UNALIGNED_LOADS even if UndefinedBehaviorSanitizer is not
the only sanitizer to build with.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The build procedure has been improved to allow building and testing
Git with address sanitizer more easily.
* jk/build-with-asan:
Makefile: disable unaligned loads with UBSan
Makefile: turn off -fomit-frame-pointer with sanitizers
Makefile: add helper for compiling with -fsanitize
test-lib: turn on ASan abort_on_error by default
test-lib: set ASAN_OPTIONS variable before we run git
The "collission-detecting" implementation of SHA-1 hash we borrowed
from is replaced by directly binding the upstream project as our
submodule. Glitches on minority platforms are still being worked out.
* ab/sha1dc:
sha1collisiondetection: automatically enable when submodule is populated
sha1dc: optionally use sha1collisiondetection as a submodule
The undefined behavior sanitizer complains about unaligned
loads, even if they're OK for a particular platform in
practice. It's possible that they _are_ a problem, of
course, but since it's a known tradeoff the UBSan errors are
just noise.
Let's quiet it automatically by building with
NO_UNALIGNED_LOADS when SANITIZE=undefined is in use.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The ASan manual recommends disabling this optimization, as
it can make the backtraces produced by the tool harder to
follow (and since this is a test-debug build, we don't care
about squeezing out every last drop of performance).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
You can already build and test with ASan by doing:
    make CFLAGS=-fsanitize=address test
but there are a few slight annoyances:
1. It's a little long to type.
2. It overrides your CFLAGS completely. You'd probably
still want -O2, for instance.
3. It's a good idea to also turn off "recovery", which
lets the program keep running after a problem is
detected (with the intention of finding as many bugs as
possible in a given run). Since Git's test suite should
generally run without triggering any problems, it's
better to abort immediately and fail the test when we
do find an issue.
With this patch, all of that happens automatically when you
run:
    make SANITIZE=address test
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduce a "repository" object to eventually make it easier to
work in multiple repositories (the primary focus is to work with
the superproject and its submodules) in a single process.
* bw/repo-object:
ls-files: use repository object
repository: enable initialization of submodules
submodule: convert is_submodule_initialized to work on a repository
submodule: add repo_read_gitmodules
submodule-config: store the_submodule_cache in the_repository
repository: add index_state to struct repo
config: read config from a repository object
path: add repo_worktree_path and strbuf_repo_worktree_path
path: add repo_git_path and strbuf_repo_git_path
path: worktree_git_path() should not use file relocation
path: convert do_git_path to take a 'struct repository'
path: convert strbuf_git_common_path to take a 'struct repository'
path: always pass in commondir to update_common_dir
path: create path.h
environment: store worktree in the_repository
environment: place key repository state in the_repository
repository: introduce the repository object
environment: remove namespace_len variable
setup: add comment indicating a hack
setup: don't perform lazy initialization of repository state
If a user wants to experiment with the version of collision-detecting
sha1 from the submodule, the user has to not just populate the
submodule but also turn the knob.
A Makefile trick to do so is easy enough, so let's do this. When
somebody with a copy of the submodule populated does not want to use
it, that can be done by overriding it in config.mak or from the
command line.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add an option to use the sha1collisiondetection library from the
submodule in sha1collisiondetection/ instead of in the copy in the
sha1dc/ directory.
This allows us to try out the submodule in sha1collisiondetection
without breaking the build for anyone who's not expecting it, as we
work out any kinks.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Introduce the repository object 'struct repository' which can be used to
hold all state pertaining to a git repository.
Some of the benefits of object-ifying a repository are:
1. Make the code base more readable and easier to reason about.
2. Allow for working on multiple repositories, specifically
submodules, within the same process. Currently the process for
working on a submodule involves setting up an argv_array of options
for a particular command and then launching a child process to
execute the command in the context of the submodule. This is
clunky and can require lots of little hacks in order to ensure
correctness. Ideally it would be nice to simply pass a repository
and an options struct to a command.
3. Eliminating reliance on global state will make it easier to
enable the use of threading to improve performance.
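An illustrative subset of what such an object holds (the field names
here are examples, not the complete or authoritative definition in
repository.h):
    struct repository {
        /* paths identifying this repository */
        char *gitdir;
        char *commondir;
        char *worktree;
        /* per-repository state that used to be global */
        struct index_state *index;
        struct config_set *config;
    };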
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that the interface between `files_ref_store` and
`packed_ref_store` is relatively narrow, move the latter into a new
module, "refs/packed-backend.h" and "refs/packed-backend.c". It still
doesn't quite implement the `ref_store` interface, but it will soon.
This commit moves code around and adjusts its visibility, but doesn't
change anything.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Update "perl-compatible regular expression" support to enable JIT
and also allow linking with the newer PCRE v2 library.
* ab/pcre-v2:
grep: add support for PCRE v2
grep: un-break building with PCRE >= 8.32 without --enable-jit
grep: un-break building with PCRE < 8.20
grep: un-break building with PCRE < 8.32
grep: add support for the PCRE v1 JIT API
log: add -P as a synonym for --perl-regexp
grep: skip pthreads overhead when using one thread
grep: don't redundantly compile throwaway patterns under threading
We added an FREAD_READS_DIRECTORIES Makefile knob long ago
in cba22528f (Add compat/fopen.c which returns NULL on
attempt to open directory, 2008-02-08) to handle systems
where reading from a directory returned garbage. This works
by catching the problem at the fopen() stage and returning
NULL.
More recently, we found that there is a class of systems
(including Linux) where fopen() succeeds but fread() fails.
Since the solution is the same (having fopen return NULL),
they use the same Makefile knob as of e2d90fd1c
(config.mak.uname: set FREAD_READS_DIRECTORIES for Linux and
FreeBSD, 2017-05-03).
This works fine except for one thing: the autoconf test in
configure.ac to set FREAD_READS_DIRECTORIES actually checks
whether fread succeeds. Which means that on Linux systems,
the knob isn't set (and we even override the config.mak.uname
default). t1308 catches the failure.
We can fix this by tweaking the autoconf test to cover both
cases. In theory we might care about the distinction between
the traditional "fread reads directories" case and the new
"fopen opens directories". But since our solution catches
the problem at the fopen stage either way, we don't actually
need to know the difference. The "fopen" case is a superset.
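A sketch of the kind of check configure can run (not the literal
configure.ac fragment): compile and run a small program, and enable
the knob whenever fopen() is willing to open a directory at all,
regardless of whether a subsequent fread() succeeds.
    #include <stdio.h>
    int main(void)
    {
        FILE *f = fopen(".", "r");
        if (!f)
            return 1; /* fopen refuses directories: knob not needed */
        fclose(f);
        return 0;     /* fopen opened a directory: set the knob */
    }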
This does mean the FREAD_READS_DIRECTORIES name is slightly
misleading. Probably FOPEN_OPENS_DIRECTORIES would be more
accurate. But it would be disruptive to simply change the
name (people's existing build configs would fail), and it's
not worth the complexity of handling both. Let's just add a
comment in the knob description.
Reported-by: Øyvind A. Holm <sunny@sunbase.org>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The internal logic used in "git blame" has been libified to make it
easier for cgit to use.
* js/blame-lib: (29 commits)
blame: move entry prepend to libgit
blame: move scoreboard setup to libgit
blame: move scoreboard-related methods to libgit
blame: move fake-commit-related methods to libgit
blame: move origin-related methods to libgit
blame: move core structures to header
blame: create entry prepend function
blame: create scoreboard setup function
blame: create scoreboard init function
blame: rework methods that determine 'final' commit
blame: wrap blame_sort and compare_blame_final
blame: move progress updates to a scoreboard callback
blame: make sanity_check use a callback in scoreboard
blame: move no_whole_file_rename flag to scoreboard
blame: move xdl_opts flags to scoreboard
blame: move show_root flag to scoreboard
blame: move reverse flag to scoreboard
blame: move contents_from to scoreboard
blame: move copy/move thresholds to scoreboard
blame: move stat counters to scoreboard
...
The "collision detecting" SHA-1 implementation shipped with 2.13
was quite broken on some big-endian platforms and/or platforms that
do not like unaligned fetches. Update to the upstream code which
has already fixed these issues.
* ab/sha1dc-maint:
sha1dc: update from upstream