The codepath that reads the index.version configuration was broken
with a recent update, which has been corrected.
* ds/feature-macros:
repo-settings: read an int for index.version
Update installation procedure for Perforce on macOS in the CI jobs
running on Azure Pipelines, which had been failing.
* js/azure-ci-osx-fix:
ci(osx): use new location of the `perforce` cask
Several config options were combined into a repo_settings struct in
ds/feature-macros, including a move of the "index.version" config
setting in 7211b9e (repo-settings: consolidate some config settings,
2019-08-13).
Unfortunately, that file contains a lot of boilerplate and, in what is
clearly a case of copy-paste overload, the config setting is parsed
with repo_config_get_bool() instead of repo_config_get_int(). This means
that a setting "index.version=4" would not register correctly and would
revert to the default version of 3.
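For illustration, the symptom can be reproduced with something like the
following (a sketch; it assumes the `test-tool index-version` helper
that t1600-index.sh relies on is built and on the PATH):
    git init repo &&
    cd repo &&
    git config index.version 4 &&
    echo content >file &&
    git add file &&
    # before the fix this prints 2 (or 3), not the configured 4
    test-tool index-version <.git/index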
I caught this while incorporating v2.24.0-rc0 into the VFS for Git
codebase, where we really care that the index is in version 4.
This was not caught by the test suite because the version checks placed
in t1600-index.sh did not exercise this "basic" scenario enough. Here, we
modify the test so that these normal settings are checked without being
overridden by features.manyFiles or GIT_INDEX_VERSION. While the "default"
version is 3, this is demoted to version 2 in do_write_index() when the
extra features of version 3 are not necessary.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
A few days ago Travis CI updated their existing OSX images, including
the Homebrew database in the xcode10.1 OSX image that we use. Since
then installing dependencies in the 'osx-gcc' job fails when it tries
to link gcc@8:
+ brew link gcc@8
Error: No such keg: /usr/local/Cellar/gcc@8
GCC8 is still installed in the updated image, but unlike before this
update it is no longer linked to '/usr/local'; now we have to link it
by running 'brew link gcc'. So let's do that, and fall back to linking
gcc@8 if that doesn't work, just to be sure.
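The resulting linking step then boils down to the ||-chained command
described above (a sketch, omitting the surrounding CI script):
    # link whatever GCC Homebrew currently provides, and fall back
    # to the versioned keg if that fails
    brew link gcc ||
    brew link gcc@8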
Our builds on Azure Pipelines are unaffected by this issue. The OSX
image over there doesn't contain the gcc@8 package, so we have to
'brew install' it, which already takes care of linking it to
'/usr/local'. After that the 'brew link gcc' command added by this
patch fails, but the ||-chained fallback 'brew link gcc@8' command
succeeds with an "already linked" warning.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Leakfix.
* cb/pcre2-chartables-leakfix:
grep: avoid leak of chartables in PCRE2
grep: make PCRE2 aware of custom allocator
grep: make PCRE1 aware of custom allocator
The atomic push over smart HTTP transport did not work, which has
been corrected.
* bc/smart-http-atomic-push:
remote-curl: pass on atomic capability to remote side
The installation instruction for the zsh completion script (in
contrib/) has been a bit improved.
* mb/clarify-zsh-completion-doc:
completion: clarify installation instruction for zsh
The Azure Pipelines builds are failing for macOS due to a change in the
location of the perforce cask. The command outputs the following error:
+ brew install caskroom/cask/perforce
Error: caskroom/cask was moved. Tap homebrew/cask-cask instead.
So let's try to call `brew cask install perforce` first (which is what
that error message suggests, in a most round-about way).
Prior to 672f51cb we used to install the 'perforce' package with 'brew
install perforce' (note: no 'cask' in there). The justification for
672f51cb was that the command 'brew install perforce' simply stopped
working, after Homebrew folks decided that it's better to move the
'perforce' package to a "cask". Their justification for this move was
that 'brew install perforce' "can fail due to a checksum mismatch ...",
and casks can be installed without checksum verification. And indeed,
both 'brew cask install perforce' and 'brew install
caskroom/cask/perforce' printed something along the lines of:
==> No checksum defined for Cask perforce, skipping verification
It is unclear why 672f51cb used 'brew install caskroom/cask/perforce'
instead of 'brew cask install perforce'. It appears (by running both
commands on old Travis CI macOS images) that both commands worked all
the same already back then.
In any case, as the error message at the top of this commit message
shows, 'brew install caskroom/cask/perforce' has stopped working
recently, but 'brew cask install perforce' still does, so let's use
that.
CI servers are typically fresh virtual machines, but not always. To
accommodate that, let's try harder if `brew cask install perforce`
fails, by specifically pulling the latest `master` of the
`homebrew-cask` repository.
This will still fail, of course, when `homebrew-cask` falls behind
Perforce's release schedule. But once it is updated, we can now simply
re-run the failed jobs and they will pick up that update.
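Put together, the installation step becomes something along these lines
(a sketch; the tap path under `$(brew --repository)` is an assumption):
    brew cask install perforce || {
        # the cask definition may be stale; pull the latest
        # homebrew-cask and try once more
        cask_repo="$(brew --repository)"/Library/Taps/homebrew/homebrew-cask &&
        git -C "$cask_repo" pull &&
        brew cask install perforce
    }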
As for updating `homebrew-cask`: the automation started in
https://dev.azure.com/gitgitgadget/git/_build?definitionId=11&_a=summary
will be finished once the next Perforce upgrade comes around.
Helped-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
According to t/README, test_must_fail() should only be used to test for
failure in Git commands. Replace the invocations of
`test_must_fail grep` with `! grep`.
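For example, a conversion in one of the affected tests looks like this
(illustrative only, not a specific hunk from the patch):
    # before: test_must_fail is reserved for Git commands
    test_must_fail grep foo output
    # after: plain shell negation for system commands such as grep
    ! grep foo output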
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
As noted by Gábor in [1], the new tests in edefc31873 ("format-patch:
create leading components of output directory", 2019-10-11) cannot be
run independently. Fix this.
[1] https://public-inbox.org/git/20191011144650.GM29845@szeder.dev/
Signed-off-by: Bert Wesarg <bert.wesarg@googlemail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Originally, the CI/PR builds that build and test using Visual Studio
were implemented imitating `linux-clang`, i.e. still using the
`Makefile`-based build infrastructure.
Later (but still before the patches made their way into git.git's
`master`), however, this was changed to generate Visual Studio project
files and build the binaries using `MSBuild`, as this reflects more
accurately how Visual Studio users would want to build Git (internally,
Visual Studio uses `MSBuild`, or at least something very similar).
During that transition, we needed to implement a new way to run the test
suite in parallel, as Visual Studio users typically will only have a Git
Bash available (which does not ship with `make` nor with support for
`prove`): we simply implemented a new test helper to run the test suite.
This helper even knows how to run the tests in parallel, but due to a
mistake on this developer's part, it was never turned on in the CI/PR
builds. This results in 2x-3x longer run times of the test phase.
Let's use the `--jobs=10` option to fix this.
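Concretely, the test phase of those jobs would then run the helper
along these lines (the exact helper invocation shown here is an
assumption; only the `--jobs=10` option is the point):
    test-tool run-command testsuite --jobs=10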
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
To make full use of the work that went into the Visual Studio build &
test jobs in our CI/PR builds, let's turn on strict compiler flags. This
will give us the benefit of Visual C's compiler warnings (which, at
times, seem to catch things that GCC does not catch, and vice versa).
While at it, also turn on optimization; it does not make sense to
produce binaries with debug information, and we can use every ounce of
speed we can get (because the test suite is particularly slow on
Windows, thanks to the need to run inside a Unix shell, which
requires us to use the POSIX emulation layer provided by MSYS2).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
While reviewing some dts diffs recently I noticed that the hunk header
logic was failing to find the containing node. This is because the regex
doesn't consider properties that may span multiple lines, i.e.
    property = <something>,
               <something_else>;
and it got hung up on comments inside nodes that look like the root node
because they start with '/*'. Add tests for these cases and update the
regex to find them. Maybe detecting the root node is too complicated,
but forcing it to be a forward slash followed by any amount of whitespace
up to an opening curly brace seemed OK. I tried to detect that a comment
is in-between the two parts, but I wasn't happy with the result so I just
dropped it.
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Reviewed-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
In 't0500-progress-display.sh' all tests running 'test-tool progress
--total=<N>' fail on big-endian systems, e.g. like this:
+ test-tool progress --total=3 Working hard
[...]
+ test_i18ncmp expect out
--- expect 2019-10-18 23:07:54.765523916 +0000
+++ out 2019-10-18 23:07:54.773523916 +0000
@@ -1,4 +1,2 @@
-Working hard: 33% (1/3)<CR>
-Working hard: 66% (2/3)<CR>
-Working hard: 100% (3/3)<CR>
-Working hard: 100% (3/3), done.
+Working hard: 0% (1/12884901888)<CR>
+Working hard: 0% (3/12884901888), done.
The reason for that bogus value is that '--total's parameter is parsed
via parse-options's OPT_INTEGER into a uint64_t variable [1], so the
two bits of 3 end up in the "wrong" bytes on big-endian systems
(12884901888 = 0x300000000).
Change the type of that variable from uint64_t to int, to match what
parse-options expects; in the tests of the progress output we won't
use values that don't fit into an int anyway.
[1] start_progress() expects the total number as a uint64_t; that's
    why I chose the same type when declaring the variable holding the
    value given on the command line.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
[jpag: Debian unstable/ppc64 (big-endian)]
Tested-By: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
[tz: Fedora s390x (big-endian)]
Tested-By: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
The original comment does not describe the type of ~/.zsh/_git
explicitly, and zsh does not warn or fail if a user creates it as a
directory. So inexperienced users could be misled by the original
comment. There is a small update to clarify it.
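For reference, a working setup keeps ~/.zsh/_git as a regular file,
e.g. (paths are only an example):
    mkdir -p ~/.zsh &&
    cp contrib/completion/git-completion.zsh ~/.zsh/_git
    # and in ~/.zshrc:
    #   fpath=(~/.zsh $fpath)
    #   autoload -Uz compinit && compinit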
Helped-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Maxim Belsky <public.belsky@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"git stash save" lost local changes to submodules, which has been
corrected.
* jj/stash-reset-only-toplevel:
stash: avoid recursive hard reset on submodules
"git format-patch -o <outdir>" did an equivalent of "mkdir <outdir>"
not "mkdir -p <outdir>", which is being corrected.
* bw/format-patch-o-create-leading-dirs:
format-patch: create leading components of output directory
94da9193a6 ("grep: add support for PCRE v2", 2017-06-01) introduced
a small memory leak visible with valgrind in t7813.
Complete the creation of a PCRE2-specific variable that was missing from
the original change, and free the generated table just as is done
for PCRE1.
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
94da9193a6 (grep: add support for PCRE v2, 2017-06-01) didn't include
a way to override the system allocator, and so it is incompatible with
custom allocators (e.g. nedmalloc). This problem became obvious when we
tried to plug a memory leak by `free()`ing a data structure allocated by
PCRE2, triggering a segfault in Windows (where we use nedmalloc by
default).
PCRE2 requires the use of a general context to override the allocator,
and therefore a lot more code is needed than in PCRE1, including
a couple of wrapper functions.
Extend the grep API with a "destructor" that can be called to clean up
any objects that were created and used globally.
Update `builtin/grep.c` to use that new API, but any other future users
should make sure to have matching `grep_init()`/`grep_destroy()` calls
if they are using the pattern matching functionality.
Move some of the logic that was previously done per thread (in the
workers) into an earlier phase to avoid degrading performance. As the
use of PCRE2 with custom allocators becomes better understood, it is
expected that more of its functions will be instructed to use the custom
allocator as well, as was done in the original code[1] this work was
based on.
[1] https://public-inbox.org/git/3397e6797f872aedd18c6d795f4976e1c579514b.1565005867.git.gitgitgadget@gmail.com/
Reported-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
63e7e9d8b6 ("git-grep: Learn PCRE", 2011-05-09) didn't include a way
to override the system allocator, and so it is incompatible with
USE_NED_ALLOCATOR as reported by Dscho[1] (in similar code from PCRE2).
Note that nedmalloc, as well as other custom allocators like jemalloc
and mi-malloc, can be configured at runtime (via `LD_PRELOAD`),
therefore we cannot know at compile time whether a custom allocator is
used or not.
Make the minimum change possible to ensure this combination is supported,
by extending `grep_init()` to set the PCRE1-specific functions to Git's
idea of `malloc()` and `free()`, thereby making sure all allocations
inside PCRE1 are done with the same allocator as the rest of Git.
This change has negligible performance impact: PCRE needs to allocate
memory once per program run for the character table and for each pattern
compilation. These are both rare events compared to matching patterns
against lines. Actual measurements[2] show that the impact is lost in
the noise.
[1] https://public-inbox.org/git/pull.306.git.gitgitgadget@gmail.com
[2] https://public-inbox.org/git/7f42007f-911b-c570-17f6-1c6af0429586@web.de
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
When pushing more than one reference with the --atomic option, the
server is supposed to perform a single atomic transaction to update the
references, leaving them either all to succeed or all to fail. This
works fine when pushing locally or over SSH, but when pushing over HTTP,
we fail to pass the atomic capability to the remote side. In fact, we
have not reported this capability to any remote helpers during the life
of the feature.
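For example, a push like the following (repository URL and branch names
are illustrative) is supposed to update either both refs or neither:
    git push --atomic https://git.example.com/repo.git main topic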
Now normally, things happen to work nevertheless, since we actually
check for most types of failures, such as non-fast-forward updates, on
the client side, and just abort the entire attempt. However, if the
server side reports a problem, such as the inability to lock a ref, the
transaction isn't atomic, because we haven't passed the appropriate
capability over and the remote side has no way of knowing that we wanted
atomic behavior.
Fix this by passing the option from the transport code through to remote
helpers, and from the HTTP remote helper down to send-pack. With this
change, we can detect if the server side rejects the push and report
back appropriately. Note the difference in the messages: the remote
side reports "atomic transaction failed", while our own checking rejects
pushes with the message "atomic push failed".
Document the atomic option in the remote helper documentation, so other
implementers can implement it if they like.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
This changes the indent from
"<tab><sp><sp><sp><sp><sp><sp><sp><sp>"
to
"<tab><tab>"
so that the statement lines up with the rest of the block.
Signed-off-by: Norman Rasmussen <norman@rasmussen.co.za>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>