Commit graph

234 commits

Brian C Tracy
fe2033b84f fuzz: add fuzzer for config parsing
Add a new fuzz target that exercises the parsing of git configs.
The existing git_config_from_mem function is a perfect entry point
for fuzzing as it exercises the same code paths as the rest of the
config parsing functions and offers an easily fuzzable interface.

Config parsing is a useful thing to fuzz because it operates on user
controlled data and is a central component of many git operations.
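
As a rough illustration, once the new target is built (e.g. via the
libFuzzer-enabled "make fuzz-all" target), it can be pointed at a small
corpus of config snippets; the binary name and run count below are
assumptions, not part of this change:

    mkdir -p config-corpus &&
    printf '[core]\n\tbare = true\n' >config-corpus/sample &&
    ./oss-fuzz/fuzz-config -runs=1000 config-corpus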

Signed-off-by: Brian C Tracy <brian.tracy33@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-03-15 10:47:05 -07:00
Patrick Steinhardt
c0350cb964 ci: add jobs to test with the reftable backend
Add CI jobs for both GitHub Workflows and GitLab CI to run Git with the
new reftable backend.
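
The jobs boil down to running the regular test suite with the reftable
backend selected; a rough local equivalent (assuming the
GIT_TEST_DEFAULT_REF_FORMAT knob understood by the test suite) would be:

    make &&
    GIT_TEST_DEFAULT_REF_FORMAT=reftable make test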

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-02-07 08:28:37 -08:00
Junio C Hamano
9869e02a64 Merge branch 'js/oss-fuzz-build-in-ci'
oss-fuzz tests are built and run in CI.

* js/oss-fuzz-build-in-ci:
  ci: build and run minimal fuzzers in GitHub CI
  fuzz: fix fuzz test build rules
2024-01-29 16:03:00 -08:00
Josh Steadmon
c4a9cf1df3 ci: build and run minimal fuzzers in GitHub CI
To prevent bitrot, we would like to regularly exercise the fuzz tests in
order to make sure they still link & run properly. We already compile
the fuzz test objects as part of the default `make` target, but we do
not link the executables due to the fuzz tests needing specific
compilers and compiler features. This has led to frequent build
breakages for the fuzz tests.

To remedy this, we can add a CI step to actually link the fuzz
executables, and run them (with finite input rather than the default
infinite random input mode) to verify that they execute properly.

Since the main use of the fuzz tests is via OSS-Fuzz [1], and OSS-Fuzz
only runs tests on Linux [2], we only set up a CI test for the fuzzers
on Linux.

[1] https://github.com/google/oss-fuzz
[2] https://google.github.io/oss-fuzz/further-reading/fuzzer-environment/
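
The new step amounts to something like the following sketch (the
compiler flags and Makefile knobs shown here are assumptions; the real
workflow may spell them differently):

    make CC=clang FUZZ_CXX=clang++ \
        CFLAGS=-fsanitize=fuzzer-no-link,address \
        LIB_FUZZING_ENGINE=-fsanitize=fuzzer,address \
        fuzz-all
    for fuzzer in oss-fuzz/fuzz-*
    do
        test -x "$fuzzer" || continue
        "./$fuzzer" -verbosity=0 -runs=1 || exit 1
    done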

Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-19 14:29:25 -08:00
Patrick Steinhardt
56090a35ab ci: add macOS jobs to GitLab CI
Add a job to GitLab CI that runs tests on macOS, matching the
equivalent "osx-clang" job that we have for GitHub Workflows. One
significant difference though is that this new job runs on Apple M1
machines and thus uses the "arm64" architecture. As GCC does not yet
support this comparatively new architecture, we cannot easily include an
equivalent for the "osx-gcc" job that exists in GitHub Workflows.

Note that one test marked as `test_must_fail` is surprisingly passing:

  t7815-grep-binary.sh                             (Wstat: 0 Tests: 22 Failed: 0)
    TODO passed:   12

This seems to boil down to an unexpected difference in how regcomp(3P)
works when matching NUL bytes. Cross-checking with the respective GitHub
job shows that this is not an issue unique to the GitLab CI job as it
passes in the same way there.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-18 11:53:17 -08:00
Patrick Steinhardt
c4b84b137a ci: make p4 setup on macOS more robust
When setting up Perforce on macOS we put both `p4` and `p4d` into
"$HOME/bin". On GitHub CI this directory is indeed contained in the PATH
environment variable and thus there is no need for additional setup than
to put the binaries there. But GitLab CI does not do this, and thus our
Perforce-based tests would be skipped there even though we download the
binaries.

Refactor the setup code to become more robust by downloading binaries
into a separate directory which we then manually append to our PATH.
This matches what we do on Linux-based jobs.
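
A sketch of the refactored setup (the directory name and the
P4_DOWNLOAD_URL variable below are illustrative, not the literal
script):

    P4_BIN_DIR="$HOME/custom/path"
    export PATH="$P4_BIN_DIR:$PATH"  # exported by ci/lib.sh so that both
                                     # the install and test steps see it
    mkdir -p "$P4_BIN_DIR"
    curl -o "$P4_BIN_DIR/p4" "$P4_DOWNLOAD_URL/p4"
    curl -o "$P4_BIN_DIR/p4d" "$P4_DOWNLOAD_URL/p4d"
    chmod u+x "$P4_BIN_DIR/p4" "$P4_BIN_DIR/p4d"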

Note that it may seem like we already did append "$HOME/bin" to PATH
because we're actually removing the lines that adapt PATH. But we only
ever adapted the PATH variable in "ci/install-dependencies.sh", and
didn't adapt it when running "ci/run-build-and-tests.sh". Consequently,
the required binaries wouldn't be found during the test run unless the
CI platform already had the "$HOME/bin" in PATH right from the start.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-18 11:53:17 -08:00
Patrick Steinhardt
99c60edc5b ci: handle TEST_OUTPUT_DIRECTORY when printing test failures
The TEST_OUTPUT_DIRECTORY environment variable can be used to instruct
the test suite to write test data and test results into a different
location than into "t/". The "ci/print-test-failures.sh" script does not
know to handle this environment variable though, which means that it
will search for test results in the wrong location if it was set.

Update the script to handle TEST_OUTPUT_DIRECTORY so that we can start
to set it in our CI.
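
A simplified sketch of the idea (the real script prints more context
around each failure):

    TEST_OUTPUT_DIRECTORY=${TEST_OUTPUT_DIRECTORY:-t}
    for exit_file in "$TEST_OUTPUT_DIRECTORY"/test-results/*.exit
    do
        test -f "$exit_file" || continue
        test "$(cat "$exit_file")" = 0 && continue
        echo "=== ${exit_file%.exit} ==="
        cat "${exit_file%.exit}.out"
    done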

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-18 11:53:17 -08:00
Patrick Steinhardt
cd69c635a1 ci: add job performing static analysis on GitLab CI
Our GitHub Workflows definitions have a static analysis job that
runs the following tasks:

  - Coccinelle to check for suggested refactorings.

  - `make hdr-check` to check for missing includes or forward
    declarations in our header files.

  - `make check-pot` to check our translations for issues.

  - `./ci/check-directional-formatting.bash` to check whether our
    sources contain any Unicode directional formatting code points.

Add an equivalent job to our GitLab CI definitions.
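
In terms of commands, the job's script section amounts to roughly the
following (GitLab CI YAML boilerplate omitted):

    ./ci/install-dependencies.sh
    make coccicheck
    make hdr-check
    make check-pot
    ./ci/check-directional-formatting.bash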

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-01-08 11:23:03 -08:00
Junio C Hamano
b4e6618fdf Merge branch 'js/ci-discard-prove-state'
The way CI testing used "prove" could lead to running the test
suite twice needlessly, which has been corrected.

* js/ci-discard-prove-state:
  ci: avoid running the test suite _twice_
2023-12-09 16:37:48 -08:00
Junio C Hamano
14a4445d18 Merge branch 'ps/ci-gitlab'
Add support for GitLab CI.

* ps/ci-gitlab:
  ci: add support for GitLab CI
  ci: install test dependencies for linux-musl
  ci: squelch warnings when testing with unusable Git repo
  ci: unify setup of some environment variables
  ci: split out logic to set up failed test artifacts
  ci: group installation of Docker dependencies
  ci: make grouping setup more generic
  ci: reorder definitions for grouping functions
2023-12-09 16:37:48 -08:00
Junio C Hamano
8bf6fbd00d Merge branch 'js/doc-unit-tests'
Process to add some form of low-level unit tests has started.

* js/doc-unit-tests:
  ci: run unit tests in CI
  unit tests: add TAP unit test framework
  unit tests: add a project plan document
2023-12-09 16:37:47 -08:00
Johannes Schindelin
e7e03ef995 ci: avoid running the test suite _twice_
This is a late amendment of 4a6e4b9602 (CI: remove Travis CI support,
2021-11-23), whereby the `.prove` file (being written by the `prove`
command that is used to run the test suite) is no longer retained
between CI builds: This feature was only ever used in the Travis CI
builds, we tried for a while to do the same in Azure Pipelines CI runs
(but I gave up on it after a while), and we never used that feature in
GitHub Actions (nor does the new GitLab CI code use it).

Retaining the Prove cache has been fragile from the start, even though
the idea seemed good at the time: the `.prove` file caches information
about previous `prove` runs (`save`) and uses it (`slow`) to run the
tests in order from longest-running to shortest, making optimal use of
the parallelism implied by `--jobs=<N>`.

However, using a Prove cache can cause some surprising behavior: When
`prove` caches information about a test script it has run,
subsequent `prove` runs (with `--state=slow`) will run the same test
script again even if said script is not specified on the `prove`
command-line!

So far, this bug did not matter; it only started to matter with
d8f416bbb8 (ci: run unit tests in CI, 2023-11-09).

But starting with that commit, we invoke `prove` _twice_ in CI, once to
run the regular test suite of regression test scripts, and once to run
the unit tests. Due to the bug, the second invocation re-runs all of the
tests that were already run as part of the first invocation. This not
only wastes build minutes, it also frequently causes the `osx-*` jobs to
fail because they already take a long time and now are likely to run
into a timeout.

The worst part about it is that there is actually no benefit to keep
running with `--state=slow,save`, ever since we decided no longer to
try to reuse the Prove cache between CI runs.

So let's just drop that Prove option and live happily ever after.
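
In terms of the environment our CI sets up, the change amounts to
something like this (the other values are illustrative):

    # before
    export GIT_PROVE_OPTS="--timer --jobs 8 --state=slow,save"
    # after
    export GIT_PROVE_OPTS="--timer --jobs 8"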

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-14 09:24:23 +09:00
Josh Steadmon
d8f416bbb8 ci: run unit tests in CI
Run unit tests in both Cirrus and GitHub CI. For sharded CI instances
(currently just Windows on GitHub), run only on the first shard. This is
OK while we have only a single unit test executable, but we may wish to
distribute tests more evenly when we add new unit tests in the future.

We may also want to add more status output in our unit test framework,
so that we can do similar post-processing as in
ci/lib.sh:handle_failed_tests().

Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-10 08:15:32 +09:00
Patrick Steinhardt
0e3b67e2aa ci: add support for GitLab CI
We already support Azure Pipelines and GitHub Workflows in the Git
project, but until now we do not have support for GitLab CI. While it is
arguably not in the interest of the Git project to maintain a ton of
different CI platforms, GitLab has recently ramped up its efforts and
tries to contribute to the Git project more regularly.

A problem we hit at GitLab rather frequently is that our own custom CI
setup is quite different from the setup that the Git project has. More
esoteric jobs like "linux-TEST-vars" that also set a
couple of environment variables do not exist in GitLab's custom CI
setup, and maintaining them to keep up with what Git does feels like
wasted time. The result is that we regularly send patch series upstream
that fail to compile or pass tests in GitHub Workflows. We would thus
like to integrate the GitLab CI configuration into the Git project to
help us send better patch series upstream and thus reduce overhead for
the maintainer. Results of these pipeline runs will be made available
(at least) in GitLab's mirror of the Git project at [1].

This commit introduces the integration into our regular CI scripts so
that most of the setup continues to be shared across all of the CI
solutions. Note that as the builds on GitLab CI run as unprivileged
user, we need to pull in both sudo and shadow packages to our Alpine
based job to set this up.

[1]: https://gitlab.com/gitlab-org/git
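
The Alpine-specific setup mentioned above boils down to something like
this sketch (the user name and exact invocation are assumptions):

    apk add --no-cache sudo shadow
    useradd --create-home ci-user
    chown -R ci-user .
    sudo --preserve-env -u ci-user ci/run-build-and-tests.sh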

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:10 +09:00
Patrick Steinhardt
0d3911ad73 ci: install test dependencies for linux-musl
The linux-musl CI job executes tests on Alpine Linux, which is based on
musl libc instead of glibc. We're missing some test dependencies though,
which causes us to skip a subset of tests.

Install these test dependencies to increase our test coverage on this
platform. There are still some missing test dependencies, but these do
not have a corresponding package in the Alpine repositories:

    - p4 and p4d, both part of the Perforce version control system.

    - cvsps, which generates patch sets for CVS.

    - Subversion and the SVN::Core Perl library, the latter of which is
      not available in the Alpine repositories. While the tool itself is
      available, all Subversion-related tests are skipped without the
      SVN::Core Perl library anyway.

The Apache2-based tests require a bit more care though. For one, the
module path is different on Alpine Linux, which requires us to add it to
the list of known module paths to detect it. But second, the WebDAV
module on Alpine Linux is broken because it does not bundle the default
database backend [1]. We thus need to skip the WebDAV-based tests on
Alpine Linux for now.

[1]: https://gitlab.alpinelinux.org/alpine/aports/-/issues/13112

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:10 +09:00
Patrick Steinhardt
dd02c3b68c ci: squelch warnings when testing with unusable Git repo
Our CI jobs that run on Docker also use mostly the same architecture to
build and test Git via the "ci/run-build-and-tests.sh" script. These
scripts also provide some functionality to massage the Git repository
we're supposedly operating in.

In our Docker-based infrastructure we may not even have a Git repository
available though, which leads to warnings when those functions execute.
Make the helpers exit gracefully in case either there is no Git in our
PATH, or when not running in a Git repository.
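
A minimal sketch of such a guard (the helper name here is hypothetical):

    setup_test_repository () {
        command -v git >/dev/null 2>&1 || return 0
        git rev-parse --is-inside-work-tree >/dev/null 2>&1 || return 0
        # ... the actual repository massaging happens here ...
    }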

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:10 +09:00
Patrick Steinhardt
9f17bef9a6 ci: unify setup of some environment variables
Both GitHub Actions and Azure Pipelines set up the environment variables
GIT_TEST_OPTS, GIT_PROVE_OPTS and MAKEFLAGS. And while most values are
actually the same, the setup is completely duplicated. With the upcoming
support for GitLab CI this duplication would only grow even further.

Unify the setup of those environment variables so that only the uncommon
parts are separated. While at it, we also perform some additional small
improvements:

    - We now always pass `--state=failed,slow,save` via GIT_PROVE_OPTS.
      It doesn't hurt on platforms where we don't persist the state, so
      this further reduces boilerplate.

    - When running on Windows systems we set `--no-chain-lint` and
      `--no-bin-wrappers`. Interestingly though, we did so _after_
      already having exported the respective environment variables.
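
A condensed sketch of what the unified setup ends up looking like (the
values and the Windows check are illustrative):

    export GIT_TEST_OPTS="--verbose-log -x"
    export GIT_PROVE_OPTS="--timer --jobs 8 --state=failed,slow,save"
    export MAKEFLAGS="--jobs=8"
    if test Windows = "$RUNNER_OS"
    then
        GIT_TEST_OPTS="--no-chain-lint --no-bin-wrappers $GIT_TEST_OPTS"
    fi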

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:09 +09:00
Patrick Steinhardt
e624f206bc ci: split out logic to set up failed test artifacts
We have some logic in place to create a directory with the output from
failed tests, which will then subsequently be uploaded as CI artifacts.
We're about to add support for GitLab CI, which will want to reuse the
logic.

Split the logic into a separate function so that it is reusable.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:09 +09:00
Patrick Steinhardt
412847ced4 ci: group installation of Docker dependencies
The output of CI jobs tends to be quite long-winded and hard to digest.
To help with this, many CI systems provide the ability to group output
into collapsible sections, and we're also doing this in some of our
scripts.

One notable omission is the script to install Docker dependencies.
Address it to bring more structure to the output for Docker-based jobs.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:09 +09:00
Patrick Steinhardt
a7d499cb93 ci: make grouping setup more generic
Make the grouping setup more generic by always calling `begin_group ()`
and `end_group ()` regardless of whether we have stubbed those functions
or not. This ensures we can more readily add support for additional CI
platforms.

Furthermore, the `group ()` function is made generic so that it is the
same for both GitHub Actions and for other platforms. There is a
semantic conflict here though: GitHub Actions used to call `set +x` in
`group ()` whereas the non-GitHub case unconditionally uses `set -x`.
The latter would get overridden if we kept the `set +x` in the generic
version of `group ()`. To resolve this conflict, we simply drop the `set
+x` in the generic variant of this function. As `begin_group ()` calls
`set -x` anyway this is not much of a change though, as the only
commands that aren't printed anymore now are the ones between the
beginning of `group ()` and the end of `begin_group ()`.

Last, this commit changes `end_group ()` to also accept a parameter that
indicates _which_ group should end. This will be required by a later
commit that introduces support for GitLab CI.
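
A simplified sketch of the resulting helpers (the "::group::" markers
are GitHub Actions workflow commands; the rest is illustrative):

    begin_group () {
        need_to_end_group=t
        echo "::group::$1"
        set -x
    }

    end_group () {
        test -n "$need_to_end_group" || return 0
        set +x
        need_to_end_group=
        echo "::endgroup::"
    }

    group () {
        group_name="$1"
        shift
        begin_group "$group_name"
        res=0
        "$@" || res=$?
        end_group "$group_name"
        return $res
    }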

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:09 +09:00
Patrick Steinhardt
a4761b605c ci: reorder definitions for grouping functions
We define a set of grouping functions that are used to group together
output in our CI, where these groups then end up as collapsible sections
in the respective pipeline platform. The way these functions are defined
is not easily extensible though, as we have an up-front check for the CI
_not_ being GitHub Actions, where we define the non-stub logic in the
else branch.

Reorder the conditional branches such that we explicitly handle GitHub
Actions.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-09 18:56:08 +09:00
Johannes Schindelin
682a868f67 ci: upgrade to using macos-13
In April, GitHub announced that the `macos-13` pool is available:
https://github.blog/changelog/2023-04-24-github-actions-macos-13-is-now-available/.
It is only a matter of time until the `macos-12` pool goes away;
therefore we should switch now, without the pressure of a looming deadline.

Since the `macos-13` runners no longer include Python2, we also drop
specifically testing with Python2 and switch uniformly to Python3, see
https://github.com/actions/runner-images/blob/HEAD/images/macos/macos-13-Readme.md
for details about the software available on the `macos-13` pool's
runners.

Also, on macOS 13, Homebrew seems to install a `gcc@9` package that no
longer comes with a regular `unistd.h` (there seems to be only an
`ssp/unistd.h`), and hence builds would fail with:

    In file included from base85.c:1:
    git-compat-util.h:223:10: fatal error: unistd.h: No such file or directory
      223 | #include <unistd.h>
          |          ^~~~~~~~~~
    compilation terminated.

The reason why we install GCC v9.x explicitly is historical: back in
the day, it was the _newest_ version available via
Homebrew: 176441bfb5 (ci: build Git with GCC 9 in the 'osx-gcc' build
job, 2019-11-27).

To reinstate the spirit of that commit _and_ to fix that build failure,
let's switch to the now-newest GCC version: v13.x.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-11-03 18:52:02 +09:00
Junio C Hamano
0510d06b56 Merge branch 'jk/ci-retire-allow-ref' into maint-2.42
CI update.

* jk/ci-retire-allow-ref:
  ci: deprecate ci/config/allow-ref script
  ci: allow branch selection through "vars"
2023-11-02 16:53:23 +09:00
Jeff King
edf80d23f1 ci: deprecate ci/config/allow-ref script
Now that we have the CI_BRANCHES mechanism, there is no need for anybody
to use the ci/config/allow-ref mechanism. In the long run, we can
hopefully remove it and the whole "config" job, as it consumes CPU and
adds to the end-to-end latency of the whole workflow. But we don't want
to do that immediately, as people need time to migrate until the
CI_BRANCHES change has made it into the workflow file of every branch.

So let's issue a warning, which will appear in the "annotations" section
below the workflow result in GitHub's web interface. And let's remove
the sample allow-refs script, as we don't want to encourage anybody to
use it.
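
Emitting such an annotation boils down to a single GitHub workflow
command along these lines (the exact message text is illustrative):

    echo "::warning::ci/config/allow-refs is deprecated; use the CI_BRANCHES repository variable instead"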

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-08-30 15:56:11 -07:00
Jeff King
21c82dcd62 ci: allow branch selection through "vars"
When we added config to skip CI for certain branches in e76eec3554 (ci:
allow per-branch config for GitHub Actions, 2020-05-07), there wasn't
any way to avoid spinning up a VM just to check the config. From the
developer's perspective this isn't too bad, as the "skipped" branches
complete successfully after running the config job (the workflow result
is "success" instead of "skipped", but that is a minor lie).

But we are still wasting time and GitHub's CPU to spin up a VM just to
check the result of a short shell script. At the time there wasn't any
way to avoid this. But they've since introduced repo-level variables
that should let us do the same thing:

  https://github.blog/2023-01-10-introducing-required-workflows-and-configuration-variables-to-github-actions/#configuration-variables

This is more efficient, and as a bonus is probably less confusing to
configure (the existing system requires sticking your config on a magic
ref).

See the included docs for how to configure it.

The code itself is pretty simple: it checks the variable and skips the
config job if appropriate (and everything else depends on the config job
already). There are two slight inaccuracies here:

  - we don't insist on branches, so this likewise applies to tag names
    or other refs. I think in practice this is OK, and keeping the code
    (and docs) short is more important than trying to be more exact. We
    are targeting developers of git.git and their limited workflows.

  - the match is done as a substring (so if you want to run CI for
    "foobar", then branch "foo" will accidentally match). Again, this
    should be OK in practice, as anybody who uses this is likely to only
    specify a handful of well-known names. If we want to be more exact,
    we can have the code check for adjoining spaces. Or even move to a
    more general CI_CONFIG variable formatted as JSON. I went with this
    scheme for the sake of simplicity.
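
To illustrate the substring semantics with a shell analogy (the real
check is a GitHub Actions expression over the CI_BRANCHES variable; the
values below are made up):

    CI_BRANCHES="foobar jk/some-topic"
    for branch in foobar foo maint
    do
        case "$CI_BRANCHES" in
        *"$branch"*) echo "run CI for $branch" ;;
        *) echo "skip CI for $branch" ;;
        esac
    done
    # prints: run CI for foobar
    #         run CI for foo      (the accidental substring match)
    #         skip CI for maint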

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-08-30 15:56:09 -07:00
Johannes Schindelin
6ba913629f ci(linux-asan-ubsan): let's save some time
Every once in a while, the `git-p4` tests flake for reasons outside of
our control. It typically fails with "Connection refused" e.g. here:
https://github.com/git/git/actions/runs/5969707156/job/16196057724

	[...]
	+ git p4 clone --dest=/home/runner/work/git/git/t/trash directory.t9807-git-p4-submit/git //depot
	  Initialized empty Git repository in /home/runner/work/git/git/t/trash directory.t9807-git-p4-submit/git/.git/
	  Perforce client error:
		Connect to server failed; check $P4PORT.
		TCP connect to localhost:9807 failed.
		connect: 127.0.0.1:9807: Connection refused
	  failure accessing depot: could not run p4
	  Importing from //depot into /home/runner/work/git/git/t/trash directory.t9807-git-p4-submit/git
	 [...]

This happens in other jobs, too, but in the `linux-asan-ubsan` job it
hurts the most because that job often takes over a full hour to run;
therefore re-running a failed `linux-asan-ubsan` job is _very_ costly.

The purpose of the `linux-asan-ubsan` job is to exercise the C code of
Git, anyway, and any part of Git's source code that the `git-p4` tests
run and that would benefit from the attention of ASAN/UBSAN is exercised
better by other tests anyway, as debugging C code run via Python scripts
can get a bit hairy.

In fact, it is not even just `git-p4` that is the problem (even if it
flakes often enough to be problematic in the CI builds), but really the
part about Python scripts. So let's just skip any Python parts of the
tests from being run in that job.

For good measure, also skip the Subversion tests because debugging C
code run via Perl scripts is as much fun as debugging C code run via
Python scripts. And it will reduce the time this very expensive job
takes, which is a big benefit.
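
One way to express that (a sketch; not necessarily how the workflow
spells it) is to build the job without Python and Perl support, so that
the corresponding tests skip themselves:

    make NO_PYTHON=1 NO_PERL=1
    make NO_PYTHON=1 NO_PERL=1 test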

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-08-29 13:54:55 -07:00
Junio C Hamano
f2ffc74186 Merge branch 'tb/pack-bitmap-traversal-with-boundary'
The object traversal using reachability bitmap done by
"pack-object" has been tweaked to take advantage of the fact that
using "boundary" commits as representative of all the uninteresting
ones can save quite a lot of object enumeration.

* tb/pack-bitmap-traversal-with-boundary:
  pack-bitmap.c: use commit boundary during bitmap traversal
  pack-bitmap.c: extract `fill_in_bitmap()`
  object: add object_array initializer helper function
2023-06-22 16:29:05 -07:00
Junio C Hamano
056d16406d Merge branch 'jk/ci-use-clang-for-sanitizer-jobs'
Clang's sanitizer implementation seems to work better than GCC's.

* jk/ci-use-clang-for-sanitizer-jobs:
  ci: drop linux-clang job
  ci: run ASan/UBSan in a single job
  ci: use clang for ASan/UBSan checks
2023-06-20 15:53:11 -07:00
Jeff King
ec6915265a ci: run ASan/UBSan in a single job
When we started running sanitizers in CI via 1c0962c0c4 (ci: add address
and undefined sanitizer tasks, 2022-10-20), we ran them as two separate
CI jobs, since as that commit notes, the combination "seems to take
forever".

And indeed, it does with gcc. However, since the previous commit
switched to using clang, the situation is different, and we can save
some CPU by using a single job for both. Comparing before/after CI runs,
this saved about 14 minutes (the single combined job took 54m, versus
44m plus 24m for ASan and UBSan jobs, respectively). That's wall-clock
and not CPU, but since our jobs are mostly CPU-bound, the two should be
closely proportional.
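
Concretely, the combined job builds and tests once with both sanitizers
enabled, along these lines (a sketch; the job's full option set lives in
the ci/ scripts):

    make CC=clang SANITIZE=address,undefined
    make CC=clang SANITIZE=address,undefined test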

This does increase the end-to-end time of a CI run, though, since before
this patch the two jobs could run in parallel, and the sanitizer job is
our longest single job. It also means that we won't get a separate
result for "this passed with UBSan but not with ASan" or vice versa).
But as 1c0962c0c4 noted, that is not a very useful signal in practice.

Below are some more detailed timings of gcc vs clang that I measured by
running the test suite on my local workstation. Each measurement counts
only the time to run the test suite with each compiler (not the compile
time itself). We'll focus on the wall-clock times for simplicity, though
the CPU times follow roughly similar trends.

Here's a run with CC=gcc as a baseline:

  real	1m12.931s
  user	9m30.566s
  sys	8m9.538s

Running with SANITIZE=address increases the time by a factor of ~4.7x:

  real	5m40.352s
  user	49m37.044s
  sys	36m42.950s

Running with SANITIZE=undefined increases the time by a factor of ~1.7x:

  real	2m5.956s
  user	12m42.847s
  sys	19m27.067s

So let's call that 6.4 time units to run them separately (where a unit
is the time it takes to run the test suite with no sanitizers). As a
simplistic model, we might imagine that running them together would take
5.4 units (we save 1 unit because we are no longer running the test
suite twice, but just paying the sanitizer overhead on top of a single
run).

But that's not what happens. Running with SANITIZE=address,undefined
results in a factor of 9.3x:

  real	11m9.817s
  user	77m31.284s
  sys	96m40.454s

So not only did we not get faster when doing them together, we actually
spent 1.5x as much CPU as doing them separately! And while those
wall-clock numbers might not look too terrible, keep in mind that this
is on an unloaded 8-core machine. In the CI environment, wall-clock
times will be much closer to CPU times. So not only are we wasting CPU,
but we risk hitting timeouts.

Now let's try the same thing with clang. Here's our no-sanitizer
baseline run, which is almost identical to the gcc one (which is quite
convenient, because we can keep using the same "time units" to get an
apples-to-apples comparison):

  real	1m11.844s
  user	9m28.313s
  sys	8m8.240s

And now again with SANITIZE=address, we get a 5x factor (so slightly
worse than gcc's 4.7x, though I wouldn't read too much into it; there is
a fair bit of run-to-run noise):

  real	6m7.662s
  user	49m24.330s
  sys	44m13.846s

And with SANITIZE=undefined, we are at 1.5x, slightly outperforming gcc
(though again, that's probably mostly noise):

  real	1m50.028s
  user	11m0.973s
  sys	16m42.731s

So running them separately, our total cost is 6.5x. But if we combine
them in a single run (SANITIZE=address,undefined), we get:

  real	6m51.804s
  user	52m32.049s
  sys	51m46.711s

which is a factor of 5.7x. That's along the lines we'd hoped for!
Running them together saves us almost a whole time unit. And that's not
counting any time spent outside the test suite itself (starting the job,
setting up the environment, compiling) that we're no longer duplicating
by having two jobs.

So clang behaves like we'd hope: the overhead to run the sanitizers is
additive as you add more sanitizers. Whereas gcc's numbers seem very
close to multiplicative, almost as if the sanitizers were enforcing
their overheads on each other (though that is purely a guess on what is
going on; ultimately what matters to us is the amount of time it takes).

And that roughly matches the CI improvement I saw. A "time unit" there
is more like 12 minutes, and the observed time savings was 14 minutes
(with the extra presumably coming from avoiding duplicated setup, etc).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-06-03 10:35:13 +09:00
Taylor Blau
b0afdce5da pack-bitmap.c: use commit boundary during bitmap traversal
When reachability bitmap coverage exists in a repository, Git will use a
different (and hopefully faster) traversal to compute revision walks.

Consider a set of positive and negative tips (which we'll refer to, in
standard bitmap parlance, as "wants" and "haves"). In order to
figure out what objects exist between the tips, the existing traversal
in `prepare_bitmap_walk()` does something like:

  1. Consider if we can even compute the set of objects with bitmaps,
     and fall back to the usual traversal if we cannot. For example,
     pathspec limiting traversals can't be computed using bitmaps (since
     they don't know which objects are at which paths). The same is true
     of certain kinds of non-trivial object filters.

  2. If we can compute the traversal with bitmaps, partition the
     (dereferenced) tips into two object lists, "haves", and "wants",
     based on whether or not the objects have the UNINTERESTING flag,
     respectively.

  3. Fall back to the ordinary object traversal if either (a) there are
     more than zero haves, none of which are in the bitmapped pack or
     MIDX, or (b) there are no wants.

  4. Construct a reachability bitmap for the "haves" side by walking
     from the revision tips down to any existing bitmaps, OR-ing in any
     bitmaps as they are found.

  5. Then do the same for the "wants" side, stopping at any objects that
     appear in the "haves" bitmap.

  6. Filter the results if any object filter (that can be easily
     computed with bitmaps alone) was given, and then return back to the
     caller.

When there is good bitmap coverage relative to the traversal tips, this
walk is often significantly faster than an ordinary object traversal
because it can visit far fewer objects.

But in certain cases, it can be significantly *slower* than the usual
object traversal. Why? Because we need to compute complete bitmaps on
either side of the walk. If either one (or both) of the sides require
walking many (or all!) objects before they get to an existing bitmap,
the extra bitmap machinery is mostly or all overhead.

One of the benefits, however, is that even if the walk is slower, bitmap
traversals are guaranteed to provide an *exact* answer. Unlike the
traditional object traversal algorithm, which can over-count the results
by not opening trees for older commits, the bitmap walk builds an exact
reachability bitmap for either side, meaning the results are never
over-counted.

But producing non-exact results is OK for our traversal here (both in
the bitmap case and not), as long as the results are over-counted, not
under.

Relaxing the bitmap traversal to allow it to produce over-counted
results gives us the opportunity to make some significant improvements.
Instead of the above, the new algorithm only has to walk from the
*boundary* down to the nearest bitmap, instead of from each of the
UNINTERESTING tips.

The boundary-based approach still has degenerate cases, but we'll show
in a moment that it is often a significant improvement.

The new algorithm works as follows:

  1. Build a (partial) bitmap of the haves side by first OR-ing any
     bitmap(s) that already exist for UNINTERESTING commits between the
     haves and the boundary.

  2. For each commit along the boundary, add it as a fill-in traversal
     tip (where the traversal terminates once an existing bitmap is
     found), and perform fill-in traversal.

  3. Build up a complete bitmap of the wants side as usual, stopping any
     time we intersect the (partial) haves side.

  4. Return the results.

This is more-or-less equivalent to using the *old* algorithm with this
invocation:

    $ git rev-list --objects --use-bitmap-index $WANTS --not \
        $(git rev-list --objects --boundary $WANTS --not $HAVES |
          perl -lne 'print $1 if /^-(.*)/')

The new result performs significantly better in many cases, particularly
when the distance from the boundary commit(s) to an existing bitmap is
shorter than the distance from (all of) the have tips to the nearest
bitmapped commit.

Note that when using the old bitmap traversal algorithm, the results can
be *slower* than without bitmaps! Under the new algorithm, the result is
computed faster with bitmaps than without (at the cost of over-counting
the true number of objects in a similar fashion as the non-bitmap
traversal):

    # (Computing the number of tagged objects not on any branches
    # without bitmaps).
    $ time git rev-list --count --objects --tags --not --branches
    20

    real	0m1.388s
    user	0m1.092s
    sys	0m0.296s

    # (Computing the same query using the old bitmap traversal).
    $ time git rev-list --count --objects --tags --not --branches --use-bitmap-index
    19

    real	0m22.709s
    user	0m21.628s
    sys	0m1.076s

    # (this commit)
    $ time git.compile rev-list --count --objects --tags --not --branches --use-bitmap-index
    19

    real	0m1.518s
    user	0m1.234s
    sys	0m0.284s

The new algorithm is still slower than not using bitmaps at all, but it
is nearly a 15-fold improvement over the existing traversal.

In a more realistic setting (using my local copy of git.git), I can
observe a similar (if more modest) speed-up:

    $ argv="--count --objects --branches --not --tags"
    hyperfine \
      -n 'no bitmaps' "git.compile rev-list $argv" \
      -n 'existing traversal' "git.compile rev-list --use-bitmap-index $argv" \
      -n 'boundary traversal' "git.compile -c pack.useBitmapBoundaryTraversal=true rev-list --use-bitmap-index $argv"
    Benchmark 1: no bitmaps
      Time (mean ± σ):     124.6 ms ±   2.1 ms    [User: 103.7 ms, System: 20.8 ms]
      Range (min … max):   122.6 ms … 133.1 ms    22 runs

    Benchmark 2: existing traversal
      Time (mean ± σ):     368.6 ms ±   3.0 ms    [User: 325.3 ms, System: 43.1 ms]
      Range (min … max):   365.1 ms … 374.8 ms    10 runs

    Benchmark 3: boundary traversal
      Time (mean ± σ):     167.6 ms ±   0.9 ms    [User: 139.5 ms, System: 27.9 ms]
      Range (min … max):   166.1 ms … 169.2 ms    17 runs

    Summary
      'no bitmaps' ran
        1.34 ± 0.02 times faster than 'boundary traversal'
        2.96 ± 0.05 times faster than 'existing traversal'

Here, the new algorithm is also still slower than not using bitmaps, but
represents a more than 2-fold improvement over the existing traversal in
a more modest example.

Since this algorithm was originally written (nearly a year and a half
ago, at the time of writing), the bitmap lookup table shipped, making
the new algorithm's result more competitive. A few other future
directions for improving bitmap traversal times beyond not using bitmaps
at all:

  - Decrease the cost of decompressing and ORing together many bitmaps
    (particularly when enumerating the uninteresting side of
    the walk). Here we could explore more efficient bitmap storage
    techniques, like Roaring+Run and/or use SIMD instructions to speed
    up ORing them together.

  - Store pseudo-merge bitmaps, which could allow us to OR together
    fewer "summary" bitmaps (which would also help with the above).

Helped-by: Jeff King <peff@peff.net>
Helped-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-05-08 12:05:55 -07:00
Taylor Blau
9f7f10a282 t: invert GIT_TEST_WRITE_REV_INDEX
Back in e8c58f894b (t: support GIT_TEST_WRITE_REV_INDEX, 2021-01-25), we
added a test knob to conditionally enable writing a ".rev" file when
indexing a pack. At the time, this was used to ensure that the test
suite worked even when ".rev" files were written, which served as a
stress-test for the on-disk reverse index implementation.

Now that reading from on-disk ".rev" files is enabled by default, the
test knob `GIT_TEST_WRITE_REV_INDEX` no longer has any meaning.

We could get rid of the option entirely, but there would be no
convenient way to test Git when ".rev" files *aren't* in place.

Instead of getting rid of the option, invert its meaning to instead
disable writing ".rev" files, thereby running the test suite in a mode
where the reverse index is generated from scratch.

This ensures that, when GIT_TEST_NO_WRITE_REV_INDEX is set to some
spelling of "true", we are still running and exercising Git's behavior
when forced to generate reverse indexes from scratch. Do so by setting
it in the linux-TEST-vars CI run to ensure that we are maintaining good
coverage of this now-legacy code.
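
Locally, that same from-scratch code path can be exercised like so (a
sketch, using the knob as described above):

    GIT_TEST_NO_WRITE_REV_INDEX=true make test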

Signed-off-by: Taylor Blau <me@ttaylorr.com>
Acked-by: Derrick Stolee <derrickstolee@github.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-04-13 07:55:46 -07:00
Ævar Arnfjörð Bjarmason
20b813d7d3 add: remove "add.interactive.useBuiltin" & Perl "git add--interactive"
Since [1], first released with Git v2.37.0, the built-in version of "add
-i" has been the default. That built-in implementation was added in
[2], first released with Git v2.25.0.

At this point enough time has passed to allow for finding any
remaining bugs in this new implementation, so let's remove the
fallback code.

As with similar migrations for "stash"[3] and "rebase"[4] we're
keeping a mention of "add.interactive.useBuiltin" in the
documentation, but adding a warning() to notify any outstanding users
that the built-in is now the default. As with [5] and [6] we should
follow-up in the future and eventually remove that warning.

1. 0527ccb1b5 (add -i: default to the built-in implementation,
   2021-11-30)
2. f83dff60a7 (Start to implement a built-in version of `git add
   --interactive`, 2019-11-13)
3. 8a2cd3f512 (stash: remove the stash.useBuiltin setting,
   2020-03-03)
4. d03ebd411c (rebase: remove the rebase.useBuiltin setting,
   2019-03-18)
5. deeaf5ee07 (stash: remove documentation for `stash.useBuiltin`,
   2022-01-27)
6. 9bcde4d531 (rebase: remove transitory rebase.useBuiltin setting &
   env, 2021-03-23)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-02-06 15:03:34 -08:00
Ævar Arnfjörð Bjarmason
35898ad24d Makefile: use sha1collisiondetection by default on OSX and Darwin
When the sha1collisiondetection library was added and made the default
in [1], the interaction with APPLE_COMMON_CRYPTO added in [2] and [3]
seems to have been missed. On modern OSX and Darwin we are able to use
Apple's CommonCrypto both for SHA-1, and as a generic (but partial)
OpenSSL replacement.

This left OSX and Darwin without protection against the SHAttered
attack when building Git in its default configuration.

Let's also use sha1collisiondetection on OSX; to do so, we'll need to
split up the "APPLE_COMMON_CRYPTO" flag into that flag and a new
"APPLE_COMMON_CRYPTO_SHA1".

Because of this we can stop conflating whether we want to use Apple's
CommonCrypto at all, and whether we want to use it for SHA-1.  This
makes the CI recipe added in [4] simpler.

1. e6b07da278 (Makefile: make DC_SHA1 the default, 2017-03-17)
2. 4dcd7732db (Makefile: add support for Apple CommonCrypto facility, 2013-05-19)
3. 61067954ce (cache.h: eliminate SHA-1 deprecation warnings on Mac OS X, 2013-05-19)
4. 1ad5c3df35 (ci: use DC_SHA1=YesPlease on osx-clang job for CI,
   2022-10-20)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-12-16 06:06:56 +09:00
Junio C Hamano
0b32d1aea2 Merge branch 'ab/ci-use-macos-12'
CI fix.

* ab/ci-use-macos-12:
  CI: upgrade to macos-12, and pin OSX version
2022-12-10 14:01:06 +09:00
Ævar Arnfjörð Bjarmason
d11192255d CI: upgrade to macos-12, and pin OSX version
Per [1] and the warnings our CI is emitting, GitHub is phasing in
"macos-12" as their "macos-latest".

As with [2], let's pin our image to a specific version so that it does
not get swept out from under us, and our upgrade cycle can be more
predictable than whenever GitHub changes their images.

1. https://github.com/actions/runner-images/issues/6384
2. 0178420b9c (github-actions: run gcc-8 on ubuntu-20.04 image,
   2022-11-25)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-12-07 13:36:22 +09:00
Junio C Hamano
515ffabccf Merge branch 'jx/ci-ubuntu-fix'
Adjust the GitHub CI to newer ubuntu release.

* jx/ci-ubuntu-fix:
  ci: install python on ubuntu
  ci: use the same version of p4 on both Linux and macOS
  ci: remove the pipe after "p4 -V" to catch errors
  github-actions: run gcc-8 on ubuntu-20.04 image
2022-11-29 10:41:06 +09:00
Jiang Xin
0d3507f3e7 ci: install python on ubuntu
Python is missing from the default ubuntu-22.04 runner image, which
prevents git-p4 from working. To install python on ubuntu, we need
to provide the correct package names:

 * On Ubuntu 18.04 (bionic), "/usr/bin/python2" is provided by the
   "python" package, and "/usr/bin/python3" is provided by the "python3"
   package.

 * On Ubuntu 20.04 (focal) and above, "/usr/bin/python2" is provided by
   the "python2" package which has a different name from bionic, and
   "/usr/bin/python3" is provided by "python3".

Since the "ubuntu-latest" runner image has a higher version, its
safe to use "python2" or "python3" package name.

Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-11-27 09:33:43 +09:00
Jiang Xin
31a1952bbd ci: use the same version of p4 on both Linux and macOS
There would be a segmentation fault when running p4 v16.2 on Ubuntu
22.04, which is the latest version of the Ubuntu runner image for GitHub
Actions.

By checking each version from [1], we found that p4d version 21.1 and
above works properly on Ubuntu 22.04. But version 22.x breaks some p4
test cases, so p4 version 21.x is exactly the version we can use.

With this update, the versions of p4 for Linux and macOS happen to be
the same. So we can add the version number directly into the "P4WHENCE"
variable, and reuse it in p4 installation for macOS.

By removing the "LINUX_P4_VERSION" variable from "ci/lib.sh", the
comment left above has nothing to do with p4, but still applies to
git-lfs. Since we have a fixed version of git-lfs installed on Linux,
we may have a different version on macOS.

[1]: https://cdist2.perforce.com/perforce/

Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-11-27 09:32:56 +09:00
Jiang Xin
4137c84198 ci: remove the pipe after "p4 -V" to catch errors
When installing p4 as a dependency, we used to pipe the output of
"p4 -V" and "p4d -V" to validate the installation and print condensed
version information. But this would hide potential errors from p4 and
would stop with empty output. E.g., p4d version 16.2 running on Ubuntu
22.04 segfaults even before it produces any output.

By removing the pipe after "p4 -V" and "p4d -V", we may get a
verbose output, and stop immediately on errors because we have "set
-e" in "ci/lib.sh". Since we won't look at these trace logs unless
something fails, just including the raw output seems most sensible.

Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-11-27 09:31:59 +09:00
Jiang Xin
0178420b9c github-actions: run gcc-8 on ubuntu-20.04 image
GitHub has started to upgrade its runner image "ubuntu-latest" from
version "ubuntu-20.04" to version "ubuntu-22.04". The "gcc-8" package
cannot be found and installed on the new runner image.

Change some of the runner images from "ubuntu-latest" to "ubuntu-20.04"
in order to install "gcc-8" as a dependency.

The first revision of this patch tried to replace "$runs_on_pool" in
"ci/*.sh" with a new "$runs_on_os" environment variable based on the
"os" field in the matrix strategy. But these "os" fields in matrix
strategies are obsolete legacies from commit [1] and commit [2], and
are no longer useful. So remove these unused "os" fields.

[1]: c08bb26010 (CI: rename the "Linux32" job to lower-case "linux32",
                 2021-11-23)
[2]: 25715419bf (CI: don't run "make test" twice in one job, 2021-11-23)

Reviewed-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-11-27 09:31:12 +09:00
Ævar Arnfjörð Bjarmason
dc1cf3580e Makefile & test-tool: replace "DC_SHA1" variable with a "define"
Address the root cause of technical debt we've been carrying since
sha1collisiondetection was made the default in [1]. In a preceding
commit we narrowly fixed a bug where the "DC_SHA1" variable would be
unset (in combination with "NO_APPLE_COMMON_CRYPTO=" on OSX), even
though we had the sha1collisiondetection library enabled.

But the only reason we needed to have such a user-exposed knob went
away with [1], and it's been doing nothing useful since then. We don't
care if you define DC_SHA1=*, we only care that you don't ask for any
other SHA-1 implementation. If it turns out that you didn't, we'll use
sha1collisiondetection, whether you had "DC_SHA1" set or not.

As a result of this being confusing we had e.g. [2] for cmake and the
recent [3] for ci/lib.sh setting "DC_SHA1" explicitly, even though
this was always a NOOP.

A much simpler way to do this is to stop having the Makefile and
CMakeLists.txt set "DC_SHA1" to be picked up by test-lib.sh; let's
instead add a trivial "test-tool sha1-is-sha1dc". It returns zero if
we're using sha1collisiondetection, non-zero otherwise.

1. e6b07da278 (Makefile: make DC_SHA1 the default, 2017-03-17)
2. c4b2f41b5f (cmake: support for testing git with ctest, 2020-06-26)
3. 1ad5c3df35 (ci: use DC_SHA1=YesPlease on osx-clang job for CI,
   2022-10-20)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
2022-11-07 22:11:51 -05:00
Junio C Hamano
4039b8f112 Merge branch 'jc/more-sanitizer-at-ci'
Enable address and undefined sanitizer tasks at GitHub Actions CI.

* jc/more-sanitizer-at-ci:
  ci: add address and undefined sanitizer tasks
2022-10-25 17:11:44 -07:00
Junio C Hamano
1ad5c3df35 ci: use DC_SHA1=YesPlease on osx-clang job for CI
7b8cfe34 (Merge branch 'ed/fsmonitor-on-networked-macos',
2022-10-17) broke the build on macOS with sha1dc by bypassing our
hash abstraction (git_SHA_CTX etc.), but it wasn't caught before the
problematic topic was merged down to the 'master' branch.  Nobody
was even compile-testing with DC_SHA1 set, although it is the
recommended choice these days for folks who use SHA-1.

This was because the default for macOS uses Apple Common Crypto, and
both of the two CI jobs did not override the default.  Tweak one of
them to use DC_SHA1 to improve the coverage.

We may want to give similar diversity for Linux jobs so that some of
them build with other implementations of SHA-1; they currently all
build and test with DC_SHA1, as that is the default everywhere
other than macOS.

But let's start small to fill only the immediate need.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-10-20 10:01:37 -07:00
Junio C Hamano
1c0962c0c4 ci: add address and undefined sanitizer tasks
The current code is clean with these two sanitizers, and we would
like to keep it that way by running the checks for any new code.

The signal of "passed with asan, but not ubsan" (or vice versa) is
not that useful in practice, so it is tempting to run both sanitizers
in a single task, but it seems to take forever, so tentatively let's
try having two separate ones.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-10-20 09:20:59 -07:00
Ævar Arnfjörð Bjarmason
c24feabcfb CI: use "GIT_TEST_SANITIZE_LEAK_LOG=true" in linux-leaks
As noted in a preceding commit, the leak checking done by
"GIT_TEST_PASSING_SANITIZE_LEAK=true" (added in [1]) is incomplete
without combining it with "GIT_TEST_SANITIZE_LEAK_LOG=true".

Let's run our CI with that, to ensure that we catch cases where our
tests are missing the abort() exit code resulting from a leak for
whatever reason. The reasons for that are discussed in detail in a
preceding commit.

1. 956d2e4639 (tests: add a test mode for SANITIZE=leak, run it in
   CI, 2021-09-23)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-07-27 16:35:40 -07:00
Junio C Hamano
d0d96b8280 Merge branch 'js/ci-github-workflow-markup'
Recent CI update hides certain failures in test jobs, which has
been corrected.

* js/ci-github-workflow-markup:
  ci(github): also mark up compile errors
  ci(github): use grouping also in the `win-build` job
  ci(github): bring back the 'print test failures' step
2022-06-17 10:33:32 -07:00
Johannes Schindelin
cadcafc331 ci(github): also mark up compile errors
When GCC produces those helpful errors, we will want to present them in
the GitHub workflow runs in the most helpful manner. To that end, we
want to use workflow commands to render errors and warnings:
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions

In the previous commit, we ensured that grouping is used for the build
in all jobs, and this allows us to piggy-back onto the `group` function
to transmogrify the output.

Note: If `set -o pipefail` were available, we could do this in a little
more elegant way. But since some of the steps are run using `dash`, we
have to do a little `{ ...; echo $? >exit.status; } | ...` dance.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-06-13 09:58:42 -07:00
Johannes Schindelin
df5fed9c34 ci(github): use grouping also in the win-build job
We already do the same when building Git in all the other jobs.

This will allow us to piggy-back on top of grouping to mark up compiler
errors in the next commit.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-06-13 09:58:41 -07:00
Johannes Schindelin
5aeb145780 ci(github): bring back the 'print test failures' step
Git now shows better information in the GitHub workflow runs when a test
case fails. However, when a test case was implemented incorrectly and
therefore does not even run, nothing is shown.

Let's bring back the step that prints the full logs of the failed tests,
and to improve the user experience, print out an informational message
for readers so that they do not have to know/remember where to see the
full logs.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2022-06-08 16:12:37 -07:00
Junio C Hamano
fc5a070f59 Merge branch 'js/ci-github-workflow-markup'
Update the GitHub workflow support to make it quicker to get to the
failing test.

* js/ci-github-workflow-markup:
  ci: call `finalize_test_case_output` a little later
  ci(github): mention where the full logs can be found
  ci: use `--github-workflow-markup` in the GitHub workflow
  ci(github): avoid printing test case preamble twice
  ci(github): skip the logs of the successful test cases
  ci: optionally mark up output in the GitHub workflow
  ci/run-build-and-tests: add some structure to the GitHub workflow output
  ci: make it easier to find failed tests' logs in the GitHub workflow
  ci/run-build-and-tests: take a more high-level view
  test(junit): avoid line feeds in XML attributes
  tests: refactor --write-junit-xml code
  ci: fix code style
2022-06-07 14:10:57 -07:00