# Performance testing framework. Each perf script starts much like
# a normal test script, except it sources this library instead of
# test-lib.sh. See t/perf/README for documentation.
#
# Copyright (c) 2011 Thomas Rast
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses/ .
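
# As a quick illustration, a minimal perf script built on this library
# might look like the sketch below (the script name and timed command
# are hypothetical; see t/perf/README for the authoritative format):
#
#	#!/bin/sh
#	test_description='performance of rev-list'
#	. ./perf-lib.sh
#
#	test_perf_default_repo
#
#	test_perf 'rev-list --all' '
#		git rev-list --all >/dev/null
#	'
#
#	test_done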
# These variables must be set before the inclusion of test-lib.sh below,
# because it will change our working directory.
TEST_DIRECTORY=$(pwd)/..
TEST_OUTPUT_DIRECTORY=$(pwd)
TEST_NO_CREATE_REPO=t
TEST_NO_MALLOC_CHECK=t
. ../test-lib.sh

unset GIT_CONFIG_NOSYSTEM
GIT_CONFIG_SYSTEM="$TEST_DIRECTORY/perf/config"
export GIT_CONFIG_SYSTEM
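# (Pointing the "system" config at t/perf/config keeps the timings from
# being skewed by whatever happens to be in the user's real system-wide
# gitconfig.)
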
if test -n "$GIT_TEST_INSTALLED" -a -z "$PERF_SET_GIT_TEST_INSTALLED"
then
error "Do not use GIT_TEST_INSTALLED with the perf tests.
Instead use:
./run <path-to-git> -- <tests>
See t/perf/README for details."
fi
# Variables from test-lib that are normally internal to the tests; we
# need to export them for test_perf subshells
export TEST_DIRECTORY TRASH_DIRECTORY GIT_BUILD_DIR GIT_TEST_CMP
MODERN_GIT=$GIT_BUILD_DIR/bin-wrappers/git
export MODERN_GIT
MODERN_SCALAR=$GIT_BUILD_DIR/bin-wrappers/scalar
export MODERN_SCALAR
perf_results_dir=$TEST_RESULTS_DIR
test -n "$GIT_PERF_SUBSECTION" && perf_results_dir="$perf_results_dir/$GIT_PERF_SUBSECTION"
mkdir -p "$perf_results_dir"
rm -f "$perf_results_dir"/$(basename "$0" .sh).subtests

die_if_build_dir_not_repo () {
	if ! ( cd "$TEST_DIRECTORY/.." &&
		git rev-parse --build-dir >/dev/null 2>&1 ); then
		error "No $1 defined, and your build directory is not a repo"
	fi
}
if test -z "$GIT_PERF_REPO"; then
die_if_build_dir_not_repo '$GIT_PERF_REPO'
GIT_PERF_REPO=$TEST_DIRECTORY/..
fi
if test -z "$GIT_PERF_LARGE_REPO"; then
die_if_build_dir_not_repo '$GIT_PERF_LARGE_REPO'
GIT_PERF_LARGE_REPO=$TEST_DIRECTORY/..
fi

test_perf_do_repo_symlink_config_ () {
	test_have_prereq SYMLINKS || git config core.symlinks false
}
test_perf_copy_repo_contents () {
	for stuff in "$1"/*
	do
		case "$stuff" in
		*/objects|*/hooks|*/config|*/commondir|*/gitdir|*/worktrees|*/fsmonitor--daemon*)
			;;
		*)
			cp -R "$stuff" "$repo/.git/" || exit 1
			;;
		esac
	done
}
test_perf_create_repo_from () {
	test "$#" = 2 ||
		BUG "not 2 parameters to test-create-repo"
	repo="$1"
	source="$2"
	source_git="$("$MODERN_GIT" -C "$source" rev-parse --git-dir)"
	objects_dir="$("$MODERN_GIT" -C "$source" rev-parse --git-path objects)"
	common_dir="$("$MODERN_GIT" -C "$source" rev-parse --git-common-dir)"
	mkdir -p "$repo/.git"
	(
		cd "$source" &&
		{ cp -Rl "$objects_dir" "$repo/.git/" 2>/dev/null ||
			cp -R "$objects_dir" "$repo/.git/"; } &&

		# common_dir must come first here, since we want source_git to
		# take precedence and overwrite any overlapping files
		test_perf_copy_repo_contents "$common_dir"
		if test "$source_git" != "$common_dir"
		then
			test_perf_copy_repo_contents "$source_git"
		fi
	) &&
	(
		cd "$repo" &&
		"$MODERN_GIT" init -q &&
		test_perf_do_repo_symlink_config_ &&
		mv .git/hooks .git/hooks-disabled 2>/dev/null &&
		if test -f .git/index.lock
		then
			# We may be copying a repo that can't run "git
			# status" due to a locked index. Since we have
			# a copy it's fine to remove the lock.
			rm .git/index.lock
		fi &&
		if test_bool_env GIT_PERF_USE_SCALAR false
		then
			"$MODERN_SCALAR" register
		fi
	) || error "failed to copy repository '$source' to '$repo'"
}

# call at least one of these to establish an appropriately-sized repository
test_perf_fresh_repo () {
	repo="${1:-$TRASH_DIRECTORY}"
	"$MODERN_GIT" init -q "$repo" &&
	(
		cd "$repo" &&
		test_perf_do_repo_symlink_config_ &&
		if test_bool_env GIT_PERF_USE_SCALAR false
		then
			"$MODERN_SCALAR" register
		fi
	)
}
test_perf_default_repo () {
	test_perf_create_repo_from "${1:-$TRASH_DIRECTORY}" "$GIT_PERF_REPO"
}

test_perf_large_repo () {
	if test "$GIT_PERF_LARGE_REPO" = "$GIT_BUILD_DIR"; then
		echo "warning: \$GIT_PERF_LARGE_REPO is \$GIT_BUILD_DIR." >&2
		echo "warning: This will work, but may not be a sufficiently large repo" >&2
		echo "warning: for representative measurements." >&2
	fi
	test_perf_create_repo_from "${1:-$TRASH_DIRECTORY}" "$GIT_PERF_LARGE_REPO"
}
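
# For instance, a perf script will normally call one of the helpers
# above right after sourcing perf-lib.sh; both lines below are
# illustrative:
#
#	test_perf_default_repo
#	test_perf_large_repo "$TRASH_DIRECTORY/large"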

test_checkout_worktree () {
	git checkout-index -u -a ||
		error "git checkout-index failed"
}

# Performance tests should never fail. If they do, stop immediately.
immediate=t

# Perf tests require GNU time
case "$(uname -s)" in Darwin) GTIME="${GTIME:-gtime}";; esac
GTIME="${GTIME:-/usr/bin/time}"
test_run_perf_ () {
	test_cleanup=:
	test_export_="test_cleanup"
	export test_cleanup test_export_
	"$GTIME" -f "%E %U %S" -o test_time.$i "$TEST_SHELL_PATH" -c '
. '"$TEST_DIRECTORY"/test-lib-functions.sh'
test_export () {
	test_export_="$test_export_ $*"
}
'"$1"'
ret=$?
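
# Everything here still runs inside the timed shell: dump its variables
# with "set" and rewrite each name registered via test_export into an
# "export name=value" line (with single quotes escaped) so that the
# parent shell can source test_vars afterwards.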
needles=
for v in $test_export_
do
needles="$needles;s/^$v=/export $v=/p"
done
set | sed -n "s'"/'/'\\\\''/g"'$needles" >test_vars
exit $ret' >&3 2>&4
	eval_ret=$?

	if test $eval_ret = 0 || test -n "$expecting_failure"
	then
		test_eval_ "$test_cleanup"
		. ./test_vars || error "failed to load updated environment"
	fi
	if test "$verbose" = "t" && test -n "$HARNESS_ACTIVE"; then
		echo ""
	fi
	return "$eval_ret"
}

test_wrapper_ () {
	local test_wrapper_func_="$1"; shift
	local test_title_="$1"; shift
	test_start_
	test_prereq=
	test_perf_setup_=
	while test $# != 0
	do
		case $1 in
		--prereq)
			test_prereq=$2
			shift
			;;
		--setup)
			test_perf_setup_=$2
			shift
			;;
		*)
			break
			;;
		esac
		shift
	done
	test "$#" = 1 || BUG "test_wrapper_ needs 2 positional parameters"
	export test_prereq
	export test_perf_setup_

	if ! test_skip "$test_title_" "$@"
	then
		base=$(basename "$0" .sh)
		echo "$test_count" >>"$perf_results_dir"/$base.subtests
		echo "$test_title_" >"$perf_results_dir"/$base.$test_count.descr
		base="$perf_results_dir"/"$PERF_RESULTS_PREFIX$(basename "$0" .sh)"."$test_count"
		"$test_wrapper_func_" "$test_title_" "$@"
	fi

	test_finish_
}

test_perf_ () {
	if test -z "$verbose"; then
		printf "%s" "perf $test_count - $1:"
	else
		echo "perf $test_count - $1:"
	fi
	for i in $(test_seq 1 $GIT_PERF_REPEAT_COUNT); do
		if test -n "$test_perf_setup_"
		then
			say >&3 "setup: $test_perf_setup_"
			if ! test_eval_ $test_perf_setup_
			then
				test_failure_ "$test_perf_setup_"
				break
			fi
		fi
		say >&3 "running: $2"
		if test_run_perf_ "$2"
		then
			if test -z "$verbose"; then
				printf " %s" "$i"
			else
				echo "* timing run $i/$GIT_PERF_REPEAT_COUNT:"
			fi
		else
			test -z "$verbose" && echo
			test_failure_ "$@"
			break
		fi
	done
	if test -z "$verbose"; then
		echo " ok"
	else
		test_ok_ "$1"
	fi
"$TEST_DIRECTORY"/perf/min_time.perl test_time.* >"$base".result
rm test_time.*
}
# Usage: test_perf 'title' [options] 'perf-test'
#	Run the performance test script specified in perf-test with
#	optional prerequisite and setup steps.
# Options:
#	--prereq prerequisites: Skip the test if prerequisites aren't met
#	--setup "setup-steps": Run setup steps prior to each measured iteration
#
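# For illustration only (a sketch, not normative documentation): a perf
# script would typically call this as
#
#	test_perf 'rev-list --all' '
#		git rev-list --all >/dev/null
#	'
#
# or, with setup steps that are re-run before each timed iteration
# (the branch names here are hypothetical):
#
#	test_perf 'checkout' --setup 'git checkout base' '
#		git checkout topic
#	'
#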
test_perf () {
	test_wrapper_ test_perf_ "$@"
}
test_size_ () {
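	# Run any --setup steps before taking the size measurement.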
	if test -n "$test_perf_setup_"
	then
		say >&3 "setup: $test_perf_setup_"
		test_eval_ $test_perf_setup_
	fi

	say >&3 "running: $2"
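	# The size test's standard output is routed to fd 3 and captured
	# below as the recorded result.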
	if test_eval_ "$2" 3>"$base".result; then
		test_ok_ "$1"
	else
		test_failure_ "$@"
	fi
}
# Usage: test_size 'title' [options] 'size-test'
#	Run the size test script specified in size-test with optional
#	prerequisite and setup steps. Records the numeric value printed
#	by size-test as the test's result.
# Options:
#	--prereq prerequisites: Skip the test if prerequisites aren't met
#	--setup "setup-steps": Run setup steps prior to the size measurement
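# For illustration only (a sketch; the file name is hypothetical): a
# size test reports a single number on its standard output, e.g.
#
#	test_size 'pack size' '
#		wc -c <pack-file
#	'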
test_size () {
	test_wrapper_ test_size_ "$@"
}
# We extend test_done to print timings at the end (./run disables this
# and does it after running everything)
test_at_end_hook_ () {
	if test -z "$GIT_PERF_AGGREGATING_LATER"; then
		(
		cd "$TEST_DIRECTORY"/perf &&
		./aggregate.perl --results-dir="$TEST_RESULTS_DIR" $(basename "$0")
		)
	fi
}
test_export () {
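	# Make variables set inside one test visible to later tests; see
	# t/perf/README.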
	export "$@"
}
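# The PERF_EXTRA prerequisite is satisfied when the user opts in to the
# more expensive extra tests by setting GIT_PERF_EXTRA to true.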
test_lazy_prereq PERF_EXTRA 'test_bool_env GIT_PERF_EXTRA false'