Merge branch 'js/doc-unit-tests'

Process to add some form of low-level unit tests has started.

* js/doc-unit-tests:
  ci: run unit tests in CI
  unit tests: add TAP unit test framework
  unit tests: add a project plan document
Commit 8bf6fbd00d by Junio C Hamano, 2023-12-09 16:37:47 -08:00
13 changed files with 1041 additions and 5 deletions


@@ -19,4 +19,4 @@ freebsd_12_task:
build_script:
- su git -c gmake
test_script:
- su git -c 'gmake test'
- su git -c 'gmake DEFAULT_UNIT_TEST_TARGET=unit-tests-prove test unit-tests'


@@ -122,6 +122,7 @@ TECH_DOCS += technical/scalar
TECH_DOCS += technical/send-pack-pipeline
TECH_DOCS += technical/shallow
TECH_DOCS += technical/trivial-merge
TECH_DOCS += technical/unit-tests
SP_ARTICLES += $(TECH_DOCS)
SP_ARTICLES += technical/api-index


@@ -0,0 +1,240 @@
= Unit Testing
In our current testing environment, we spend a significant amount of effort
crafting end-to-end tests for error conditions that could easily be captured by
unit tests (or we simply forgo some hard-to-setup and rare error conditions).
Unit tests additionally provide stability to the codebase and can simplify
debugging through isolation. Writing unit tests in pure C, rather than with our
current shell/test-tool helper setup, simplifies test setup, simplifies passing
data around (no shell-isms required), and reduces testing runtime by not
spawning a separate process for every test invocation.
We believe that a large body of unit tests, living alongside the existing test
suite, will improve code quality for the Git project.
== Definitions
For the purposes of this document, we'll use *test framework* to refer to
projects that support writing test cases and running tests within the context
of a single executable. *Test harness* will refer to projects that manage
running multiple executables (each of which may contain multiple test cases) and
aggregating their results.
In reality, these terms are not strictly defined, and many of the projects
discussed below contain features from both categories.
For now, we will evaluate projects solely on their framework features. Since we
are relying on having TAP output (see below), we can assume that any framework
can be made to work with a harness that we can choose later.
== Summary
We believe the best way forward is to implement a custom TAP framework for the
Git project. We use a version of the framework originally proposed in
https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[1].
See the <<framework-selection,Framework Selection>> section below for the
rationale behind this decision.
== Choosing a test harness
During upstream discussion, it was occasionally noted that `prove` provides many
convenient features, such as scheduling slower tests first, or re-running
previously failed tests.
While we already support the use of `prove` as a test harness for the shell
tests, it is not strictly required. The t/Makefile allows running shell tests
directly (though with interleaved output if parallelism is enabled). Git
developers who wish to use `prove` as a more advanced harness can do so by
setting DEFAULT_TEST_TARGET=prove in their config.mak.
We will follow a similar approach for unit tests: by default the test
executables will be run directly from the t/Makefile, but `prove` can be
selected by setting DEFAULT_UNIT_TEST_TARGET=unit-tests-prove.
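For example, a developer's config.mak might select `prove` for both suites (a sketch; the variable names and target values are taken from the Makefile and t/Makefile changes in this commit):

```
# config.mak: run both shell tests and unit tests through prove
DEFAULT_TEST_TARGET = prove
DEFAULT_UNIT_TEST_TARGET = unit-tests-prove
```

The same choice can be made per invocation, as the CI scripts in this commit do with `make DEFAULT_UNIT_TEST_TARGET=unit-tests-prove unit-tests`.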
[[framework-selection]]
== Framework selection
There are a variety of features we can use to rank the candidate frameworks, and
those features have different priorities:
* Critical features: we probably won't consider a framework without these
** Can we legally / easily use the project?
*** <<license,License>>
*** <<vendorable-or-ubiquitous,Vendorable or ubiquitous>>
*** <<maintainable-extensible,Maintainable / extensible>>
*** <<major-platform-support,Major platform support>>
** Does the project support our bare-minimum needs?
*** <<tap-support,TAP support>>
*** <<diagnostic-output,Diagnostic output>>
*** <<runtime-skippable-tests,Runtime-skippable tests>>
* Nice-to-have features:
** <<parallel-execution,Parallel execution>>
** <<mock-support,Mock support>>
** <<signal-error-handling,Signal & error handling>>
* Tie-breaker stats
** <<project-kloc,Project KLOC>>
** <<adoption,Adoption>>
[[license]]
=== License
We must be able to legally use the framework in connection with Git. As Git is
licensed only under GPLv2, we must eliminate any LGPLv3, GPLv3, or Apache 2.0
projects.
[[vendorable-or-ubiquitous]]
=== Vendorable or ubiquitous
We want to avoid forcing Git developers to install new tools just to run unit
tests. Any prospective frameworks and harnesses must either be vendorable
(meaning, we can copy their source directly into Git's repository), or so
ubiquitous that it is reasonable to expect that most developers will have the
tools installed already.
[[maintainable-extensible]]
=== Maintainable / extensible
It is unlikely that any pre-existing project perfectly fits our needs, so any
project we select will need to be actively maintained and open to accepting
changes. Alternatively, assuming we are vendoring the source into our repo, it
must be simple enough that Git developers can feel comfortable making changes as
needed to our version.
In the comparison table below, "True" means that the framework seems to have
active developers, that it is simple enough that Git developers can make changes
to it, and that the project seems open to accepting external contributions (or
that it is vendorable). "Partial" means that at least one of the above
conditions holds.
[[major-platform-support]]
=== Major platform support
At a bare minimum, unit-testing must work on Linux, macOS, and Windows.
In the comparison table below, "True" means that it works on all three major
platforms with no issues. "Partial" means that there may be annoyances on one or
more platforms, but it is still usable in principle.
[[tap-support]]
=== TAP support
The https://testanything.org/[Test Anything Protocol] is a text-based interface
that allows tests to communicate with a test harness. It is already used by
Git's integration test suite. Supporting TAP output is a mandatory feature for
any prospective test framework.
In the comparison table below, "True" means this is natively supported.
"Partial" means TAP output must be generated by post-processing the native
output.
Frameworks that do not have at least Partial support will not be evaluated
further.
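For illustration, here is a minimal TAP stream as a harness would consume it (lines modeled on the expected output in t/t0080-unit-test-output.sh from this commit; diagnostics are `#`-prefixed comment lines emitted before the failing result):

```
ok 1 - passing test
# check "1 == 2" failed at t/unit-tests/t-basic.c:76
not ok 2 - failing test
ok 3 - some test # SKIP
not ok 4 - known breakage # TODO
1..4
```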
[[diagnostic-output]]
=== Diagnostic output
When a test case fails, the framework must generate enough diagnostic output to
help developers find the appropriate test case in source code in order to debug
the failure.
[[runtime-skippable-tests]]
=== Runtime-skippable tests
Test authors may wish to skip certain test cases based on runtime circumstances,
so the framework should support this.
[[parallel-execution]]
=== Parallel execution
Ideally, we will build up a significant collection of unit test cases, most
likely split across multiple executables. It will be necessary to run these
tests in parallel to enable fast develop-test-debug cycles.
In the comparison table below, "True" means that individual test cases within a
single test executable can be run in parallel. We assume that executable-level
parallelism can be handled by the test harness.
[[mock-support]]
=== Mock support
Unit test authors may wish to test code that interacts with objects that may be
inconvenient to handle in a test (e.g. interacting with a network service).
Mocking allows test authors to provide a fake implementation of these objects
for more convenient tests.
[[signal-error-handling]]
=== Signal & error handling
The test framework should fail gracefully when test cases are themselves buggy
or when they are interrupted by signals during runtime.
[[project-kloc]]
=== Project KLOC
The size of the project, in thousands of lines of code as measured by
https://dwheeler.com/sloccount/[sloccount] (rounded up to the next multiple of
1,000). As a tie-breaker, we probably prefer a project with fewer LOC.
[[adoption]]
=== Adoption
As a tie-breaker, we prefer a more widely-used project. We use the number of
GitHub / GitLab stars to estimate this.
=== Comparison
:true: [lime-background]#True#
:false: [red-background]#False#
:partial: [yellow-background]#Partial#
:gpl: [lime-background]#GPL v2#
:isc: [lime-background]#ISC#
:mit: [lime-background]#MIT#
:expat: [lime-background]#Expat#
:lgpl: [lime-background]#LGPL v2.1#
:custom-impl: https://lore.kernel.org/git/c902a166-98ce-afba-93f2-ea6027557176@gmail.com/[Custom Git impl.]
:greatest: https://github.com/silentbicycle/greatest[Greatest]
:criterion: https://github.com/Snaipe/Criterion[Criterion]
:c-tap: https://github.com/rra/c-tap-harness/[C TAP]
:check: https://libcheck.github.io/check/[Check]
[format="csv",options="header",width="33%",subs="specialcharacters,attributes,quotes,macros"]
|=====
Framework,"<<license,License>>","<<vendorable-or-ubiquitous,Vendorable or ubiquitous>>","<<maintainable-extensible,Maintainable / extensible>>","<<major-platform-support,Major platform support>>","<<tap-support,TAP support>>","<<diagnostic-output,Diagnostic output>>","<<runtime-skippable-tests,Runtime-skippable tests>>","<<parallel-execution,Parallel execution>>","<<mock-support,Mock support>>","<<signal-error-handling,Signal & error handling>>","<<project-kloc,Project KLOC>>","<<adoption,Adoption>>"
{custom-impl},{gpl},{true},{true},{true},{true},{true},{true},{false},{false},{false},1,0
{greatest},{isc},{true},{partial},{true},{partial},{true},{true},{false},{false},{false},3,1400
{criterion},{mit},{false},{partial},{true},{true},{true},{true},{true},{false},{true},19,1800
{c-tap},{expat},{true},{partial},{partial},{true},{false},{true},{false},{false},{false},4,33
{check},{lgpl},{false},{partial},{true},{true},{true},{false},{false},{false},{true},17,973
|=====
=== Additional framework candidates
Several suggested frameworks have been eliminated from consideration:
* Incompatible licenses:
** https://github.com/zorgnax/libtap[libtap] (LGPL v3)
** https://cmocka.org/[cmocka] (Apache 2.0)
* Missing source: https://www.kindahl.net/mytap/doc/index.html[MyTap]
* No TAP support:
** https://nemequ.github.io/munit/[µnit]
** https://github.com/google/cmockery[cmockery]
** https://github.com/lpabon/cmockery2[cmockery2]
** https://github.com/ThrowTheSwitch/Unity[Unity]
** https://github.com/siu/minunit[minunit]
** https://cunit.sourceforge.net/[CUnit]
== Milestones
* Add useful tests of library-like code
* Integrate with
https://lore.kernel.org/git/20230502211454.1673000-1-calvinwan@google.com/[stdlib
work]
* Run alongside regular `make test` target


@@ -682,6 +682,9 @@ TEST_BUILTINS_OBJS =
TEST_OBJS =
TEST_PROGRAMS_NEED_X =
THIRD_PARTY_SOURCES =
UNIT_TEST_PROGRAMS =
UNIT_TEST_DIR = t/unit-tests
UNIT_TEST_BIN = $(UNIT_TEST_DIR)/bin
# Having this variable in your environment would break pipelines because
# you cause "cd" to echo its destination to stdout. It can also take
@@ -1335,6 +1338,12 @@ THIRD_PARTY_SOURCES += compat/regex/%
THIRD_PARTY_SOURCES += sha1collisiondetection/%
THIRD_PARTY_SOURCES += sha1dc/%
UNIT_TEST_PROGRAMS += t-basic
UNIT_TEST_PROGRAMS += t-strbuf
UNIT_TEST_PROGS = $(patsubst %,$(UNIT_TEST_BIN)/%$X,$(UNIT_TEST_PROGRAMS))
UNIT_TEST_OBJS = $(patsubst %,$(UNIT_TEST_DIR)/%.o,$(UNIT_TEST_PROGRAMS))
UNIT_TEST_OBJS += $(UNIT_TEST_DIR)/test-lib.o
# xdiff and reftable libs may in turn depend on what is in libgit.a
GITLIBS = common-main.o $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(LIB_FILE)
EXTLIBS =
@@ -2676,6 +2685,7 @@ OBJECTS += $(TEST_OBJS)
OBJECTS += $(XDIFF_OBJS)
OBJECTS += $(FUZZ_OBJS)
OBJECTS += $(REFTABLE_OBJS) $(REFTABLE_TEST_OBJS)
OBJECTS += $(UNIT_TEST_OBJS)
ifndef NO_CURL
OBJECTS += http.o http-walker.o remote-curl.o
@@ -3178,7 +3188,7 @@ endif
test_bindir_programs := $(patsubst %,bin-wrappers/%,$(BINDIR_PROGRAMS_NEED_X) $(BINDIR_PROGRAMS_NO_X) $(TEST_PROGRAMS_NEED_X))
all:: $(TEST_PROGRAMS) $(test_bindir_programs)
all:: $(TEST_PROGRAMS) $(test_bindir_programs) $(UNIT_TEST_PROGS)
bin-wrappers/%: wrap-for-bin.sh
$(call mkdir_p_parent_template)
@@ -3609,7 +3619,7 @@ endif
artifacts-tar:: $(ALL_COMMANDS_TO_INSTALL) $(SCRIPT_LIB) $(OTHER_PROGRAMS) \
GIT-BUILD-OPTIONS $(TEST_PROGRAMS) $(test_bindir_programs) \
$(MOFILES)
$(UNIT_TEST_PROGS) $(MOFILES)
$(QUIET_SUBDIR0)templates $(QUIET_SUBDIR1) \
SHELL_PATH='$(SHELL_PATH_SQ)' PERL_PATH='$(PERL_PATH_SQ)'
test -n "$(ARTIFACTS_DIRECTORY)"
@@ -3671,7 +3681,7 @@ clean: profile-clean coverage-clean cocciclean
$(RM) headless-git.o
$(RM) $(LIB_FILE) $(XDIFF_LIB) $(REFTABLE_LIB) $(REFTABLE_TEST_LIB)
$(RM) $(ALL_PROGRAMS) $(SCRIPT_LIB) $(BUILT_INS) $(OTHER_PROGRAMS)
$(RM) $(TEST_PROGRAMS)
$(RM) $(TEST_PROGRAMS) $(UNIT_TEST_PROGS)
$(RM) $(FUZZ_PROGRAMS)
$(RM) $(SP_OBJ)
$(RM) $(HCC)
@@ -3850,3 +3860,15 @@ $(FUZZ_PROGRAMS): all
$(XDIFF_OBJS) $(EXTLIBS) git.o $@.o $(LIB_FUZZING_ENGINE) -o $@
fuzz-all: $(FUZZ_PROGRAMS)
$(UNIT_TEST_BIN):
@mkdir -p $(UNIT_TEST_BIN)
$(UNIT_TEST_PROGS): $(UNIT_TEST_BIN)/%$X: $(UNIT_TEST_DIR)/%.o $(UNIT_TEST_DIR)/test-lib.o $(GITLIBS) GIT-LDFLAGS $(UNIT_TEST_BIN)
$(QUIET_LINK)$(CC) $(ALL_CFLAGS) -o $@ $(ALL_LDFLAGS) \
$(filter %.o,$^) $(filter %.a,$^) $(LIBS)
.PHONY: build-unit-tests unit-tests
build-unit-tests: $(UNIT_TEST_PROGS)
unit-tests: $(UNIT_TEST_PROGS)
$(MAKE) -C t/ unit-tests


@@ -50,6 +50,8 @@ if test -n "$run_tests"
then
group "Run tests" make test ||
handle_failed_tests
group "Run unit tests" \
make DEFAULT_UNIT_TEST_TARGET=unit-tests-prove unit-tests
fi
check_unignored_build_artifacts


@@ -15,4 +15,9 @@ group "Run tests" make --quiet -C t T="$(cd t &&
tr '\n' ' ')" ||
handle_failed_tests
# We only have one unit test at the moment, so run it in the first slice
if [ "$1" == "0" ] ; then
group "Run unit tests" make --quiet -C t unit-tests-prove
fi
check_unignored_build_artifacts


@@ -17,6 +17,7 @@ TAR ?= $(TAR)
RM ?= rm -f
PROVE ?= prove
DEFAULT_TEST_TARGET ?= test
DEFAULT_UNIT_TEST_TARGET ?= unit-tests-raw
TEST_LINT ?= test-lint
ifdef TEST_OUTPUT_DIRECTORY
@@ -41,6 +42,7 @@ TPERF = $(sort $(wildcard perf/p[0-9][0-9][0-9][0-9]-*.sh))
TINTEROP = $(sort $(wildcard interop/i[0-9][0-9][0-9][0-9]-*.sh))
CHAINLINTTESTS = $(sort $(patsubst chainlint/%.test,%,$(wildcard chainlint/*.test)))
CHAINLINT = '$(PERL_PATH_SQ)' chainlint.pl
UNIT_TESTS = $(sort $(filter-out unit-tests/bin/t-basic%,$(wildcard unit-tests/bin/t-*)))
# `test-chainlint` (which is a dependency of `test-lint`, `test` and `prove`)
# checks all tests in all scripts via a single invocation, so tell individual
@@ -65,6 +67,17 @@ prove: pre-clean check-chainlint $(TEST_LINT)
$(T):
@echo "*** $@ ***"; '$(TEST_SHELL_PATH_SQ)' $@ $(GIT_TEST_OPTS)
$(UNIT_TESTS):
@echo "*** $@ ***"; $@
.PHONY: unit-tests unit-tests-raw unit-tests-prove
unit-tests: $(DEFAULT_UNIT_TEST_TARGET)
unit-tests-raw: $(UNIT_TESTS)
unit-tests-prove:
@echo "*** prove - unit tests ***"; $(PROVE) $(GIT_PROVE_OPTS) $(UNIT_TESTS)
pre-clean:
$(RM) -r '$(TEST_RESULTS_DIRECTORY_SQ)'
@@ -149,4 +162,4 @@ perf:
$(MAKE) -C perf/ all
.PHONY: pre-clean $(T) aggregate-results clean valgrind perf \
check-chainlint clean-chainlint test-chainlint
check-chainlint clean-chainlint test-chainlint $(UNIT_TESTS)

t/t0080-unit-test-output.sh (new executable file, 58 lines)

@@ -0,0 +1,58 @@
#!/bin/sh
test_description='Test the output of the unit test framework'
. ./test-lib.sh
test_expect_success 'TAP output from unit tests' '
cat >expect <<-EOF &&
ok 1 - passing test
ok 2 - passing test and assertion return 1
# check "1 == 2" failed at t/unit-tests/t-basic.c:76
# left: 1
# right: 2
not ok 3 - failing test
ok 4 - failing test and assertion return 0
not ok 5 - passing TEST_TODO() # TODO
ok 6 - passing TEST_TODO() returns 1
# todo check ${SQ}check(x)${SQ} succeeded at t/unit-tests/t-basic.c:25
not ok 7 - failing TEST_TODO()
ok 8 - failing TEST_TODO() returns 0
# check "0" failed at t/unit-tests/t-basic.c:30
# skipping test - missing prerequisite
# skipping check ${SQ}1${SQ} at t/unit-tests/t-basic.c:32
ok 9 - test_skip() # SKIP
ok 10 - skipped test returns 1
# skipping test - missing prerequisite
ok 11 - test_skip() inside TEST_TODO() # SKIP
ok 12 - test_skip() inside TEST_TODO() returns 1
# check "0" failed at t/unit-tests/t-basic.c:48
not ok 13 - TEST_TODO() after failing check
ok 14 - TEST_TODO() after failing check returns 0
# check "0" failed at t/unit-tests/t-basic.c:56
not ok 15 - failing check after TEST_TODO()
ok 16 - failing check after TEST_TODO() returns 0
# check "!strcmp("\thello\\\\", "there\"\n")" failed at t/unit-tests/t-basic.c:61
# left: "\011hello\\\\"
# right: "there\"\012"
# check "!strcmp("NULL", NULL)" failed at t/unit-tests/t-basic.c:62
# left: "NULL"
# right: NULL
# check "${SQ}a${SQ} == ${SQ}\n${SQ}" failed at t/unit-tests/t-basic.c:63
# left: ${SQ}a${SQ}
# right: ${SQ}\012${SQ}
# check "${SQ}\\\\${SQ} == ${SQ}\\${SQ}${SQ}" failed at t/unit-tests/t-basic.c:64
# left: ${SQ}\\\\${SQ}
# right: ${SQ}\\${SQ}${SQ}
not ok 17 - messages from failing string and char comparison
# BUG: test has no checks at t/unit-tests/t-basic.c:91
not ok 18 - test with no checks
ok 19 - test with no checks returns 0
1..19
EOF
! "$GIT_BUILD_DIR"/t/unit-tests/bin/t-basic >actual &&
test_cmp expect actual
'
test_done

t/unit-tests/.gitignore (vendored, new file, 1 line)

@@ -0,0 +1 @@
/bin

t/unit-tests/t-basic.c (new file, 95 lines)

@@ -0,0 +1,95 @@
#include "test-lib.h"
/*
* The purpose of this "unit test" is to verify a few invariants of the unit
* test framework itself, as well as to provide examples of output from actually
* failing tests. As such, it is intended that this test fails, and thus it
* should not be run as part of `make unit-tests`. Instead, we verify it behaves
* as expected in the integration test t0080-unit-test-output.sh
*/
/* Used to store the return value of check_int(). */
static int check_res;
/* Used to store the return value of TEST(). */
static int test_res;
static void t_res(int expect)
{
check_int(check_res, ==, expect);
check_int(test_res, ==, expect);
}
static void t_todo(int x)
{
check_res = TEST_TODO(check(x));
}
static void t_skip(void)
{
check(0);
test_skip("missing prerequisite");
check(1);
}
static int do_skip(void)
{
test_skip("missing prerequisite");
return 1;
}
static void t_skip_todo(void)
{
check_res = TEST_TODO(do_skip());
}
static void t_todo_after_fail(void)
{
check(0);
TEST_TODO(check(0));
}
static void t_fail_after_todo(void)
{
check(1);
TEST_TODO(check(0));
check(0);
}
static void t_messages(void)
{
check_str("\thello\\", "there\"\n");
check_str("NULL", NULL);
check_char('a', ==, '\n');
check_char('\\', ==, '\'');
}
static void t_empty(void)
{
; /* empty */
}
int cmd_main(int argc, const char **argv)
{
test_res = TEST(check_res = check_int(1, ==, 1), "passing test");
TEST(t_res(1), "passing test and assertion return 1");
test_res = TEST(check_res = check_int(1, ==, 2), "failing test");
TEST(t_res(0), "failing test and assertion return 0");
test_res = TEST(t_todo(0), "passing TEST_TODO()");
TEST(t_res(1), "passing TEST_TODO() returns 1");
test_res = TEST(t_todo(1), "failing TEST_TODO()");
TEST(t_res(0), "failing TEST_TODO() returns 0");
test_res = TEST(t_skip(), "test_skip()");
TEST(check_int(test_res, ==, 1), "skipped test returns 1");
test_res = TEST(t_skip_todo(), "test_skip() inside TEST_TODO()");
TEST(t_res(1), "test_skip() inside TEST_TODO() returns 1");
test_res = TEST(t_todo_after_fail(), "TEST_TODO() after failing check");
TEST(check_int(test_res, ==, 0), "TEST_TODO() after failing check returns 0");
test_res = TEST(t_fail_after_todo(), "failing check after TEST_TODO()");
TEST(check_int(test_res, ==, 0), "failing check after TEST_TODO() returns 0");
TEST(t_messages(), "messages from failing string and char comparison");
test_res = TEST(t_empty(), "test with no checks");
TEST(check_int(test_res, ==, 0), "test with no checks returns 0");
return test_done();
}

t/unit-tests/t-strbuf.c (new file, 120 lines)

@@ -0,0 +1,120 @@
#include "test-lib.h"
#include "strbuf.h"
/* wrapper that supplies tests with an empty, initialized strbuf */
static void setup(void (*f)(struct strbuf*, void*), void *data)
{
struct strbuf buf = STRBUF_INIT;
f(&buf, data);
strbuf_release(&buf);
check_uint(buf.len, ==, 0);
check_uint(buf.alloc, ==, 0);
}
/* wrapper that supplies tests with a populated, initialized strbuf */
static void setup_populated(void (*f)(struct strbuf*, void*), char *init_str, void *data)
{
struct strbuf buf = STRBUF_INIT;
strbuf_addstr(&buf, init_str);
check_uint(buf.len, ==, strlen(init_str));
f(&buf, data);
strbuf_release(&buf);
check_uint(buf.len, ==, 0);
check_uint(buf.alloc, ==, 0);
}
static int assert_sane_strbuf(struct strbuf *buf)
{
/* Initialized strbufs should always have a non-NULL buffer */
if (!check(!!buf->buf))
return 0;
/* Buffers should always be NUL-terminated */
if (!check_char(buf->buf[buf->len], ==, '\0'))
return 0;
/*
* Freshly-initialized strbufs may not have a dynamically allocated
* buffer
*/
if (buf->len == 0 && buf->alloc == 0)
return 1;
/* alloc must be at least one byte larger than len */
return check_uint(buf->len, <, buf->alloc);
}
static void t_static_init(void)
{
struct strbuf buf = STRBUF_INIT;
check_uint(buf.len, ==, 0);
check_uint(buf.alloc, ==, 0);
check_char(buf.buf[0], ==, '\0');
}
static void t_dynamic_init(void)
{
struct strbuf buf;
strbuf_init(&buf, 1024);
check(assert_sane_strbuf(&buf));
check_uint(buf.len, ==, 0);
check_uint(buf.alloc, >=, 1024);
check_char(buf.buf[0], ==, '\0');
strbuf_release(&buf);
}
static void t_addch(struct strbuf *buf, void *data)
{
const char *p_ch = data;
const char ch = *p_ch;
size_t orig_alloc = buf->alloc;
size_t orig_len = buf->len;
if (!check(assert_sane_strbuf(buf)))
return;
strbuf_addch(buf, ch);
if (!check(assert_sane_strbuf(buf)))
return;
if (!(check_uint(buf->len, ==, orig_len + 1) &&
check_uint(buf->alloc, >=, orig_alloc)))
return; /* avoid de-referencing buf->buf */
check_char(buf->buf[buf->len - 1], ==, ch);
check_char(buf->buf[buf->len], ==, '\0');
}
static void t_addstr(struct strbuf *buf, void *data)
{
const char *text = data;
size_t len = strlen(text);
size_t orig_alloc = buf->alloc;
size_t orig_len = buf->len;
if (!check(assert_sane_strbuf(buf)))
return;
strbuf_addstr(buf, text);
if (!check(assert_sane_strbuf(buf)))
return;
if (!(check_uint(buf->len, ==, orig_len + len) &&
check_uint(buf->alloc, >=, orig_alloc) &&
check_uint(buf->alloc, >, orig_len + len) &&
check_char(buf->buf[orig_len + len], ==, '\0')))
return;
check_str(buf->buf + orig_len, text);
}
int cmd_main(int argc, const char **argv)
{
if (!TEST(t_static_init(), "static initialization works"))
test_skip_all("STRBUF_INIT is broken");
TEST(t_dynamic_init(), "dynamic initialization works");
TEST(setup(t_addch, "a"), "strbuf_addch adds char");
TEST(setup(t_addch, ""), "strbuf_addch adds NUL char");
TEST(setup_populated(t_addch, "initial value", "a"),
"strbuf_addch appends to initial value");
TEST(setup(t_addstr, "hello there"), "strbuf_addstr adds string");
TEST(setup_populated(t_addstr, "initial value", "hello there"),
"strbuf_addstr appends string to initial value");
return test_done();
}

t/unit-tests/test-lib.c (new file, 330 lines)

@@ -0,0 +1,330 @@
#include "test-lib.h"
enum result {
RESULT_NONE,
RESULT_FAILURE,
RESULT_SKIP,
RESULT_SUCCESS,
RESULT_TODO
};
static struct {
enum result result;
int count;
unsigned failed :1;
unsigned lazy_plan :1;
unsigned running :1;
unsigned skip_all :1;
unsigned todo :1;
} ctx = {
.lazy_plan = 1,
.result = RESULT_NONE,
};
static void msg_with_prefix(const char *prefix, const char *format, va_list ap)
{
fflush(stderr);
if (prefix)
fprintf(stdout, "%s", prefix);
vprintf(format, ap); /* TODO: handle newlines */
putc('\n', stdout);
fflush(stdout);
}
void test_msg(const char *format, ...)
{
va_list ap;
va_start(ap, format);
msg_with_prefix("# ", format, ap);
va_end(ap);
}
void test_plan(int count)
{
assert(!ctx.running);
fflush(stderr);
printf("1..%d\n", count);
fflush(stdout);
ctx.lazy_plan = 0;
}
int test_done(void)
{
assert(!ctx.running);
if (ctx.lazy_plan)
test_plan(ctx.count);
return ctx.failed;
}
void test_skip(const char *format, ...)
{
va_list ap;
assert(ctx.running);
ctx.result = RESULT_SKIP;
va_start(ap, format);
if (format)
msg_with_prefix("# skipping test - ", format, ap);
va_end(ap);
}
void test_skip_all(const char *format, ...)
{
va_list ap;
const char *prefix;
if (!ctx.count && ctx.lazy_plan) {
/* We have not printed a test plan yet */
prefix = "1..0 # SKIP ";
ctx.lazy_plan = 0;
} else {
/* We have already printed a test plan */
prefix = "Bail out! # ";
ctx.failed = 1;
}
ctx.skip_all = 1;
ctx.result = RESULT_SKIP;
va_start(ap, format);
msg_with_prefix(prefix, format, ap);
va_end(ap);
}
int test__run_begin(void)
{
assert(!ctx.running);
ctx.count++;
ctx.result = RESULT_NONE;
ctx.running = 1;
return ctx.skip_all;
}
static void print_description(const char *format, va_list ap)
{
if (format) {
fputs(" - ", stdout);
vprintf(format, ap);
}
}
int test__run_end(int was_run UNUSED, const char *location, const char *format, ...)
{
va_list ap;
assert(ctx.running);
assert(!ctx.todo);
fflush(stderr);
va_start(ap, format);
if (!ctx.skip_all) {
switch (ctx.result) {
case RESULT_SUCCESS:
printf("ok %d", ctx.count);
print_description(format, ap);
break;
case RESULT_FAILURE:
printf("not ok %d", ctx.count);
print_description(format, ap);
break;
case RESULT_TODO:
printf("not ok %d", ctx.count);
print_description(format, ap);
printf(" # TODO");
break;
case RESULT_SKIP:
printf("ok %d", ctx.count);
print_description(format, ap);
printf(" # SKIP");
break;
case RESULT_NONE:
test_msg("BUG: test has no checks at %s", location);
printf("not ok %d", ctx.count);
print_description(format, ap);
ctx.result = RESULT_FAILURE;
break;
}
}
va_end(ap);
ctx.running = 0;
if (ctx.skip_all)
return 1;
putc('\n', stdout);
fflush(stdout);
ctx.failed |= ctx.result == RESULT_FAILURE;
return ctx.result != RESULT_FAILURE;
}
static void test_fail(void)
{
assert(ctx.result != RESULT_SKIP);
ctx.result = RESULT_FAILURE;
}
static void test_pass(void)
{
assert(ctx.result != RESULT_SKIP);
if (ctx.result == RESULT_NONE)
ctx.result = RESULT_SUCCESS;
}
static void test_todo(void)
{
assert(ctx.result != RESULT_SKIP);
if (ctx.result != RESULT_FAILURE)
ctx.result = RESULT_TODO;
}
int test_assert(const char *location, const char *check, int ok)
{
assert(ctx.running);
if (ctx.result == RESULT_SKIP) {
test_msg("skipping check '%s' at %s", check, location);
return 1;
}
if (!ctx.todo) {
if (ok) {
test_pass();
} else {
test_msg("check \"%s\" failed at %s", check, location);
test_fail();
}
}
return !!ok;
}
void test__todo_begin(void)
{
assert(ctx.running);
assert(!ctx.todo);
ctx.todo = 1;
}
int test__todo_end(const char *location, const char *check, int res)
{
assert(ctx.running);
assert(ctx.todo);
ctx.todo = 0;
if (ctx.result == RESULT_SKIP)
return 1;
if (res) {
test_msg("todo check '%s' succeeded at %s", check, location);
test_fail();
} else {
test_todo();
}
return !res;
}
int check_bool_loc(const char *loc, const char *check, int ok)
{
return test_assert(loc, check, ok);
}
union test__tmp test__tmp[2];
int check_int_loc(const char *loc, const char *check, int ok,
intmax_t a, intmax_t b)
{
int ret = test_assert(loc, check, ok);
if (!ret) {
test_msg(" left: %"PRIdMAX, a);
test_msg(" right: %"PRIdMAX, b);
}
return ret;
}
int check_uint_loc(const char *loc, const char *check, int ok,
uintmax_t a, uintmax_t b)
{
int ret = test_assert(loc, check, ok);
if (!ret) {
test_msg(" left: %"PRIuMAX, a);
test_msg(" right: %"PRIuMAX, b);
}
return ret;
}
static void print_one_char(char ch, char quote)
{
if ((unsigned char)ch < 0x20u || ch == 0x7f) {
/* TODO: improve handling of \a, \b, \f ... */
printf("\\%03o", (unsigned char)ch);
} else {
if (ch == '\\' || ch == quote)
putc('\\', stdout);
putc(ch, stdout);
}
}
static void print_char(const char *prefix, char ch)
{
printf("# %s: '", prefix);
print_one_char(ch, '\'');
fputs("'\n", stdout);
}
int check_char_loc(const char *loc, const char *check, int ok, char a, char b)
{
int ret = test_assert(loc, check, ok);
if (!ret) {
fflush(stderr);
print_char(" left", a);
print_char(" right", b);
fflush(stdout);
}
return ret;
}
static void print_str(const char *prefix, const char *str)
{
printf("# %s: ", prefix);
if (!str) {
fputs("NULL\n", stdout);
} else {
putc('"', stdout);
while (*str)
print_one_char(*str++, '"');
fputs("\"\n", stdout);
}
}
int check_str_loc(const char *loc, const char *check,
const char *a, const char *b)
{
int ok = (!a && !b) || (a && b && !strcmp(a, b));
int ret = test_assert(loc, check, ok);
if (!ret) {
fflush(stderr);
print_str(" left", a);
print_str(" right", b);
fflush(stdout);
}
return ret;
}

t/unit-tests/test-lib.h (new file, 149 lines)

@@ -0,0 +1,149 @@
#ifndef TEST_LIB_H
#define TEST_LIB_H
#include "git-compat-util.h"
/*
* Run a test function, returns 1 if the test succeeds, 0 if it
* fails. If test_skip_all() has been called then the test will not be
* run. The description for each test should be unique. For example:
*
* TEST(test_something(arg1, arg2), "something %d %d", arg1, arg2)
*/
#define TEST(t, ...) \
test__run_end(test__run_begin() ? 0 : (t, 1), \
TEST_LOCATION(), __VA_ARGS__)
/*
* Print a test plan, should be called before any tests. If the number
* of tests is not known in advance test_done() will automatically
* print a plan at the end of the test program.
*/
void test_plan(int count);
/*
* test_done() must be called at the end of main(). It will print the
* plan if plan() was not called at the beginning of the test program
* and returns the exit code for the test program.
*/
int test_done(void);
/* Skip the current test. */
__attribute__((format (printf, 1, 2)))
void test_skip(const char *format, ...);
/* Skip all remaining tests. */
__attribute__((format (printf, 1, 2)))
void test_skip_all(const char *format, ...);
/* Print a diagnostic message to stdout. */
__attribute__((format (printf, 1, 2)))
void test_msg(const char *format, ...);
/*
* Test checks are built around test_assert(). checks return 1 on
* success, 0 on failure. If any check fails then the test will fail. To
* create a custom check define a function that wraps test_assert() and
* a macro to wrap that function to provide a source location and
* stringified arguments. Custom checks that take pointer arguments
* should be careful to check that they are non-NULL before
* dereferencing them. For example:
*
* static int check_oid_loc(const char *loc, const char *check,
* struct object_id *a, struct object_id *b)
* {
* int res = test_assert(loc, check, a && b && oideq(a, b));
*
* if (!res) {
* test_msg("   left: %s", a ? oid_to_hex(a) : "NULL");
* test_msg("  right: %s", b ? oid_to_hex(b) : "NULL");
*
* }
* return res;
* }
*
* #define check_oid(a, b) \
* check_oid_loc(TEST_LOCATION(), "oideq("#a", "#b")", a, b)
*/
int test_assert(const char *location, const char *check, int ok);
/* Helper macro to pass the location to checks */
#define TEST_LOCATION() TEST__MAKE_LOCATION(__LINE__)
/* Check a boolean condition. */
#define check(x) \
check_bool_loc(TEST_LOCATION(), #x, x)
int check_bool_loc(const char *loc, const char *check, int ok);
/*
* Compare two integers. Prints a message with the two values if the
* comparison fails. NB this is not thread safe.
*/
#define check_int(a, op, b) \
(test__tmp[0].i = (a), test__tmp[1].i = (b), \
check_int_loc(TEST_LOCATION(), #a" "#op" "#b, \
test__tmp[0].i op test__tmp[1].i, \
test__tmp[0].i, test__tmp[1].i))
int check_int_loc(const char *loc, const char *check, int ok,
intmax_t a, intmax_t b);
/*
* Compare two unsigned integers. Prints a message with the two values
* if the comparison fails. NB this is not thread safe.
*/
#define check_uint(a, op, b) \
(test__tmp[0].u = (a), test__tmp[1].u = (b), \
check_uint_loc(TEST_LOCATION(), #a" "#op" "#b, \
test__tmp[0].u op test__tmp[1].u, \
test__tmp[0].u, test__tmp[1].u))
int check_uint_loc(const char *loc, const char *check, int ok,
uintmax_t a, uintmax_t b);
/*
* Compare two chars. Prints a message with the two values if the
* comparison fails. NB this is not thread safe.
*/
#define check_char(a, op, b) \
(test__tmp[0].c = (a), test__tmp[1].c = (b), \
check_char_loc(TEST_LOCATION(), #a" "#op" "#b, \
test__tmp[0].c op test__tmp[1].c, \
test__tmp[0].c, test__tmp[1].c))
int check_char_loc(const char *loc, const char *check, int ok,
char a, char b);
/* Check whether two strings are equal. */
#define check_str(a, b) \
check_str_loc(TEST_LOCATION(), "!strcmp("#a", "#b")", a, b)
int check_str_loc(const char *loc, const char *check,
const char *a, const char *b);
/*
* Wrap a check that is known to fail. If the check succeeds then the
* test will fail. Returns 1 if the check fails, 0 if it
* succeeds. For example:
*
* TEST_TODO(check(0));
*/
#define TEST_TODO(check) \
(test__todo_begin(), test__todo_end(TEST_LOCATION(), #check, check))
/* Private helpers */
#define TEST__STR(x) #x
#define TEST__MAKE_LOCATION(line) __FILE__ ":" TEST__STR(line)
union test__tmp {
intmax_t i;
uintmax_t u;
char c;
};
extern union test__tmp test__tmp[2];
int test__run_begin(void);
__attribute__((format (printf, 3, 4)))
int test__run_end(int, const char *, const char *, ...);
void test__todo_begin(void);
int test__todo_end(const char *, const char *, int);
#endif /* TEST_LIB_H */