#ifndef TEST_TOOL_H
#define TEST_TOOL_H
#include "git-compat-util.h"
int cmd__advise_if_enabled(int argc, const char **argv);
int cmd__bitmap(int argc, const char **argv);
int cmd__bloom(int argc, const char **argv);
int cmd__bundle_uri(int argc, const char **argv);
cache-tree: add perf test comparing update and prime
Add a performance test comparing the execution times of 'prime_cache_tree()'
and 'cache_tree_update(_, WRITE_TREE_SILENT | WRITE_TREE_REPAIR)'. The goal
of comparing these two is to identify which is the faster method for
rebuilding an invalid cache tree, ultimately to remove one when both are
(redundantly) called in immediate succession.
Both methods are fast, so the new tests in 'p0090-cache-tree.sh' must call
each tested function multiple times to ensure the reported times (to 0.01s
resolution) convey the differences between them.
The tests compare the timing of a 'test-tool cache-tree' run as a no-op (to
capture a baseline for the overhead associated with running the tool),
'cache_tree_update()', and 'prime_cache_tree()' on four scenarios:
- A completely valid cache tree
- A cache tree with 2 invalid paths
- A cache tree with 50 invalid paths
- A completely empty cache tree
Example results:
Test                                        this tree
--------------------------------------------------------------
0090.2: no-op, clean                        1.27(0.48+0.52)
0090.3: prime_cache_tree, clean             2.02(0.83+0.85)
0090.4: cache_tree_update, clean            1.30(0.49+0.54)
0090.5: no-op, invalidate 2                 1.29(0.48+0.54)
0090.6: prime_cache_tree, invalidate 2      1.98(0.81+0.83)
0090.7: cache_tree_update, invalidate 2     2.12(0.94+0.86)
0090.8: no-op, invalidate 50                1.32(0.50+0.55)
0090.9: prime_cache_tree, invalidate 50     2.10(0.86+0.89)
0090.10: cache_tree_update, invalidate 50   2.35(1.14+0.90)
0090.11: no-op, empty                       1.33(0.50+0.54)
0090.12: prime_cache_tree, empty            2.04(0.84+0.87)
0090.13: cache_tree_update, empty           2.51(1.27+0.92)
These timings show that, while 'cache_tree_update()' is faster when the
cache tree is completely valid, it is equal to or slower than
'prime_cache_tree()' when there are any invalid paths. Since the redundant
calls are mostly in scenarios where the cache tree will be at least
partially invalid (e.g., 'git reset --hard'), 'prime_cache_tree()' will
likely perform better than 'cache_tree_update()' in typical cases.
Helped-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Victoria Dye <vdye@github.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
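
For reference, the two rebuild strategies being timed boil down to the
following calls. This is a condensed sketch rather than the test-tool
code itself; the signatures follow cache-tree.h as of this change and
should be treated as illustrative.

    #include "cache.h"
    #include "cache-tree.h"

    /* Rebuild by recomputing tree objects from the index contents. */
    static void rebuild_via_update(struct index_state *istate)
    {
        cache_tree_update(istate, WRITE_TREE_SILENT | WRITE_TREE_REPAIR);
    }

    /* Rebuild by priming the cache tree from an already-known tree. */
    static void rebuild_via_prime(struct repository *r,
                                  struct index_state *istate,
                                  struct tree *root)
    {
        prime_cache_tree(r, istate, root);
    }
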
int cmd__cache_tree(int argc, const char **argv);
int cmd__chmtime(int argc, const char **argv);
int cmd__config(int argc, const char **argv);
int cmd__crontab(int argc, const char **argv);
wrapper: add a helper to generate numbers from a CSPRNG
There are many situations in which having access to a cryptographically
secure pseudorandom number generator (CSPRNG) is helpful. In the
future, we'll encounter one of these when dealing with temporary files.
To make this possible, let's add a function which reads from a system
CSPRNG and returns some bytes.
We know that all systems will have such an interface. A CSPRNG is
required for a secure TLS or SSH implementation and a Git implementation
which provided neither would be of little practical use. In addition,
POSIX is set to standardize getentropy(2) in the next version, so in the
(potentially distant) future we can rely on that.
For systems which lack one of the other interfaces, we provide the
ability to use OpenSSL's CSPRNG. OpenSSL is highly portable and
functions on practically every known OS, and we know it will have access
to some source of cryptographically secure randomness. We also provide
support for arc4random from libbsd for folks who would prefer to use
that.
Because this is a security-sensitive interface, we take some
precautions. We either succeed by filling the buffer completely as we
requested, or we fail. We don't return partial data because the caller
will almost never find that to be a useful behavior.
Specify a makefile knob which users can use to specify one or more
suitable CSPRNGs, and turn the multiple string options into a set of
defines, since we cannot match on strings in the preprocessor. We allow
multiple options to make the job of handling this in autoconf easier.
The order of options is important here. On systems with arc4random,
which is most of the BSDs, we use that, since, except on MirBSD and
macOS, it uses ChaCha20, which is extremely fast, and sits entirely in
userspace, avoiding a system call. We then prefer getrandom over
getentropy, because the former has been available longer on Linux, and
then OpenSSL. Finally, if none of those are available, we use
/dev/urandom, because most Unix-like operating systems provide that API.
We prefer options that don't involve device files when possible because
those work in some restricted environments where device files may not be
available.
Set the configuration variables appropriately for Linux and the BSDs,
including macOS, as well as Windows and NonStop. We specifically only
consider versions which receive publicly available security support
here. For the same reason, we don't specify getrandom(2) on Linux,
because CentOS 7 doesn't support it in glibc (although its kernel does)
and we don't want to resort to making syscalls.
Finally, add a test helper to allow this to be tested by hand and in
tests. We don't add any tests, since invoking the CSPRNG is not likely
to produce interesting, reproducible results.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
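
To illustrate the fallback order described above, a stripped-down
sketch of such a wrapper could look like the following; the HAVE_*
macro names and the csprng_bytes() name are assumptions made for this
illustration, not necessarily what wrapper.c ends up using.

    /*
     * Fill "buf" with "len" cryptographically secure random bytes, or
     * fail without returning partial data.  Depending on the branch this
     * needs <stdlib.h>, <sys/random.h>, <openssl/rand.h> or <stdio.h>.
     */
    int csprng_bytes(void *buf, size_t len)
    {
    #if defined(HAVE_ARC4RANDOM)
        arc4random_buf(buf, len);              /* cannot fail */
        return 0;
    #elif defined(HAVE_GETRANDOM)
        unsigned char *p = buf;
        while (len) {
            ssize_t got = getrandom(p, len, 0);
            if (got < 0)
                return -1;                     /* all or nothing */
            p += got;
            len -= got;
        }
        return 0;
    #elif defined(HAVE_OPENSSL_CSPRNG)
        return RAND_bytes(buf, (int)len) == 1 ? 0 : -1;
    #else
        FILE *f = fopen("/dev/urandom", "rb"); /* device-file fallback */
        int ok = f && fread(buf, 1, len, f) == len;
        if (f)
            fclose(f);
        return ok ? 0 : -1;
    #endif
    }
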
int cmd__csprng(int argc, const char **argv);
int cmd__ctype(int argc, const char **argv);
int cmd__date(int argc, const char **argv);
int cmd__delta(int argc, const char **argv);
int cmd__dir_iterator(int argc, const char **argv);
int cmd__drop_caches(int argc, const char **argv);
int cmd__dump_cache_tree(int argc, const char **argv);
int cmd__dump_fsmonitor(int argc, const char **argv);
int cmd__dump_split_index(int argc, const char **argv);
int cmd__dump_untracked_cache(int argc, const char **argv);
int cmd__dump_reftable(int argc, const char **argv);
int cmd__env_helper(int argc, const char **argv);
int cmd__example_decorate(int argc, const char **argv);
int cmd__fast_rebase(int argc, const char **argv);
int cmd__fsmonitor_client(int argc, const char **argv);
int cmd__genrandom(int argc, const char **argv);
tests: teach the test-tool to generate NUL bytes and use it
In cc95bc2025 (t5562: replace /dev/zero with a pipe from
generate_zero_bytes, 2019-02-09), we replaced usage of /dev/zero (which
is not available on NonStop, apparently) by a Perl script snippet to
generate NUL bytes.
Sadly, it does not seem to work on NonStop, as t5562 reportedly hangs.
Worse, this also hangs in the Ubuntu 16.04 agents of the CI builds on
Azure Pipelines: for some reason, the Perl script snippet that is run
via `generate_zero_bytes` in t5562's 'CONTENT_LENGTH overflow ssize_t'
test case tries to write out an infinite amount of NUL bytes unless a
broken pipe is encountered; that snippet never encounters the broken
pipe, and keeps going until the build times out.
Oddly enough, this does not reproduce on the Windows and macOS agents,
nor in a local Ubuntu 18.04.
This developer tried for a day to figure out the exact circumstances
under which this hang happens, to no avail; the details remain a
mystery.
In the end, though, what counts is that this here change incidentally
fixes that hang (maybe also on NonStop?). Even more positively, it gets
rid of yet another unnecessary Perl invocation.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
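
The helper itself can be as simple as a loop that streams zeroed
buffers to stdout until the requested amount has been written (or until
the consumer closes the pipe); a rough sketch, not necessarily matching
the actual cmd__genzeros() implementation:

    #include <stdio.h>
    #include <string.h>

    /* Write "count" NUL bytes to stdout; a negative count means "forever". */
    static int write_zeros(long count)
    {
        char buf[8192];

        memset(buf, 0, sizeof(buf));
        while (count) {
            size_t chunk = (count < 0 || count > (long)sizeof(buf))
                           ? sizeof(buf) : (size_t)count;
            if (fwrite(buf, 1, chunk, stdout) != chunk)
                return -1;        /* e.g. the reader closed the pipe */
            if (count > 0)
                count -= (long)chunk;
        }
        return 0;
    }
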
int cmd__genzeros(int argc, const char **argv);
int cmd__getcwd(int argc, const char **argv);
int cmd__hashmap(int argc, const char **argv);
int cmd__hash_speed(int argc, const char **argv);
int cmd__hexdump(int argc, const char **argv);
int cmd__index_version(int argc, const char **argv);
int cmd__json_writer(int argc, const char **argv);
int cmd__lazy_init_name_hash(int argc, const char **argv);
int cmd__match_trees(int argc, const char **argv);
int cmd__mergesort(int argc, const char **argv);
int cmd__mktemp(int argc, const char **argv);
int cmd__oidmap(int argc, const char **argv);
int cmd__oidtree(int argc, const char **argv);
int cmd__online_cpus(int argc, const char **argv);
int cmd__pack_mtimes(int argc, const char **argv);
int cmd__parse_options(int argc, const char **argv);
int cmd__parse_options_flags(int argc, const char **argv);
int cmd__parse_pathspec_file(int argc, const char **argv);
parse-options: add support for parsing subcommands
Several Git commands have subcommands to implement mutually exclusive
"operation modes", and they usually parse their subcommand argument
with a bunch of if-else if statements.
Teach parse-options to handle subcommands as well, which will result
in shorter and simpler code with consistent error handling and error
messages on unknown or missing subcommand, and it will also make
possible for our Bash completion script to handle subcommands
programmatically.
The approach is guided by the following observations:
- Most subcommands [1] are implemented in dedicated functions, and
most of those functions [2] either have a signature matching the
'int cmd_foo(int argc, const char **argv, const char *prefix)'
signature of builtin commands or can be trivially converted to
that signature, because they miss only that last prefix parameter
or have no parameters at all.
- Subcommand arguments only have long form, and they have no double
dash prefix, no negated form, and no description, and they don't
take any arguments, and can't be abbreviated.
- There must be exactly one subcommand among the arguments, or zero
if the command has a default operation mode.
- All arguments following the subcommand are considered to be
arguments of the subcommand, and, conversely, arguments meant for
the subcommand may not precede the subcommand.
So in the end subcommand declaration and parsing would look something
like this:

    parse_opt_subcommand_fn *fn = NULL;
    struct option builtin_commit_graph_options[] = {
        OPT_STRING(0, "object-dir", &opts.obj_dir, N_("dir"),
                   N_("the object directory to store the graph")),
        OPT_SUBCOMMAND("verify", &fn, graph_verify),
        OPT_SUBCOMMAND("write", &fn, graph_write),
        OPT_END(),
    };

    argc = parse_options(argc, argv, prefix, builtin_commit_graph_options,
                         builtin_commit_graph_usage, 0);
    return fn(argc, argv, prefix);
Here each OPT_SUBCOMMAND specifies the name of the subcommand and the
function implementing it, and the address of the same 'fn' subcommand
function pointer. parse_options() then processes the arguments until
it finds the first argument matching one of the subcommands, sets 'fn'
to the function associated with that subcommand, and returns, leaving
the rest of the arguments unprocessed. If none of the listed
subcommands is found among the arguments, parse_options() will show
usage and abort.
If a command has a default operation mode, 'fn' should be initialized
to the function implementing that mode, and parse_options() should be
invoked with the PARSE_OPT_SUBCOMMAND_OPTIONAL flag. In this case
parse_options() won't error out when not finding any subcommands, but
will return leaving 'fn' unchanged. Note that if that default
operation mode has any --options, then the PARSE_OPT_KEEP_UNKNOWN_OPT
flag is necessary as well (otherwise parse_options() would error out
upon seeing the unknown option meant for the default operation mode).
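
For a command with a default operation mode, the setup could look
roughly like this; 'frotz' and its subcommand functions are made up
purely for illustration:

    static int frotz_list(int argc, const char **argv, const char *prefix);
    static int frotz_add(int argc, const char **argv, const char *prefix);

    static const char * const frotz_usage[] = {
        N_("git frotz [list | add] [<options>]"),
        NULL
    };

    int cmd_frotz(int argc, const char **argv, const char *prefix)
    {
        parse_opt_subcommand_fn *fn = frotz_list;    /* default mode */
        struct option options[] = {
            OPT_SUBCOMMAND("list", &fn, frotz_list),
            OPT_SUBCOMMAND("add", &fn, frotz_add),
            OPT_END(),
        };

        argc = parse_options(argc, argv, prefix, options, frotz_usage,
                             PARSE_OPT_SUBCOMMAND_OPTIONAL);
        return fn(argc, argv, prefix);
    }
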
Some thoughts about the implementation:
- The same pointer to 'fn' must be specified as 'value' for each
OPT_SUBCOMMAND, because there can be only one set of mutually
exclusive subcommands; parse_options() will BUG() otherwise.
There are other ways to tell parse_options() where to put the
function associated with the subcommand given on the command line,
but I didn't like them:
- Change parse_options()'s signature by adding a pointer to
subcommand function to be set to the function associated with
the given subcommand, affecting all callsites, even those that
don't have subcommands.
- Introduce a specific parse_options_and_subcommand() variant
with that extra function parameter.
- I decided against automatically calling the subcommand function
from within parse_options(), because:
- There are commands that have to perform additional actions
after option parsing but before calling the function
implementing the specified subcommand.
- The return code of the subcommand is usually the return code
of the git command, but preserving the return code of the
automatically called subcommand function would have made the
API awkward.
- Also add a OPT_SUBCOMMAND_F() variant to allow specifying an
option flag: we have two subcommands that are purposefully
excluded from completion ('git remote rm' and 'git stash save'),
so they'll have to be specified with the PARSE_OPT_NOCOMPLETE
flag.
- Some of the 'parse_opt_flags' don't make sense with subcommands,
and using them is probably just an oversight or misunderstanding.
Therefore parse_options() will BUG() when invoked with any of the
following flags while the options array contains at least one
OPT_SUBCOMMAND:
- PARSE_OPT_KEEP_DASHDASH: parse_options() stops parsing
arguments when encountering a "--" argument, so it doesn't
make sense to expect and keep one before a subcommand, because
it would prevent the parsing of the subcommand.
However, this flag is allowed in combination with the
PARSE_OPT_SUBCOMMAND_OPTIONAL flag, because the double dash
might be meaningful for the command's default operation mode,
e.g. to disambiguate refs and pathspecs.
- PARSE_OPT_STOP_AT_NON_OPTION: As its name suggests, this flag
tells parse_options() to stop as soon as it encounters a
non-option argument, but subcommands are by definition not
options... so how could they be parsed, then?!
- PARSE_OPT_KEEP_UNKNOWN: This flag can be used to collect any
unknown --options and then pass them to a different command or
subsystem. Surely if a command has subcommands, then this
functionality should rather be delegated to one of those
subcommands, and not performed by the command itself.
However, this flag is allowed in combination with the
PARSE_OPT_SUBCOMMAND_OPTIONAL flag, making it possible to pass
--options to the default operation mode.
- If the command with subcommands has a default operation mode, then
all arguments to the command must precede the arguments of the
subcommand.
AFAICT we don't have any commands where this makes a difference,
because in those commands either only the command accepts any
arguments ('notes' and 'remote'), or only the default subcommand
('reflog' and 'stash'), but never both.
- The 'argv' array passed to subcommand functions currently starts
with the name of the subcommand. Keep this behavior. AFAICT no
subcommand functions depend on the actual content of 'argv[0]',
but the parse_options() call handling their options expects that
the options start at argv[1].
- To support handling subcommands programmatically in our Bash
completion script, 'git cmd --git-completion-helper' will now list
both subcommands and regular --options, if any. This means that
the completion script will have to separate subcommands (i.e.
words without a double dash prefix) from --options on its own, but
that's rather easy to do, and it's not much work either, because
the number of subcommands a command might have is rather low, and
those commands accept only a single --option or none at all. An
alternative would be to introduce a separate option that lists
only subcommands, but then the completion script would need not
one but two git invocations and command substitutions for commands
with subcommands.
Note that this change doesn't affect the behavior of our Bash
completion script, because when completing the --option of a
command with subcommands, e.g. for 'git notes --<TAB>', then all
subcommands will be filtered out anyway, as none of them will
match the word to be completed starting with that double dash
prefix.
[1] Except 'git rerere', because many of its subcommands are
implemented in the bodies of the if-else if statements parsing the
command's subcommand argument.
[2] Except 'credential', 'credential-store' and 'fsmonitor--daemon',
because some of the functions implementing their subcommands take
special parameters.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
int cmd__parse_subcommand(int argc, const char **argv);
int cmd__partial_clone(int argc, const char **argv);
int cmd__path_utils(int argc, const char **argv);
grep/pcre2: better support invalid UTF-8 haystacks
Improve the support for invalid UTF-8 haystacks given a non-ASCII
needle when using the PCREv2 backend.
This is a more complete fix for a bug I started to fix in
870eea8166 (grep: do not enter PCRE2_UTF mode on fixed matching,
2019-07-26), now that PCREv2 has the PCRE2_MATCH_INVALID_UTF mode we
can make use of it.
This fixes the sort of case described in 8a5999838e (grep: stress test
PCRE v2 on invalid UTF-8 data, 2019-07-26), i.e.:
- The subject string is non-ASCII (e.g. "ævar")
- We're under a is_utf8_locale(), e.g. "en_US.UTF-8", not "C"
- We are using --ignore-case, or we're a non-fixed pattern
If those conditions were satisfied and we matched invalid UTF-8 data,
PCREv2 might bark on it; in practice this only happened under the JIT
backend (turned on by default on most platforms).
Ultimately this fixes a "regression" in b65abcafc7 ("grep: use PCRE v2
for optimized fixed-string search", 2019-07-01); I'm putting that in
scare-quotes because before then we wouldn't properly support these
complex case-folding, locale etc. cases either, it just broke in
different ways.
There was a bug related to this fixed in PCREv2 10.36; it can be
worked around by setting the PCRE2_NO_START_OPTIMIZE flag. Let's do
that in those cases, and add
tests for the bug.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
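
A minimal stand-alone illustration of the PCREv2 flags involved; this
is not Git's actual grep code, and the version fallback for older
PCREv2 releases is omitted:

    #define PCRE2_CODE_UNIT_WIDTH 8
    #include <pcre2.h>

    static pcre2_code *compile_utf_pattern(const char *pattern, int ignore_case)
    {
        int errcode;
        PCRE2_SIZE erroffset;
        /* accept haystacks that are not valid UTF-8 */
        uint32_t options = PCRE2_UTF | PCRE2_MATCH_INVALID_UTF;

        if (ignore_case)
            options |= PCRE2_CASELESS;
        /* work around the pre-10.36 bug mentioned above */
        options |= PCRE2_NO_START_OPTIMIZE;

        return pcre2_compile((PCRE2_SPTR)pattern, PCRE2_ZERO_TERMINATED,
                             options, &errcode, &erroffset, NULL);
    }
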
int cmd__pcre2_config(int argc, const char **argv);
int cmd__pkt_line(int argc, const char **argv);
int cmd__prio_queue(int argc, const char **argv);
receive-pack: add new proc-receive hook
Git calls an internal `execute_commands` function to handle commands
sent from client to `git-receive-pack`. Regardless of what references
the user pushes, git creates or updates the corresponding references if
the user has write-permission. A contributor who has no
write-permission cannot push to the repository directly. So, the
contributor has to write commits to an alternate location, and send a
pull request by email or by other means. We call this a distributed
workflow.
It would be more convenient to work in a centralized workflow like what
Gerrit provided for some cases. For example, a read-only user who
cannot push to a branch directly can run the following `git push`
command to push commits to a pseudo reference (with a prefix "refs/for/",
not "refs/heads/") to create a code review.

    git push origin \
        HEAD:refs/for/<branch-name>/<session>
The `<branch-name>` in the above example can be as simple as "master",
or a more complicated branch name like "foo/bar". The `<session>` in
the above example command can be the local branch name of the client
side, such as "my/topic".
We cannot implement a centralized workflow elegantly by using
"pre-receive" + "post-receive", because Git will call the internal
function "execute_commands" to create references (even the special
pseudo reference) between these two hooks. Even though we can delete
the temporarily created pseudo reference via the "post-receive" hook,
having a temporary reference is not safe for concurrent pushes.
So, add a filter and a new handler to support this kind of workflow.
The filter will check the prefix of the reference name, and if the
command has a special reference name, the filter will turn a specific
field (`run_proc_receive`) on for the command. Commands with this field
turned on will be executed by a new handler (a hook named
"proc-receive") instead of the internal `execute_commands` function.
We can use this "proc-receive" command to create pull requests or send
emails for code review.
Suggested by Junio, this "proc-receive" hook reads the commands,
push-options (optional), and sends results using a protocol in pkt-line
format. In the following example, the letter "S" stands for
"receive-pack" and letter "H" stands for the hook.
    # Version and features negotiation.
    S: PKT-LINE(version=1\0push-options atomic...)
    S: flush-pkt
    H: PKT-LINE(version=1\0push-options...)
    H: flush-pkt

    # Send commands from server to the hook.
    S: PKT-LINE(<old-oid> <new-oid> <ref>)
    S: ... ...
    S: flush-pkt

    # Send push-options only if the 'push-options' feature is enabled.
    S: PKT-LINE(push-option)
    S: ... ...
    S: flush-pkt

    # Receive result from the hook.
    # OK, run this command successfully.
    H: PKT-LINE(ok <ref>)
    # NO, I reject it.
    H: PKT-LINE(ng <ref> <reason>)
    # Fall through, let 'receive-pack' execute it.
    H: PKT-LINE(ok <ref>)
    H: PKT-LINE(option fall-through)
    # OK, but has an alternate reference.  The alternate reference name
    # and other status can be given in options.
    H: PKT-LINE(ok <ref>)
    H: PKT-LINE(option refname <refname>)
    H: PKT-LINE(option old-oid <old-oid>)
    H: PKT-LINE(option new-oid <new-oid>)
    H: PKT-LINE(option forced-update)
    H: ... ...
    H: flush-pkt
After receiving a command, the hook will execute the command, and may
create/update a different reference. For example, a command for a pseudo
reference "refs/for/master/topic" may create/update a different reference
such as "refs/pull/123/head". The alternate reference name and other
status are given in option lines.
The list of commands returned from "proc-receive" will replace the
relevant commands that are sent from user to "receive-pack", and
"receive-pack" will continue to run the "execute_commands" function and
other routines. Finally, the result of the execution of these commands
will be reported to end user.
The reporting function from "receive-pack" to "send-pack" will be
extended in a later commit just like what the "proc-receive" hook reports
to "receive-pack".
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
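
For readers unfamiliar with the pkt-line format used above: each
PKT-LINE() is the payload prefixed with the total length (payload plus
the four length digits) encoded as four lowercase hex digits, and a
flush-pkt is the literal "0000". A tiny sketch of that framing; Git's
real helpers live in pkt-line.c:

    #include <stdio.h>
    #include <string.h>

    /* Emit one pkt-line: 4-digit hex length (including itself) + payload. */
    static void write_pkt_line(FILE *out, const char *payload)
    {
        fprintf(out, "%04x%s", (unsigned int)(strlen(payload) + 4), payload);
    }

    /* A flush-pkt is the special length "0000" with no payload. */
    static void write_flush_pkt(FILE *out)
    {
        fputs("0000", out);
    }
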
int cmd__proc_receive(int argc, const char **argv);
Test the progress display
'progress.c' has seen a few fixes recently [1], and, unfortunately,
some of those fixes required further fixes [2]. It seems it's time to
have a few tests focusing on the subtleties of the progress display.
Add the 'test-tool progress' subcommand to help testing the progress
display, reading instructions from standard input and turning them
into calls to the display_progress() and display_throughput()
functions with the given parameters.
The progress display is, however, critically dependent on timing,
because it's only updated once every second or, if the total is known
in advance, every 1%, and there is the throughput rate as well. These
make the progress display far too nondeterministic for testing as-is.
To address this, add a few testing-specific variables and functions to
'progress.c', allowing the new test helper to:

  - Disable the triggered-every-second SIGALRM and set the
    'progress_update' flag explicitly based on the input instructions.
    This way the progress line will be updated deterministically when
    the test wants it to be updated.

  - Specify the time elapsed since start_progress() to make the
    throughput rate calculations deterministic.
Add the new test script 't0500-progress-display.sh' to check a few
simple cases with and without throughput, and that a shorter progress
line properly covers up the previously displayed line in different
situations.
[1] See commits 545dc345eb (progress: break too long progress bar
lines, 2019-04-12) and 9f1fd84e15 (progress: clear previous
progress update dynamically, 2019-04-12).
[2] 1aed1a5f25 (progress: avoid empty line when breaking the progress
line, 2019-05-19)
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
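
The calls that the helper's input instructions are translated into are
the ordinary progress API; roughly like the following sketch, where the
signatures reflect progress.h as of this change and should be treated
as illustrative:

    #include "progress.h"

    static void show_fake_progress(uint64_t total)
    {
        struct progress *progress = start_progress("Working hard", total);
        uint64_t i;

        for (i = 1; i <= total; i++) {
            display_throughput(progress, i * 1024);  /* bytes so far */
            display_progress(progress, i);           /* items done */
        }
        stop_progress(&progress);
    }
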
int cmd__progress(int argc, const char **argv);
int cmd__reach(int argc, const char **argv);
int cmd__read_cache(int argc, const char **argv);
int cmd__read_graph(int argc, const char **argv);
int cmd__read_midx(int argc, const char **argv);
int cmd__ref_store(int argc, const char **argv);
int cmd__rot13_filter(int argc, const char **argv);
int cmd__reftable(int argc, const char **argv);
int cmd__regex(int argc, const char **argv);
int cmd__repository(int argc, const char **argv);
int cmd__revision_walking(int argc, const char **argv);
int cmd__run_command(int argc, const char **argv);
int cmd__scrap_cache_tree(int argc, const char **argv);
int cmd__serve_v2(int argc, const char **argv);
int cmd__sha1(int argc, const char **argv);
Makefile & test-tool: replace "DC_SHA1" variable with a "define"
Address the root cause of technical debt we've been carrying since
sha1collisiondetection was made the default in [1]. In a preceding
commit we narrowly fixed a bug where the "DC_SHA1" variable would be
unset (in combination with "NO_APPLE_COMMON_CRYPTO=" on OSX), even
though we had the sha1collisiondetection library enabled.
But the only reason we needed to have such a user-exposed knob went
away with [1], and it's been doing nothing useful since then. We don't
care if you define DC_SHA1=*, we only care that you don't ask for any
other SHA-1 implementation. If it turns out that you didn't, we'll use
sha1collisiondetection, whether you had "DC_SHA1" set or not.
As a result of this being confusing we had e.g. [2] for cmake and the
recent [3] for ci/lib.sh setting "DC_SHA1" explicitly, even though
this was always a NOOP.
A much simpler way to do this is to stop having the Makefile and
CMakeLists.txt set "DC_SHA1" to be picked up by test-lib.sh; let's
instead add a trivial "test-tool sha1-is-sha1dc". It returns zero if
we're using sha1collisiondetection, non-zero otherwise.
1. e6b07da2780 (Makefile: make DC_SHA1 the default, 2017-03-17)
2. c4b2f41b5f5 (cmake: support for testing git with ctest, 2020-06-26)
3. 1ad5c3df35a (ci: use DC_SHA1=YesPlease on osx-clang job for CI,
2022-10-20)
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
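
The helper can be essentially a one-liner; a sketch of the idea, where
the exact preprocessor symbol identifying sha1collisiondetection is an
assumption made for illustration:

    int cmd__sha1_is_sha1dc(int argc, const char **argv)
    {
    #ifdef SHA1_DC  /* assumed name of the sha1collisiondetection define */
        return 0;
    #endif
        return 1;
    }
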
int cmd__sha1_is_sha1dc(int argc, const char **argv);
int cmd__oid_array(int argc, const char **argv);
int cmd__sha256(int argc, const char **argv);
int cmd__sigchain(int argc, const char **argv);
int cmd__simple_ipc(int argc, const char **argv);
int cmd__strcmp_offset(int argc, const char **argv);
int cmd__string_list(int argc, const char **argv);
int cmd__submodule(int argc, const char **argv);
int cmd__submodule_config(int argc, const char **argv);
int cmd__submodule_nested_repo_config(int argc, const char **argv);
int cmd__subprocess(int argc, const char **argv);
int cmd__trace2(int argc, const char **argv);
int cmd__userdiff(int argc, const char **argv);
int cmd__urlmatch_normalization(int argc, const char **argv);
int cmd__xml_encode(int argc, const char **argv);
int cmd__wildmatch(int argc, const char **argv);
#ifdef GIT_WINDOWS_NATIVE
int cmd__windows_named_pipe(int argc, const char **argv);
#endif
int cmd__write_cache(int argc, const char **argv);
int cmd_hash_impl(int ac, const char **av, int algo);
#endif