CONFIGURATION FILE
------------------
The Git configuration file contains a number of variables that affect
the Git commands' behavior. The `.git/config` file in each repository
is used to store the configuration for that repository, and
`$HOME/.gitconfig` is used to store a per-user configuration as
fallback values for the `.git/config` file. The file `/etc/gitconfig`
can be used to store a system-wide default configuration.

The configuration variables are used by both the Git plumbing
and the porcelains. The variables are divided into sections, wherein
the fully qualified variable name of the variable itself is the last
dot-separated segment and the section name is everything before the last
dot. The variable names are case-insensitive, allow only alphanumeric
characters and `-`, and must start with an alphabetic character. Some
variables may appear multiple times; we say then that the variable is
multivalued.
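
For illustration, here is how you might inspect which file each
effective setting comes from, and write a value at each of the three
scopes mentioned above (the variable names and values are arbitrary
examples; `--show-origin` is available in recent versions of Git):

--------
$ git config --list --show-origin           # where does each value come from?
$ git config --system core.editor vim       # writes to /etc/gitconfig
$ git config --global user.name "A U Thor"  # writes to ~/.gitconfig
$ git config --local core.filemode false    # writes to .git/config
--------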
Syntax
~~~~~~
The syntax is fairly flexible and permissive; whitespace is mostly
ignored. The '#' and ';' characters begin comments that run to the end
of the line, and blank lines are ignored.
The file consists of sections and variables. A section begins with
the name of the section in square brackets and continues until the next
section begins. Section names are case-insensitive. Only alphanumeric
characters, `-` and `.` are allowed in section names. Each variable
must belong to some section, which means that there must be a section
header before the first setting of a variable.
Sections can be further divided into subsections. To begin a subsection
put its name in double quotes, separated by space from the section name,
in the section header, like in the example below:
--------
[section "subsection"]
--------
Subsection names are case sensitive and can contain any characters except
newline and the null byte. Doublequote `"` and backslash can be included
by escaping them as `\"` and `\\`, respectively. Backslashes preceding
other characters are dropped when reading; for example, `\t` is read as
`t` and `\0` is read as `0`. Section headers cannot span multiple lines.
Variables may belong directly to a section or to a given subsection. You
can have `[section]` if you have `[section "subsection"]`, but you don't
need to.
There is also a deprecated `[section.subsection]` syntax. With this
syntax, the subsection name is converted to lower-case and is also
compared case sensitively. These subsection names follow the same
restrictions as section names.
All the other lines (and the remainder of the line after the section
header) are recognized as setting variables, in the form
'name = value' (or just 'name', which is a short-hand to say that
the variable is the boolean "true").
The variable names are case-insensitive, allow only alphanumeric characters
and `-`, and must start with an alphabetic character.
A line that defines a value can be continued to the next line by
ending it with a backslash (`\`); the backslash and the end-of-line are
stripped. Leading whitespaces after 'name =', the remainder of the
line after the first comment character '#' or ';', and trailing
whitespaces of the line are discarded unless they are enclosed in
double quotes. Internal whitespaces within the value are retained
verbatim.
Inside double quotes, double quote `"` and backslash `\` characters
must be escaped: use `\"` for `"` and `\\` for `\`.
The following escape sequences (beside `\"` and `\\`) are recognized:
`\n` for newline character (NL), `\t` for horizontal tabulation (HT, TAB)
and `\b` for backspace (BS). Other char escape sequences (including octal
escape sequences) are invalid.
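
For example, the following snippet illustrates comments, line
continuation, quoting, and escape sequences in values (the section and
variable names are arbitrary and chosen purely for illustration):

--------
[foo]
	# the value continues on the next line; the backslash and the
	# end-of-line are stripped when reading
	bar = one \
two
	; inside double quotes, spaces are kept verbatim, and \t / \n
	; stand for TAB and newline
	baz = "a\tb\nc  "
--------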
Includes
~~~~~~~~
The `include` and `includeIf` sections allow you to include config
directives from another source. These sections behave identically to
each other with the exception that `includeIf` sections may be ignored
if their condition does not evaluate to true; see "Conditional includes"
below.
You can include a config file from another by setting the special
`include.path` (or `includeIf.*.path`) variable to the name of the file
to be included. The variable takes a pathname as its value, and is
subject to tilde expansion. These variables can be given multiple times.
The contents of the included file are inserted immediately, as if they
had been found at the location of the include directive. If the value of the
variable is a relative path, the path is considered to
be relative to the configuration file in which the include directive
was found. See below for examples.
Conditional includes
~~~~~~~~~~~~~~~~~~~~
You can include a config file from another conditionally by setting an
`includeIf.<condition>.path` variable to the name of the file to be
included.
The condition starts with a keyword followed by a colon and some data
whose format and meaning depend on the keyword. Supported keywords
are:
`gitdir`::
The data that follows the keyword `gitdir:` is used as a glob
pattern. If the location of the .git directory matches the
pattern, the include condition is met.
+
The .git location may be auto-discovered, or come from the `$GIT_DIR`
environment variable. If the repository is auto-discovered via a .git
file (e.g. from submodules, or a linked worktree), the .git location
would be the final location where the .git directory is, not where the
.git file is.
+
The pattern can contain standard globbing wildcards and two additional
ones, `**/` and `/**`, that can match multiple path components. Please
refer to linkgit:gitignore[5] for details. For convenience:
* If the pattern starts with `~/`, `~` will be substituted with the
content of the environment variable `HOME`.
* If the pattern starts with `./`, it is replaced with the directory
containing the current config file.
* If the pattern does not start with either `~/`, `./` or `/`, `**/`
will be automatically prepended. For example, the pattern `foo/bar`
becomes `**/foo/bar` and would match `/any/path/to/foo/bar`.
* If the pattern ends with `/`, `**` will be automatically added. For
example, the pattern `foo/` becomes `foo/**`. In other words, it
matches "foo" and everything inside, recursively.
`gitdir/i`::
	This is the same as `gitdir` except that matching is done
	case-insensitively (e.g. on case-insensitive file systems).
A few more notes on matching via `gitdir` and `gitdir/i`:
* Symlinks in `$GIT_DIR` are not resolved before matching.
* Both the symlink & realpath versions of paths will be matched
outside of `$GIT_DIR`. E.g. if ~/git is a symlink to
/mnt/storage/git, both `gitdir:~/git` and `gitdir:/mnt/storage/git`
will match.
+
This was not the case in the initial release of this feature in
v2.13.0, which only matched the realpath version. Configuration that
wants to be compatible with the initial release of this feature needs
to either specify only the realpath version, or both versions.
* Note that "../" is not special and will match literally, which is
  unlikely to be what you want.
Example
~~~~~~~
	# Core variables
	[core]
		; Don't trust file modes
		filemode = false

	# Our diff algorithm
	[diff]
		external = /usr/local/bin/diff-wrapper
		renames = true

	[branch "devel"]
		remote = origin
		merge = refs/heads/devel

	# Proxy settings
	[core]
		gitProxy="ssh" for "kernel.org"
		gitProxy=default-proxy ; for the rest

	[include]
		path = /path/to/foo.inc ; include by absolute path
		path = foo.inc ; find "foo.inc" relative to the current file
		path = ~/foo.inc ; find "foo.inc" in your `$HOME` directory

	; include if $GIT_DIR is /path/to/foo/.git
	[includeIf "gitdir:/path/to/foo/.git"]
		path = /path/to/foo.inc

	; include for all repositories inside /path/to/group
	[includeIf "gitdir:/path/to/group/"]
		path = /path/to/foo.inc

	; include for all repositories inside $HOME/to/group
	[includeIf "gitdir:~/to/group/"]
		path = /path/to/foo.inc

	; relative paths are always relative to the including
	; file (if the condition is true); their location is not
	; affected by the condition
	[includeIf "gitdir:/path/to/group/"]
		path = foo.inc
Values
~~~~~~
Values of many variables are treated as a simple string, but there
are variables that take values of specific types and there are rules
as to how to spell them.
boolean::
When a variable is said to take a boolean value, many
synonyms are accepted for 'true' and 'false'; these are all
case-insensitive.
true;; Boolean true literals are `yes`, `on`, `true`,
and `1`. Also, a variable defined without `= <value>`
is taken as true.
false;; Boolean false literals are `no`, `off`, `false`,
`0` and the empty string.
+
When converting a value to the canonical form using the `--bool` type
specifier, 'git config' will ensure that the output is "true" or
"false" (spelled in lowercase).
integer::
The value for many variables that specify various sizes can
be suffixed with `k`, `M`,... to mean "scale the number by
1024", "by 1024x1024", etc.
color::
The value for a variable that takes a color is a list of
colors (at most two, one for foreground and one for background)
and attributes (as many as you want), separated by spaces.
+
The basic colors accepted are `normal`, `black`, `red`, `green`, `yellow`,
`blue`, `magenta`, `cyan` and `white`. The first color given is the
foreground; the second is the background.
+
Colors may also be given as numbers between 0 and 255; these use ANSI
256-color mode (but note that not all terminals may support this). If
your terminal supports it, you may also specify 24-bit RGB values as
hex, like `#ff0ab3`.
+
The accepted attributes are `bold`, `dim`, `ul`, `blink`, `reverse`,
`italic`, and `strike` (for crossed-out or "strikethrough" letters).
The position of any attributes with respect to the colors
(before, after, or in between), doesn't matter. Specific attributes may
be turned off by prefixing them with `no` or `no-` (e.g., `noreverse`,
`no-ul`, etc).
+
An empty color string produces no color effect at all. This can be used
to avoid coloring specific elements without disabling color entirely.
+
For git's pre-defined color slots, the attributes are meant to be reset
at the beginning of each item in the colored output. So setting
`color.decorate.branch` to `black` will paint that branch name in a
plain `black`, even if the previous thing on the same output line (e.g.
opening parenthesis before the list of branch names in `log --decorate`
output) is set to be painted with `bold` or some other attribute.
However, custom log formats may do more complicated and layered
coloring, and the negated forms may be useful there.
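+
For example, the `color.diff` slots (used here only as an illustration;
the values are arbitrary) can combine a foreground color, a background
color, and attributes:
+
--------
[color "diff"]
	meta = yellow bold
	old = red strike
	new = green
	whitespace = normal red   ; "normal" foreground on a red background
--------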
pathname::
A variable that takes a pathname value can be given a
string that begins with "`~/`" or "`~user/`", and the usual
tilde expansion happens to such a string: `~/`
is expanded to the value of `$HOME`, and `~user/` to the
specified user's home directory.
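+
For example, both of the following undergo tilde expansion (the
variables and file names are common pathname-valued settings used only
for illustration):
+
--------
[core]
	excludesFile = ~/.gitignore_global    ; expands to $HOME/.gitignore_global
[init]
	templateDir = ~alice/git-templates    ; expands into alice's home directory
--------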
Variables
~~~~~~~~~
Note that this list is not necessarily complete. For command-specific
variables, you will find a more detailed description in the appropriate
manual page.
Other git-related tools may and do use their own variables. When
inventing new variables for use in your own tool, make sure their
names do not conflict with those that are used by Git itself and
other popular tools, and describe them in your documentation.
advice.*::
These variables control various optional help messages designed to
aid new users. All 'advice.*' variables default to 'true', and you
can tell Git that you do not need help by setting these to 'false':
+
--
pushUpdateRejected::
Set this variable to 'false' if you want to disable
'pushNonFFCurrent',
'pushNonFFMatching', 'pushAlreadyExists',
'pushFetchFirst', and 'pushNeedsForce'
simultaneously.
pushNonFFCurrent::
Advice shown when linkgit:git-push[1] fails due to a
non-fast-forward update to the current branch.
pushNonFFMatching::
Advice shown when you ran linkgit:git-push[1] and pushed
'matching refs' explicitly (i.e. you used ':', or
specified a refspec that isn't your current branch) and
it resulted in a non-fast-forward error.
pushAlreadyExists::
	Shown when linkgit:git-push[1] rejects an update that
	does not qualify for fast-forwarding (e.g., a tag).
pushFetchFirst::
Shown when linkgit:git-push[1] rejects an update that
tries to overwrite a remote ref that points at an
object we do not have.
pushNeedsForce::
Shown when linkgit:git-push[1] rejects an update that
tries to overwrite a remote ref that points at an
object that is not a commit-ish, or make the remote
ref point at an object that is not a commit-ish.
statusHints::
Show directions on how to proceed from the current
state in the output of linkgit:git-status[1], in
the template shown when writing commit messages in
linkgit:git-commit[1], and in the help message shown
by linkgit:git-checkout[1] when switching branch.
statusUoption::
Advise to consider using the `-u` option to linkgit:git-status[1]
when the command takes more than 2 seconds to enumerate untracked
files.
commitBeforeMerge::
Advice shown when linkgit:git-merge[1] refuses to
merge to avoid overwriting local changes.
resolveConflict::
Advice shown by various commands when conflicts
prevent the operation from being performed.
implicitIdentity::
Advice on how to set your identity configuration when
your information is guessed from the system username and
domain name.
detachedHead::
	Advice shown when you used linkgit:git-checkout[1] to
	move to the detached HEAD state, to instruct how to create
	a local branch after the fact.
checkoutAmbiguousRemoteBranchName::
Advice shown when the argument to
linkgit:git-checkout[1] ambiguously resolves to a
remote tracking branch on more than one remote in
situations where an unambiguous argument would have
otherwise caused a remote-tracking branch to be
checkout & worktree: introduce checkout.defaultRemote Introduce a checkout.defaultRemote setting which can be used to designate a remote to prefer (via checkout.defaultRemote=origin) when running e.g. "git checkout master" to mean origin/master, even though there's other remotes that have the "master" branch. I want this because it's very handy to use this workflow to checkout a repository and create a topic branch, then get back to a "master" as retrieved from upstream: ( cd /tmp && rm -rf tbdiff && git clone git@github.com:trast/tbdiff.git && cd tbdiff && git branch -m topic && git checkout master ) That will output: Branch 'master' set up to track remote branch 'master' from 'origin'. Switched to a new branch 'master' But as soon as a new remote is added (e.g. just to inspect something from someone else) the DWIMery goes away: ( cd /tmp && rm -rf tbdiff && git clone git@github.com:trast/tbdiff.git && cd tbdiff && git branch -m topic && git remote add avar git@github.com:avar/tbdiff.git && git fetch avar && git checkout master ) Will output (without the advice output added earlier in this series): error: pathspec 'master' did not match any file(s) known to git. The new checkout.defaultRemote config allows me to say that whenever that ambiguity comes up I'd like to prefer "origin", and it'll still work as though the only remote I had was "origin". Also adjust the advice.checkoutAmbiguousRemoteBranchName message to mention this new config setting to the user, the full output on my git.git is now (the last paragraph is new): $ ./git --exec-path=$PWD checkout master error: pathspec 'master' did not match any file(s) known to git. hint: 'master' matched more than one remote tracking branch. hint: We found 26 remotes with a reference that matched. So we fell back hint: on trying to resolve the argument as a path, but failed there too! hint: hint: If you meant to check out a remote tracking branch on, e.g. 'origin', hint: you can do so by fully qualifying the name with the --track option: hint: hint: git checkout --track origin/<name> hint: hint: If you'd like to always have checkouts of an ambiguous <name> prefer hint: one remote, e.g. the 'origin' remote, consider setting hint: checkout.defaultRemote=origin in your config. I considered splitting this into checkout.defaultRemote and worktree.defaultRemote, but it's probably less confusing to break our own rules that anything shared between config should live in core.* than have two config settings, and I couldn't come up with a short name under core.* that made sense (core.defaultRemoteForCheckout?). See also 70c9ac2f19 ("DWIM "git checkout frotz" to "git checkout -b frotz origin/frotz"", 2009-10-18) which introduced this DWIM feature to begin with, and 4e85333197 ("worktree: make add <path> <branch> dwim", 2017-11-26) which added it to git-worktree. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-05 14:40:49 +00:00
	checked out. See the `checkout.defaultRemote`
	configuration variable for how to set a given remote
	to be used by default in some situations where this
	advice would be printed.
amWorkDir::
Advice that shows the location of the patch file when
linkgit:git-am[1] fails to apply it.
rmHints::
In case of failure in the output of linkgit:git-rm[1],
show directions on how to proceed from the current state.
addEmbeddedRepo::
Advice on what to do when you've accidentally added one
git repo inside of another.
ignoredHook::
Advice shown if a hook is ignored because the hook is not
set as executable.
waitingForEditor::
Print a message to the terminal whenever Git is waiting for
editor input from the user.
--
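+
For example, to silence one specific hint globally (an illustrative
command; any of the variables listed above can be set the same way):
+
--------
$ git config --global advice.detachedHead false
--------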
core.fileMode::
Tells Git if the executable bit of files in the working tree
is to be honored.
+
Some filesystems lose the executable bit when a file that is
marked as executable is checked out, or check out a
non-executable file with its executable bit set.
linkgit:git-clone[1] or linkgit:git-init[1] probe the filesystem
to see if it handles the executable bit correctly
and this variable is automatically set as necessary.
+
A repository, however, may be on a filesystem that handles
the filemode correctly, and this variable is set to 'true'
when created, but later may be made accessible from another
environment that loses the filemode (e.g. exporting ext4 via
CIFS mount, visiting a Cygwin created repository with
Git for Windows or Eclipse).
In such a case it may be necessary to set this variable to 'false'.
See linkgit:git-update-index[1].
+
The default is true (when core.filemode is not specified in the config file).
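+
For example, on a filesystem that mis-reports the executable bit you
might override the probed value in the affected repository (an
illustrative command):
+
--------
$ git config core.fileMode false
--------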
core.hideDotFiles::
(Windows-only) If true, mark newly-created directories and files whose
name starts with a dot as hidden. If 'dotGitOnly', only the `.git/`
directory is hidden, but no other files starting with a dot. The
default mode is 'dotGitOnly'.
core.ignoreCase::
	Internal variable which enables various workarounds so that
	Git works better on filesystems that are not case sensitive,
like APFS, HFS+, FAT, NTFS, etc. For example, if a directory listing
finds "makefile" when Git expects "Makefile", Git will assume
it is really the same file, and continue to remember it as
"Makefile".
+
The default is false, except linkgit:git-clone[1] or linkgit:git-init[1]
will probe and set core.ignoreCase true if appropriate when the repository
is created.
+
Git relies on the proper configuration of this variable for your operating
and file system. Modifying this value may result in unexpected behavior.
core.precomposeUnicode::
	This option is only used by the Mac OS implementation of Git.
	When core.precomposeUnicode=true, Git reverts the unicode decomposition
of filenames done by Mac OS. This is useful when sharing a repository
between Mac OS and Linux or Windows.
(Git for Windows 1.7.10 or higher is needed, or Git under cygwin 1.7).
	When false, file names are handled fully transparently by Git,
which is backward compatible with older versions of Git.
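+
For example, a user on Mac OS sharing a repository with Linux or Windows
machines might enable the conversion explicitly (an illustrative snippet,
not a required setting):
+
--------
[core]
	precomposeUnicode = true
--------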
core.protectHFS::
If set to true, do not allow checkout of paths that would
be considered equivalent to `.git` on an HFS+ filesystem.
Defaults to `true` on Mac OS, and `false` elsewhere.
core.protectNTFS::
If set to true, do not allow checkout of paths that would
cause problems with the NTFS filesystem, e.g. conflict with
8.3 "short" names.
Defaults to `true` on Windows, and `false` elsewhere.
core.fsmonitor::
If set, the value of this variable is used as a command which
will identify all files that may have changed since the
	requested date/time. This information is used to speed up Git by
avoiding unnecessary processing of files that have not changed.
See the "fsmonitor-watchman" section of linkgit:githooks[5].
core.trustctime::
If false, the ctime differences between the index and the
working tree are ignored; useful when the inode change time
is regularly modified by something outside Git (file system
crawlers and some backup systems).
See linkgit:git-update-index[1]. True by default.
core.splitIndex::
If true, the split-index feature of the index will be used.
See linkgit:git-update-index[1]. False by default.
core.untrackedCache::
Determines what to do about the untracked cache feature of the
	index. It will be kept if this variable is unset or set to
	`keep`. It will automatically be added if set to `true`, and
	automatically removed if set to `false`. Before setting it to
	`true`, you should check that mtime is working properly on
	your system.
See linkgit:git-update-index[1]. `keep` by default.
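+
For example, after verifying mtime behaviour with the tests described in
linkgit:git-update-index[1], you might enable the cache like this
(illustrative):
+
--------
[core]
	untrackedCache = true
--------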
core.checkStat::
Determines which stat fields to match between the index
and work tree. The user can set this to 'default' or
	'minimal'. Default (or explicitly 'default') is to check
all fields, including the sub-second part of mtime and ctime.
core.quotePath::
	Commands that output paths (e.g. 'ls-files', 'diff') will
quote "unusual" characters in the pathname by enclosing the
pathname in double-quotes and escaping those characters with
backslashes in the same way C escapes control characters (e.g.
`\t` for TAB, `\n` for LF, `\\` for backslash) or bytes with
values larger than 0x80 (e.g. octal `\302\265` for "micro" in
UTF-8). If this variable is set to false, bytes higher than
0x80 are not considered "unusual" any more. Double-quotes,
backslash and control characters are always escaped regardless
of the setting of this variable. A simple space character is
not considered "unusual". Many commands can output pathnames
completely verbatim using the `-z` option. The default value
is true.
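+
For example, to see non-ASCII pathnames verbatim in command output rather
than as escaped byte sequences (illustrative; `-z` output is unaffected):
+
--------
[core]
	quotePath = false
--------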
core.eol::
Sets the line ending type to use in the working directory for
	files that have the `text` property set when `core.autocrlf` is false.
Alternatives are 'lf', 'crlf' and 'native', which uses the platform's
native line ending. The default value is `native`. See
linkgit:gitattributes[5] for more information on end-of-line
conversion.
core.safecrlf::
If true, makes Git check if converting `CRLF` is reversible when
end-of-line conversion is active. Git will verify if a command
modifies a file in the work tree either directly or indirectly.
For example, committing a file followed by checking out the
same file should yield the original file in the work tree. If
this is not the case for the current setting of
`core.autocrlf`, Git will reject the file. The variable can
be set to "warn", in which case Git will only warn about an
irreversible conversion but continue the operation.
+
CRLF conversion bears a slight chance of corrupting data.
When it is enabled, Git will convert CRLF to LF during commit and LF to
CRLF during checkout. A file that contains a mixture of LF and
CRLF before the commit cannot be recreated by Git. For text
files this is the right thing to do: it corrects line endings
such that we have only LF line endings in the repository.
But for binary files that are accidentally classified as text the
conversion can corrupt data.
+
If you recognize such corruption early you can easily fix it by
setting the conversion type explicitly in .gitattributes. Right
after committing you still have the original file in your work
tree and this file is not yet corrupted. You can explicitly tell
Git that this file is binary and Git will handle the file
appropriately.
+
Unfortunately, the desired effect of cleaning up text files with
mixed line endings and the undesired effect of corrupting binary
files cannot be distinguished. In both cases CRLFs are removed
in an irreversible way. For text files this is the right thing
to do because CRLFs are line endings, while for binary files
converting CRLFs corrupts data.
+
Note, this safety check does not mean that a checkout will generate a
file identical to the original file for a different setting of
`core.eol` and `core.autocrlf`, but only for the current one. For
example, a text file with `LF` would be accepted with `core.eol=lf`
and could later be checked out with `core.eol=crlf`, in which case the
resulting file would contain `CRLF`, although the original file
contained `LF`. However, in both work trees the line endings would be
consistent, that is either all `LF` or all `CRLF`, but never mixed. A
file with mixed line endings would be reported by the `core.safecrlf`
mechanism.
core.autocrlf::
Setting this variable to "true" is the same as setting
	the `text` attribute to "auto" on all files and `core.eol` to "crlf".
	Set to true if you want to have `CRLF` line endings in your
	working directory and the repository has `LF` line endings.
This variable can be set to 'input',
in which case no output conversion is performed.
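+
For example, a Windows user who wants `CRLF` in the working tree, `LF` in the
repository, and a warning on irreversible conversions might use (illustrative):
+
--------
[core]
	autocrlf = true
	safecrlf = warn
--------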
core.checkRoundtripEncoding::
A comma and/or whitespace separated list of encodings that Git
	performs UTF-8 round trip checks on if they are used in the
	`working-tree-encoding` attribute (see linkgit:gitattributes[5]).
The default value is `SHIFT-JIS`.
core.symlinks::
If false, symbolic links are checked out as small plain files that
contain the link text. linkgit:git-update-index[1] and
linkgit:git-add[1] will not change the recorded type to regular
file. Useful on filesystems like FAT that do not support
symbolic links.
+
The default is true, except linkgit:git-clone[1] or linkgit:git-init[1]
will probe and set core.symlinks false if appropriate when the repository
is created.
core.gitProxy::
A "proxy command" to execute (as 'command host port') instead
	of establishing a direct connection to the remote server when
using the Git protocol for fetching. If the variable value is
in the "COMMAND for DOMAIN" format, the command is applied only
on hostnames ending with the specified domain string. This variable
may be set multiple times and is matched in the given order;
the first match wins.
+
Can be overridden by the `GIT_PROXY_COMMAND` environment variable
(which always applies universally, without the special "for"
handling).
+
The special string `none` can be used as the proxy command to
specify that no proxy be used for a given domain pattern.
This is useful for excluding servers inside a firewall from
proxy use, while defaulting to a common proxy for external domains.
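+
For example (the command names and domains below are illustrative):
+
--------
[core]
	gitProxy = "proxy-wrapper" for "example.org"
	gitProxy = none for "internal.example.com"
--------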
core.sshCommand::
If this variable is set, `git fetch` and `git push` will
use the specified command instead of `ssh` when they need to
connect to a remote system. The command is in the same form as
the `GIT_SSH_COMMAND` environment variable and is overridden
when the environment variable is set.
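+
For example, to use a dedicated key for a particular repository (the key
path is illustrative):
+
--------
[core]
	# same form as the GIT_SSH_COMMAND environment variable
	sshCommand = ssh -i ~/.ssh/deploy_key -o IdentitiesOnly=yes
--------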
core.ignoreStat::
If true, Git will avoid using lstat() calls to detect if files have
changed by setting the "assume-unchanged" bit for those tracked files
which it has updated identically in both the index and working tree.
+
When files are modified outside of Git, the user will need to stage
the modified files explicitly (e.g. see 'Examples' section in
linkgit:git-update-index[1]).
Git will not normally detect changes to those files.
+
This is useful on systems where lstat() calls are very slow, such as
CIFS/Microsoft Windows.
+
False by default.
core.preferSymlinkRefs::
Instead of the default "symref" format for HEAD
and other symbolic reference files, use symbolic links.
This is sometimes needed to work with old scripts that
expect HEAD to be a symbolic link.
core.bare::
	If true, this repository is assumed to be 'bare' and has no
working directory associated with it. If this is the case a
number of commands that require a working directory will be
disabled, such as linkgit:git-add[1] or linkgit:git-merge[1].
+
This setting is automatically guessed by linkgit:git-clone[1] or
linkgit:git-init[1] when the repository was created. By default a
repository that ends in "/.git" is assumed to be not bare (bare =
false), while all other repositories are assumed to be bare (bare
= true).
core.worktree::
Set the path to the root of the working tree.
If `GIT_COMMON_DIR` environment variable is set, core.worktree
is ignored and not used for determining the root of working tree.
This can be overridden by the `GIT_WORK_TREE` environment
variable and the `--work-tree` command-line option.
The value can be an absolute path or relative to the path to
	the .git directory, which is either specified by `--git-dir`
	or `GIT_DIR`, or automatically discovered.
	If `--git-dir` or `GIT_DIR` is specified but none of
	`--work-tree`, `GIT_WORK_TREE` and `core.worktree` is specified,
the current working directory is regarded as the top level
of your working tree.
+
Note that this variable is honored even when set in a configuration
file in a ".git" subdirectory of a directory and its value differs
from the latter directory (e.g. "/path/to/.git/config" has
core.worktree set to "/different/path"), which is most likely a
misconfiguration. Running Git commands in the "/path/to" directory will
still use "/different/path" as the root of the work tree and can cause
confusion unless you know what you are doing (e.g. you are creating a
read-only snapshot of the same index to a location different from the
repository's usual working tree).
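+
For example, a repository whose `.git` directory lives outside its working
tree might use (illustrative paths):
+
--------
[core]
	# relative to the .git directory; an absolute path also works
	worktree = ../checkout
--------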
core.logAllRefUpdates::
	Enable the reflog. Updates to a ref <ref> are logged to the file
"`$GIT_DIR/logs/<ref>`", by appending the new and old
SHA-1, the date/time and the reason of the update, but
only when the file exists. If this configuration
	variable is set to `true`, a missing "`$GIT_DIR/logs/<ref>`"
file is automatically created for branch heads (i.e. under
`refs/heads/`), remote refs (i.e. under `refs/remotes/`),
note refs (i.e. under `refs/notes/`), and the symbolic ref `HEAD`.
If it is set to `always`, then a missing reflog is automatically
created for any ref under `refs/`.
+
This information can be used to determine what commit
was the tip of a branch "2 days ago".
+
This value is true by default in a repository that has
a working directory associated with it, and false by
default in a bare repository.
core.repositoryFormatVersion::
Internal variable identifying the repository format and layout
version.
core.sharedRepository::
When 'group' (or 'true'), the repository is made shareable between
several users in a group (making sure all the files and objects are
group-writable). When 'all' (or 'world' or 'everybody'), the
	repository will be readable by all users, in addition to being
group-shareable. When 'umask' (or 'false'), Git will use permissions
reported by umask(2). When '0xxx', where '0xxx' is an octal number,
	files in the repository will have this mode value. '0xxx' will override
	the user's umask value (whereas the other options will only override
requested parts of the user's umask value). Examples: '0660' will make
the repo read/write-able for the owner and group, but inaccessible to
others (equivalent to 'group' unless umask is e.g. '0022'). '0640' is a
repository that is group-readable but not group-writable.
See linkgit:git-init[1]. False by default.
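+
For example, to make a repository read/write for its group but inaccessible
to others, matching the '0660' example above:
+
--------
[core]
	sharedRepository = 0660
--------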
core.warnAmbiguousRefs::
If true, Git will warn you if the ref name you passed it is ambiguous
and might match multiple refs in the repository. True by default.
core.compression::
An integer -1..9, indicating a default compression level.
-1 is the zlib default. 0 means no compression,
and 1..9 are various speed/size tradeoffs, 9 being slowest.
If set, this provides a default to other compression variables,
such as `core.looseCompression` and `pack.compression`.
core.looseCompression::
An integer -1..9, indicating the compression level for objects that
are not in a pack file. -1 is the zlib default. 0 means no
compression, and 1..9 are various speed/size tradeoffs, 9 being
slowest. If not set, defaults to core.compression. If that is
not set, defaults to 1 (best speed).
core.packedGitWindowSize::
Number of bytes of a pack file to map into memory in a
single mapping operation. Larger window sizes may allow
your system to process a smaller number of large pack files
more quickly. Smaller window sizes will negatively affect
performance due to increased calls to the operating system's
memory manager, but may improve performance when accessing
a large number of large pack files.
+
Default is 1 MiB if NO_MMAP was set at compile time, otherwise 32
MiB on 32 bit platforms and 1 GiB on 64 bit platforms. This should
be reasonable for all users/operating systems. You probably do
not need to adjust this value.
+
Common unit suffixes of 'k', 'm', or 'g' are supported.
core.packedGitLimit::
Maximum number of bytes to map simultaneously into memory
from pack files. If Git needs to access more than this many
bytes at once to complete an operation it will unmap existing
regions to reclaim virtual address space within the process.
+
Default is 256 MiB on 32 bit platforms and 32 TiB (effectively
unlimited) on 64 bit platforms.
This should be reasonable for all users/operating systems, except on
the largest projects. You probably do not need to adjust this value.
+
Common unit suffixes of 'k', 'm', or 'g' are supported.
core.deltaBaseCacheLimit::
Maximum number of bytes to reserve for caching base objects
	that may be referenced by multiple deltified objects. By caching
	entire decompressed base objects, Git is able
	to avoid unpacking and decompressing frequently used base
objects multiple times.
+
Default is 96 MiB on all platforms. This should be reasonable
for all users/operating systems, except on the largest projects.
You probably do not need to adjust this value.
+
Common unit suffixes of 'k', 'm', or 'g' are supported.
core.bigFileThreshold::
Files larger than this size are stored deflated, without
attempting delta compression. Storing large files without
delta compression avoids excessive memory usage, at the
slight expense of increased disk usage. Additionally files
larger than this size are always treated as binary.
+
Default is 512 MiB on all platforms. This should be reasonable
for most projects as source code and other text files can still
be delta compressed, but larger binary media files won't be.
+
Common unit suffixes of 'k', 'm', or 'g' are supported.
core.excludesFile::
Specifies the pathname to the file that contains patterns to
describe paths that are not meant to be tracked, in addition
to '.gitignore' (per-directory) and '.git/info/exclude'.
Defaults to `$XDG_CONFIG_HOME/git/ignore`.
If `$XDG_CONFIG_HOME` is either not set or empty, `$HOME/.config/git/ignore`
is used instead. See linkgit:gitignore[5].
core.askPass::
Some commands (e.g. svn and http interfaces) that interactively
ask for a password can be told to use an external program given
via the value of this variable. Can be overridden by the `GIT_ASKPASS`
environment variable. If not set, fall back to the value of the
`SSH_ASKPASS` environment variable or, failing that, a simple password
prompt. The external program shall be given a suitable prompt as
command-line argument and write the password on its STDOUT.
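+
A minimal sketch of such a program, assuming a POSIX shell and a
controlling terminal (a real helper might instead query a keyring or
pop up a dialog; note that this sketch does not suppress echo):
+
--------
#!/bin/sh
# Git passes the prompt as the first argument; show it on the
# terminal, read one line back, and print it on standard output.
printf '%s' "$1" > /dev/tty
read -r secret < /dev/tty
printf '%s\n' "$secret"
--------
+
Pointing `core.askPass` at such a script (or exporting `GIT_ASKPASS`)
makes the commands mentioned above invoke it when they need a password.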
core.attributesFile::
In addition to '.gitattributes' (per-directory) and
'.git/info/attributes', Git looks into this file for attributes
(see linkgit:gitattributes[5]). Path expansions are made the same
way as for `core.excludesFile`. Its default value is
`$XDG_CONFIG_HOME/git/attributes`. If `$XDG_CONFIG_HOME` is either not
set or empty, `$HOME/.config/git/attributes` is used instead.
core.hooksPath::
By default Git will look for your hooks in the
'$GIT_DIR/hooks' directory. Set this to a different path,
e.g. '/etc/git/hooks', and Git will try to find your hooks in
that directory, e.g. '/etc/git/hooks/pre-receive' instead of
in '$GIT_DIR/hooks/pre-receive'.
+
The path can be either absolute or relative. A relative path is
taken as relative to the directory where the hooks are run (see
the "DESCRIPTION" section of linkgit:githooks[5]).
+
This configuration variable is useful in cases where you'd like to
centrally configure your Git hooks instead of configuring them on a
per-repository basis, or as a more flexible and centralized
alternative to having an `init.templateDir` where you've changed
default hooks.
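+
A minimal sketch of such a centralized setup, reusing the example path
from above and placed in the system-wide `/etc/gitconfig`:
+
--------
[core]
	hooksPath = /etc/git/hooks
--------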
core.editor::
Commands such as `commit` and `tag` that let you edit
messages by launching an editor use the value of this
variable when it is set, and the environment variable
`GIT_EDITOR` is not set. See linkgit:git-var[1].
core.commentChar::
Commands such as `commit` and `tag` that let you edit
messages consider lines that begin with this character to be
commented, and remove them after the editor returns
(default '#').
+
If set to "auto", `git-commit` would select a character that is not
the beginning character of any line in existing commit messages.
core.filesRefLockTimeout::
The length of time, in milliseconds, to retry when trying to
lock an individual reference. Value 0 means not to retry at
all; -1 means to try indefinitely. Default is 100 (i.e.,
retry for 100ms).
core.packedRefsTimeout::
The length of time, in milliseconds, to retry when trying to
lock the `packed-refs` file. Value 0 means not to retry at
all; -1 means to try indefinitely. Default is 1000 (i.e.,
retry for 1 second).
sequence.editor::
Text editor used by `git rebase -i` for editing the rebase instruction file.
The value is meant to be interpreted by the shell when it is used.
It can be overridden by the `GIT_SEQUENCE_EDITOR` environment variable.
When not configured the default commit message editor is used instead.
core.pager::
Text viewer for use by Git commands (e.g., 'less'). The value
is meant to be interpreted by the shell. The order of preference
is the `$GIT_PAGER` environment variable, then `core.pager`
configuration, then `$PAGER`, and then the default chosen at
compile time (usually 'less').
+
When the `LESS` environment variable is unset, Git sets it to `FRX`
(if `LESS` environment variable is set, Git does not change it at
all). If you want to selectively override Git's default setting
for `LESS`, you can set `core.pager` to e.g. `less -S`. This will
be passed to the shell by Git, which will translate the final
command to `LESS=FRX less -S`. The environment does not set the
`S` option but the command line does, instructing less to truncate
long lines. Similarly, setting `core.pager` to `less -+F` will
deactivate the `F` option specified by the environment from the
command-line, deactivating the "quit if one screen" behavior of
`less`. One can specifically activate some flags for particular
commands: for example, setting `pager.blame` to `less -S` enables
line truncation only for `git blame`.
+
Likewise, when the `LV` environment variable is unset, Git sets it
to `-c`. You can override this setting by exporting `LV` with
another value or setting `core.pager` to `lv +c`.
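+
For example, the per-command variant described above would be written
in the configuration file as:
+
--------
[pager]
	blame = less -S
--------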
core.whitespace::
A comma separated list of common whitespace problems to
notice. 'git diff' will use `color.diff.whitespace` to
highlight them, and 'git apply --whitespace=error' will
consider them as errors. You can prefix `-` to disable
any of them (e.g. `-trailing-space`):
+
* `blank-at-eol` treats trailing whitespaces at the end of the line
as an error (enabled by default).
* `space-before-tab` treats a space character that appears immediately
before a tab character in the initial indent part of the line as an
error (enabled by default).
* `indent-with-non-tab` treats a line that is indented with space
characters instead of the equivalent tabs as an error (not enabled by
default).
* `tab-in-indent` treats a tab character in the initial indent part of
the line as an error (not enabled by default).
* `blank-at-eof` treats blank lines added at the end of file as an error
(enabled by default).
* `trailing-space` is a short-hand to cover both `blank-at-eol` and
`blank-at-eof`.
* `cr-at-eol` treats a carriage-return at the end of line as
part of the line terminator, i.e. with it, `trailing-space`
does not trigger if the character before such a carriage-return
is not a whitespace (not enabled by default).
* `tabwidth=<n>` tells how many character positions a tab occupies; this
is relevant for `indent-with-non-tab` and when Git fixes `tab-in-indent`
errors. The default tab width is 8. Allowed values are 1 to 63.
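+
Putting a few of these together, a configuration that enables one of
the optional checks and narrows the tab width might look like this
(the particular selection and the width of 4 are arbitrary examples):
+
--------
[core]
	whitespace = trailing-space,space-before-tab,indent-with-non-tab,tabwidth=4
--------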
core.fsyncObjectFiles::
This boolean will enable 'fsync()' when writing object files.
+
This is a total waste of time and effort on a filesystem that orders
data writes properly, but can be useful for filesystems that do not use
journalling (traditional UNIX filesystems) or that only journal metadata
and not file contents (OS X's HFS+, or Linux ext3 with "data=writeback").
core.preloadIndex::
Enable parallel index preload for operations like 'git diff'.
+
This can speed up operations like 'git diff' and 'git status' especially
on filesystems like NFS that have weak caching semantics and thus
relatively high I/O latencies. When enabled, Git will do the
index comparison to the filesystem data in parallel, allowing
overlapping I/O. Defaults to true.
core.createObject::
You can set this to 'link', in which case a hardlink followed by
a delete of the source is used to make sure that object creation
will not overwrite existing objects.
+
On some file system/operating system combinations, this is unreliable.
Set this config setting to 'rename' there; however, this will remove
the check that makes sure that existing object files will not get
overwritten.
core.notesRef::
When showing commit messages, also show notes which are stored in
the given ref. The ref must be fully qualified. If the given
ref does not exist, it is not an error but means that no
notes should be printed.
+
This setting defaults to "refs/notes/commits", and it can be overridden by
the `GIT_NOTES_REF` environment variable. See linkgit:git-notes[1].
gc.commitGraph::
If true, then gc will rewrite the commit-graph file when
linkgit:git-gc[1] is run. When using linkgit:git-gc[1]
'--auto' the commit-graph will be updated if housekeeping is
required. Default is false. See linkgit:git-commit-graph[1]
for details.
core.sparseCheckout::
Enable "sparse checkout" feature. See section "Sparse checkout" in
linkgit:git-read-tree[1] for more information.
core.abbrev::
Set the length object names are abbreviated to. If
unspecified or set to "auto", an appropriate value is
computed based on the approximate number of packed objects
in your repository, which hopefully is enough for
abbreviated object names to stay unique for some time.
The minimum length is 4.
add.ignoreErrors::
add.ignore-errors (deprecated)::
Tells 'git add' to continue adding files when some files cannot be
added due to indexing errors. Equivalent to the `--ignore-errors`
option of linkgit:git-add[1]. `add.ignore-errors` is deprecated,
as it does not follow the usual naming convention for configuration
variables.
alias.*::
Command aliases for the linkgit:git[1] command wrapper - e.g.
after defining "alias.last = cat-file commit HEAD", the invocation
"git last" is equivalent to "git cat-file commit HEAD". To avoid
confusion and troubles with script usage, aliases that
hide existing Git commands are ignored. Arguments are split by
spaces; the usual shell quoting and escaping are supported.
A quote pair or a backslash can be used to quote them.
+
If the alias expansion is prefixed with an exclamation point,
it will be treated as a shell command. For example, defining
"alias.new = !gitk --all --not ORIG_HEAD", the invocation
"git new" is equivalent to running the shell command
"gitk --all --not ORIG_HEAD". Note that shell commands will be
executed from the top-level directory of a repository, which may
not necessarily be the current directory.
`GIT_PREFIX` is set as returned by running 'git rev-parse --show-prefix'
from the original current directory. See linkgit:git-rev-parse[1].
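+
Written out in a configuration file, the two aliases used as examples
above look like this:
+
--------
[alias]
	last = cat-file commit HEAD
	new = !gitk --all --not ORIG_HEAD
--------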
am.keepcr::
If true, git-am will call git-mailsplit for patches in mbox format
with parameter `--keep-cr`. In this case git-mailsplit will
not remove `\r` from lines ending with `\r\n`. Can be overridden
by giving `--no-keep-cr` from the command line.
See linkgit:git-am[1], linkgit:git-mailsplit[1].
am.threeWay::
By default, `git am` will fail if the patch does not apply cleanly. When
set to true, this setting tells `git am` to fall back on 3-way merge if
the patch records the identity of blobs it is supposed to apply to and
we have those blobs available locally (equivalent to giving the `--3way`
option from the command line). Defaults to `false`.
See linkgit:git-am[1].
apply.ignoreWhitespace::
When set to 'change', tells 'git apply' to ignore changes in
whitespace, in the same way as the `--ignore-space-change`
option.
When set to one of: no, none, never, false, it tells 'git apply' to
respect all whitespace differences.
See linkgit:git-apply[1].
apply.whitespace::
Tells 'git apply' how to handle whitespaces, in the same way
as the `--whitespace` option. See linkgit:git-apply[1].
blame.showRoot::
Do not treat root commits as boundaries in linkgit:git-blame[1].
This option defaults to false.
blame.blankBoundary::
Show blank commit object name for boundary commits in
linkgit:git-blame[1]. This option defaults to false.
blame.showEmail::
Show the author email instead of author name in linkgit:git-blame[1].
This option defaults to false.
blame.date::
Specifies the format used to output dates in linkgit:git-blame[1].
If unset the iso format is used. For supported values,
see the discussion of the `--date` option at linkgit:git-log[1].
branch.autoSetupMerge::
Tells 'git branch' and 'git checkout' to set up new branches
so that linkgit:git-pull[1] will appropriately merge from the
starting point branch. Note that even if this option is not set,
this behavior can be chosen per-branch using the `--track`
and `--no-track` options. The valid settings are: `false` -- no
automatic setup is done; `true` -- automatic setup is done when the
starting point is a remote-tracking branch; `always` --
automatic setup is done when the starting point is either a
local branch or remote-tracking
branch. This option defaults to true.
branch.autoSetupRebase::
When a new branch is created with 'git branch' or 'git checkout'
that tracks another branch, this variable tells Git to set
up pull to rebase instead of merge (see "branch.<name>.rebase").
When `never`, rebase is never automatically set to true.
When `local`, rebase is set to true for tracked branches of
other local branches.
When `remote`, rebase is set to true for tracked branches of
remote-tracking branches.
When `always`, rebase will be set to true for all tracking
branches.
See "branch.autoSetupMerge" for details on how to set up a
branch to track another branch.
This option defaults to never.
branch.<name>.remote::
When on branch <name>, it tells 'git fetch' and 'git push'
which remote to fetch from/push to. The remote to push to
may be overridden with `remote.pushDefault` (for all branches).
The remote to push to, for the current branch, may be further
overridden by `branch.<name>.pushRemote`. If no remote is
configured, or if you are not on any branch, it defaults to
`origin` for fetching and `remote.pushDefault` for pushing.
Additionally, `.` (a period) is the current local repository
(a dot-repository), see `branch.<name>.merge`'s final note below.
branch.<name>.pushRemote::
When on branch <name>, it overrides `branch.<name>.remote` for
pushing. It also overrides `remote.pushDefault` for pushing
from branch <name>. When you pull from one place (e.g. your
upstream) and push to another place (e.g. your own publishing
repository), you would want to set `remote.pushDefault` to
specify the remote to push to for all branches, and use this
option to override it for a specific branch.
branch.<name>.merge::
Defines, together with branch.<name>.remote, the upstream branch
for the given branch. It tells 'git fetch'/'git pull'/'git rebase' which
branch to merge and can also affect 'git push' (see push.default).
When in branch <name>, it tells 'git fetch' the default
refspec to be marked for merging in FETCH_HEAD. The value is
handled like the remote part of a refspec, and must match a
ref which is fetched from the remote given by
"branch.<name>.remote".
The merge information is used by 'git pull' (which at first calls
'git fetch') to look up the default branch for merging. Without
this option, 'git pull' defaults to merging the first refspec fetched.
Specify multiple values to get an octopus merge.
If you wish to set up 'git pull' so that it merges into <name> from
another branch in the local repository, you can point
branch.<name>.merge to the desired branch, and use the relative path
setting `.` (a period) for branch.<name>.remote.
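+
As an illustration (the branch names here are hypothetical), a branch
that pulls from `origin`'s `master` and one that merges from the local
`master` via the dot-repository form could be configured as:
+
--------
[branch "topic"]
	remote = origin
	merge = refs/heads/master

[branch "local-topic"]
	remote = .
	merge = refs/heads/master
--------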
branch.<name>.mergeOptions::
Sets default options for merging into branch <name>. The syntax and
supported options are the same as those of linkgit:git-merge[1], but
option values containing whitespace characters are currently not
supported.
branch.<name>.rebase::
When true, rebase the branch <name> on top of the fetched branch,
instead of merging the default branch from the default remote when
"git pull" is run. See "pull.rebase" for doing this in a non
branch-specific manner.
+
When `merges`, pass the `--rebase-merges` option to 'git rebase'
so that the local merge commits are included in the rebase (see
linkgit:git-rebase[1] for details).
+
When `preserve`, also pass `--preserve-merges` along to 'git rebase'
so that locally committed merge commits will not be flattened
by running 'git pull'.
+
When the value is `interactive`, the rebase is run in interactive mode.
+
*NOTE*: this is a possibly dangerous operation; do *not* use
it unless you understand the implications (see linkgit:git-rebase[1]
for details).
branch.<name>.description::
Branch description, can be edited with
`git branch --edit-description`. Branch description is
automatically added in the format-patch cover letter or
request-pull summary.
browser.<tool>.cmd::
Specify the command to invoke the specified browser. The
specified command is evaluated in shell with the URLs passed
as arguments. (See linkgit:git-web{litdd}browse[1].)
browser.<tool>.path::
Override the path for the given tool that may be used to
browse HTML help (see `-w` option in linkgit:git-help[1]) or a
working repository in gitweb (see linkgit:git-instaweb[1]).
checkout & worktree: introduce checkout.defaultRemote Introduce a checkout.defaultRemote setting which can be used to designate a remote to prefer (via checkout.defaultRemote=origin) when running e.g. "git checkout master" to mean origin/master, even though there's other remotes that have the "master" branch. I want this because it's very handy to use this workflow to checkout a repository and create a topic branch, then get back to a "master" as retrieved from upstream: ( cd /tmp && rm -rf tbdiff && git clone git@github.com:trast/tbdiff.git && cd tbdiff && git branch -m topic && git checkout master ) That will output: Branch 'master' set up to track remote branch 'master' from 'origin'. Switched to a new branch 'master' But as soon as a new remote is added (e.g. just to inspect something from someone else) the DWIMery goes away: ( cd /tmp && rm -rf tbdiff && git clone git@github.com:trast/tbdiff.git && cd tbdiff && git branch -m topic && git remote add avar git@github.com:avar/tbdiff.git && git fetch avar && git checkout master ) Will output (without the advice output added earlier in this series): error: pathspec 'master' did not match any file(s) known to git. The new checkout.defaultRemote config allows me to say that whenever that ambiguity comes up I'd like to prefer "origin", and it'll still work as though the only remote I had was "origin". Also adjust the advice.checkoutAmbiguousRemoteBranchName message to mention this new config setting to the user, the full output on my git.git is now (the last paragraph is new): $ ./git --exec-path=$PWD checkout master error: pathspec 'master' did not match any file(s) known to git. hint: 'master' matched more than one remote tracking branch. hint: We found 26 remotes with a reference that matched. So we fell back hint: on trying to resolve the argument as a path, but failed there too! hint: hint: If you meant to check out a remote tracking branch on, e.g. 'origin', hint: you can do so by fully qualifying the name with the --track option: hint: hint: git checkout --track origin/<name> hint: hint: If you'd like to always have checkouts of an ambiguous <name> prefer hint: one remote, e.g. the 'origin' remote, consider setting hint: checkout.defaultRemote=origin in your config. I considered splitting this into checkout.defaultRemote and worktree.defaultRemote, but it's probably less confusing to break our own rules that anything shared between config should live in core.* than have two config settings, and I couldn't come up with a short name under core.* that made sense (core.defaultRemoteForCheckout?). See also 70c9ac2f19 ("DWIM "git checkout frotz" to "git checkout -b frotz origin/frotz"", 2009-10-18) which introduced this DWIM feature to begin with, and 4e85333197 ("worktree: make add <path> <branch> dwim", 2017-11-26) which added it to git-worktree. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-05 14:40:49 +00:00
checkout.defaultRemote::
When you run 'git checkout <something>' and only have one
remote, it may implicitly fall back on checking out and
tracking e.g. 'origin/<something>'. This stops working as soon
as you have more than one remote with a '<something>'
reference. This setting allows for setting the name of a
preferred remote that should always win when it comes to
disambiguation. The typical use-case is to set this to
`origin`.
+
Currently this is used by linkgit:git-checkout[1] when 'git checkout
<something>' will check out the '<something>' branch on another remote,
and by linkgit:git-worktree[1] when 'git worktree add' refers to a
remote branch. This setting might be used for other checkout-like
commands or functionality in the future.
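+
The typical use-case mentioned above is simply:
+
--------
[checkout]
	defaultRemote = origin
--------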
clean.requireForce::
A boolean to make git-clean do nothing unless given -f,
-i or -n. Defaults to true.
color.advice::
A boolean to enable/disable color in hints (e.g. when a push
failed, see `advice.*` for a list). May be set to `always`,
`false` (or `never`) or `auto` (or `true`), in which case colors
are used only when the error output goes to a terminal. If
unset, then the value of `color.ui` is used (`auto` by default).
color.advice.hint::
Use customized color for hints.
color.branch::
A boolean to enable/disable color in the output of
linkgit:git-branch[1]. May be set to `always`,
`false` (or `never`) or `auto` (or `true`), in which case colors are used
only when the output is to a terminal. If unset, then the
value of `color.ui` is used (`auto` by default).
color.branch.<slot>::
Use customized color for branch coloration. `<slot>` is one of
`current` (the current branch), `local` (a local branch),
`remote` (a remote-tracking branch in refs/remotes/),
`upstream` (upstream tracking branch), `plain` (other
refs).
color.diff::
Whether to use ANSI escape sequences to add color to patches.
If this is set to `always`, linkgit:git-diff[1],
linkgit:git-log[1], and linkgit:git-show[1] will use color
for all patches. If it is set to `true` or `auto`, those
commands will only use color when output is to the terminal.
If unset, then the value of `color.ui` is used (`auto` by
default).
+
This does not affect linkgit:git-format-patch[1] or the
'git-diff-{asterisk}' plumbing commands. Can be overridden on the
command line with the `--color[=<when>]` option.
diff.colorMoved::
If set to either a valid `<mode>` or a true value, moved lines
in a diff are colored differently, for details of valid modes
see '--color-moved' in linkgit:git-diff[1]. If simply set to
true the default color mode will be used. When set to false,
moved lines are not colored.
diff.colorMovedWS::
When moved lines are colored using e.g. the `diff.colorMoved` setting,
this option controls the `<mode>` for how spaces are treated;
for details of valid modes see '--color-moved-ws' in linkgit:git-diff[1].
color.diff.<slot>::
Use customized color for diff colorization. `<slot>` specifies
which part of the patch to use the specified color, and is one
of `context` (context text - `plain` is a historical synonym),
`meta` (metainformation), `frag`
(hunk header), `func` (function in hunk header), `old` (removed lines),
`new` (added lines), `commit` (commit headers), `whitespace`
(highlighting whitespace errors), `oldMoved` (deleted lines),
`newMoved` (added lines), `oldMovedDimmed`, `oldMovedAlternative`,
`oldMovedAlternativeDimmed`, `newMovedDimmed`, `newMovedAlternative`
and `newMovedAlternativeDimmed` (See the '<mode>'
setting of '--color-moved' in linkgit:git-diff[1] for details).
color.decorate.<slot>::
Use customized color for 'git log --decorate' output. `<slot>` is one
of `branch`, `remoteBranch`, `tag`, `stash` or `HEAD` for local
branches, remote-tracking branches, tags, stash and HEAD, respectively
and `grafted` for grafted commits.
color.grep::
When set to `always`, always highlight matches. When `false` (or
`never`), never. When set to `true` or `auto`, use color only
when the output is written to the terminal. If unset, then the
value of `color.ui` is used (`auto` by default).
color.grep.<slot>::
Use customized color for grep colorization. `<slot>` specifies which
part of the line to use the specified color, and is one of
+
--
`context`;;
non-matching text in context lines (when using `-A`, `-B`, or `-C`)
`filename`;;
filename prefix (when not using `-h`)
`function`;;
function name lines (when using `-p`)
`lineNumber`;;
line number prefix (when using `-n`)
`column`;;
column number prefix (when using `--column`)
`match`;;
matching text (same as setting `matchContext` and `matchSelected`)
`matchContext`;;
matching text in context lines
`matchSelected`;;
matching text in selected lines
`selected`;;
non-matching text in selected lines
`separator`;;
separators between fields on a line (`:`, `-`, and `=`)
and between hunks (`--`)
--
color.interactive::
When set to `always`, always use colors for interactive prompts
and displays (such as those used by "git-add --interactive" and
"git-clean --interactive"). When false (or `never`), never.
When set to `true` or `auto`, use colors only when the output is
to the terminal. If unset, then the value of `color.ui` is
used (`auto` by default).
color.interactive.<slot>::
Use customized color for 'git add --interactive' and 'git clean
--interactive' output. `<slot>` may be `prompt`, `header`, `help`
or `error`, for four distinct types of normal output from
interactive commands.
color.pager::
A boolean to enable/disable colored output when the pager is in
use (default is true).
color.push::
A boolean to enable/disable color in push errors. May be set to
`always`, `false` (or `never`) or `auto` (or `true`), in which
case colors are used only when the error output goes to a terminal.
If unset, then the value of `color.ui` is used (`auto` by default).
color.push.error::
Use customized color for push errors.
color.showBranch::
A boolean to enable/disable color in the output of
linkgit:git-show-branch[1]. May be set to `always`,
`false` (or `never`) or `auto` (or `true`), in which case colors are used
only when the output is to a terminal. If unset, then the
value of `color.ui` is used (`auto` by default).
color.status::
A boolean to enable/disable color in the output of
linkgit:git-status[1]. May be set to `always`,
`false` (or `never`) or `auto` (or `true`), in which case colors are used
only when the output is to a terminal. If unset, then the
value of `color.ui` is used (`auto` by default).
color.status.<slot>::
Use customized color for status colorization. `<slot>` is
one of `header` (the header text of the status message),
`added` or `updated` (files which are added but not committed),
`changed` (files which are changed but not added in the index),
`untracked` (files which are not tracked by Git),
`branch` (the current branch),
`nobranch` (the color the 'no branch' warning is shown in, defaulting
to red),
`localBranch` or `remoteBranch` (the local and remote branch names,
respectively, when branch and tracking information is displayed in the
status short-format), or
`unmerged` (files which have unmerged changes).
color.blame.repeatedLines::
Use the customized color for the part of git-blame output that
is repeated meta information per line (such as commit id,
author name, date and timezone). Defaults to cyan.
color.blame.highlightRecent::
This can be used to color the metadata of a blame line depending
on age of the line.
+
This setting should be set to a comma-separated list of color and date settings,
starting and ending with a color; the dates should be set from oldest to newest.
The metadata will be colored with the given color if the line was introduced
before the given timestamp, overwriting older timestamped colors.
+
Instead of an absolute timestamp, relative timestamps work as well, e.g.
`2.weeks.ago` is valid to address anything older than 2 weeks.
+
It defaults to 'blue,12 month ago,white,1 month ago,red', which colors
everything older than one year blue, keeps recent changes between one
month and one year old white, and colors lines introduced within the
last month red.
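+
Spelled out as configuration, that documented default corresponds to:
+
--------
[color "blame"]
	highlightRecent = blue,12 month ago,white,1 month ago,red
--------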
blame.coloring::
This determines the coloring scheme to be applied to blame
output. It can be 'repeatedLines', 'highlightRecent',
or 'none' which is the default.
color.transport::
A boolean to enable/disable color when pushes are rejected. May be
set to `always`, `false` (or `never`) or `auto` (or `true`), in which
case colors are used only when the error output goes to a terminal.
If unset, then the value of `color.ui` is used (`auto` by default).
color.transport.rejected::
Use customized color when a push was rejected.
color.ui::
This variable determines the default value for variables such
as `color.diff` and `color.grep` that control the use of color
per command family. Its scope will expand as more commands learn
configuration to set a default for the `--color` option. Set it
to `false` or `never` if you prefer Git commands not to use
color unless enabled explicitly with some other configuration
or the `--color` option. Set it to `always` if you want all
output not intended for machine consumption to use color, to
`true` or `auto` (this is the default since Git 1.8.4) if you
want such output to use color when written to the terminal.
column.ui::
Specify whether supported commands should output in columns.
This variable consists of a list of tokens separated by spaces
or commas:
+
These options control when the feature should be enabled
(defaults to 'never'):
+
--
`always`;;
always show in columns
`never`;;
never show in columns
`auto`;;
show in columns if the output is to the terminal
--
+
These options control layout (defaults to 'column'). Setting any
of these implies 'always' if none of 'always', 'never', or 'auto' are
specified.
+
--
`column`;;
fill columns before rows
`row`;;
fill rows before columns
`plain`;;
show in one column
--
+
Finally, these options can be combined with a layout option (defaults
to 'nodense'):
+
--
`dense`;;
make unequal size columns to utilize more space
`nodense`;;
make equal size columns
--
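+
For example, one illustrative combination that enables columns when
writing to a terminal and packs them densely would be:
+
--------
[column]
	ui = auto,dense
--------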
column.branch::
Specify whether to output branch listing in `git branch` in columns.
See `column.ui` for details.
column.clean::
Specify the layout when listing items in `git clean -i`, which always
shows files and directories in columns. See `column.ui` for details.
column.status::
Specify whether to output untracked files in `git status` in columns.
See `column.ui` for details.
column.tag::
Specify whether to output tag listing in `git tag` in columns.
See `column.ui` for details.
commit.cleanup::
This setting overrides the default of the `--cleanup` option in
`git commit`. See linkgit:git-commit[1] for details. Changing the
default can be useful when you always want to keep lines that begin
with comment character `#` in your log message, in which case you
would do `git config commit.cleanup whitespace` (note that you will
have to remove the help lines that begin with `#` in the commit log
template yourself, if you do this).
commit.gpgSign::
A boolean to specify whether all commits should be GPG signed.
Use of this option when doing operations such as rebase can
result in a large number of commits being signed. It may be
convenient to use an agent to avoid typing your GPG passphrase
several times.
commit.status::
A boolean to enable/disable inclusion of status information in the
commit message template when using an editor to prepare the commit
message. Defaults to true.
commit.template::
Specify the pathname of a file to use as the template for
new commit messages.
commit.verbose::
A boolean or int to specify the level of verbosity with `git commit`.
See linkgit:git-commit[1].
credential.helper::
Specify an external helper to be called when a username or
password credential is needed; the helper may consult external
storage to avoid prompting the user for the credentials. Note
that multiple helpers may be defined. See linkgit:gitcredentials[7]
for details.
credential.useHttpPath::
When acquiring credentials, consider the "path" component of an http
or https URL to be important. Defaults to false. See
linkgit:gitcredentials[7] for more information.
credential.username::
If no username is set for a network authentication, use this username
by default. See credential.<context>.* below, and
linkgit:gitcredentials[7].
credential.<url>.*::
Any of the credential.* options above can be applied selectively to
some credentials. For example "credential.https://example.com.username"
would set the default username only for https connections to
example.com. See linkgit:gitcredentials[7] for details on how URLs are
matched.
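+
A sketch combining the options above (the host name is taken from the
example, the user name is a placeholder, and `cache` is one of the
stock credential helpers):
+
--------
[credential]
	helper = cache
[credential "https://example.com"]
	username = myusername
--------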
credentialCache.ignoreSIGHUP::
Tell git-credential-cache--daemon to ignore SIGHUP, instead of quitting.
completion.commands::
This is only used by git-completion.bash to add or remove
commands from the list of completed commands. Normally only
porcelain commands and a few select others are completed. You
can add more commands, separated by space, in this
variable. Prefixing the command with '-' will remove it from
the existing list.
include::diff-config.txt[]
difftool.<tool>.path::
Override the path for the given tool. This is useful in case
your tool is not in the PATH.
difftool.<tool>.cmd::
Specify the command to invoke the specified diff tool.
The specified command is evaluated in shell with the following
variables available: 'LOCAL' is set to the name of the temporary
file containing the contents of the diff pre-image and 'REMOTE'
is set to the name of the temporary file containing the contents
of the diff post-image.
difftool.prompt::
Prompt before each invocation of the diff tool.
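+
Tied together, a hypothetical tool entry might look like the sketch
below; the tool name `mymeld` and the `meld` invocation are only
examples, and the backslash-escaped quotes survive Git's own config
parsing so that the shell sees properly quoted arguments:
+
--------
[difftool "mymeld"]
	cmd = meld \"$LOCAL\" \"$REMOTE\"
[difftool]
	prompt = false
--------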
fastimport.unpackLimit::
If the number of objects imported by linkgit:git-fast-import[1]
is below this limit, then the objects will be unpacked into
loose object files. However if the number of imported objects
equals or exceeds this limit then the pack will be stored as a
pack. Storing the pack from a fast-import can make the import
operation complete faster, especially on slow filesystems. If
not set, the value of `transfer.unpackLimit` is used instead.
fetch.recurseSubmodules::
This option can be either set to a boolean value or to 'on-demand'.
Setting it to a boolean changes the behavior of fetch and pull to
unconditionally recurse into submodules when set to true or to not
recurse at all when set to false. When set to 'on-demand' (the default
value), fetch and pull will only recurse into a populated submodule
when its superproject retrieves a commit that updates the submodule's
reference.
fetch.fsckObjects::
If it is set to true, git-fetch-pack will check all fetched
objects. It will abort in the case of a malformed object or a
broken link. The result of an abort is only dangling objects.
Defaults to false. If not set, the value of `transfer.fsckObjects`
is used instead.
fetch.unpackLimit::
If the number of objects fetched over the Git native
transfer is below this
limit, then the objects will be unpacked into loose object
files. However if the number of received objects equals or
exceeds this limit then the received pack will be stored as
a pack, after adding any missing delta bases. Storing the
pack from a fetch can make the fetch operation complete faster,
especially on slow filesystems. If not set, the value of
`transfer.unpackLimit` is used instead.
fetch.prune::
If true, fetch will automatically behave as if the `--prune`
option was given on the command line. See also `remote.<name>.prune`
and the PRUNING section of linkgit:git-fetch[1].
fetch.pruneTags::
If true, fetch will automatically behave as if the
`refs/tags/*:refs/tags/*` refspec was provided when pruning,
if not set already. This allows for setting both this option
and `fetch.prune` to maintain a 1:1 mapping to upstream
refs. See also `remote.<name>.pruneTags` and the PRUNING
section of linkgit:git-fetch[1].
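+
For example, to prune stale remote-tracking branches and tags on
every fetch:
+
--------
[fetch]
	prune = true
	pruneTags = true
--------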
fetch.output::
Control how ref update status is printed. Valid values are
`full` and `compact`. Default value is `full`. See section
OUTPUT in linkgit:git-fetch[1] for detail.
fetch.negotiationAlgorithm::
Control how information about the commits in the local repository is
sent when negotiating the contents of the packfile to be sent by the
server. Set to "skipping" to use an algorithm that skips commits in an
effort to converge faster, but may result in a larger-than-necessary
packfile; any other value instructs Git to use the default algorithm
that never skips commits (unless the server has acknowledged it or one
of its descendants).
format.attach::
Enable multipart/mixed attachments as the default for
'format-patch'. The value can also be a double quoted string
which will enable attachments as the default and set the
value as the boundary. See the --attach option in
linkgit:git-format-patch[1].
format.from::
Provides the default value for the `--from` option to format-patch.
Accepts a boolean value, or a name and email address. If false,
format-patch defaults to `--no-from`, using commit authors directly in
the "From:" field of patch mails. If true, format-patch defaults to
`--from`, using your committer identity in the "From:" field of patch
mails and including a "From:" field in the body of the patch mail if
different. If set to a non-boolean value, format-patch uses that
value instead of your committer identity. Defaults to false.
format.numbered::
A boolean which can enable or disable sequence numbers in patch
subjects. It defaults to "auto" which enables it only if there
is more than one patch. It can be enabled or disabled for all
messages by setting it to "true" or "false". See --numbered
option in linkgit:git-format-patch[1].
format.headers::
Additional email headers to include in a patch to be submitted
by mail. See linkgit:git-format-patch[1].
format.to::
format.cc::
Additional recipients to include in a patch to be submitted
by mail. See the --to and --cc options in
linkgit:git-format-patch[1].
format.subjectPrefix::
The default for format-patch is to output files with the '[PATCH]'
subject prefix. Use this variable to change that prefix.
format.signature::
The default for format-patch is to output a signature containing
the Git version number. Use this variable to change that default.
Set this variable to the empty string ("") to suppress
signature generation.
format.signatureFile::
Works just like format.signature except the contents of the
file specified by this variable will be used as the signature.
format.suffix::
The default for format-patch is to output files with the suffix
`.patch`. Use this variable to change that suffix (make sure to
include the dot if you want it).
format.pretty::
The default pretty format for the log/show/whatchanged commands.
See linkgit:git-log[1], linkgit:git-show[1],
linkgit:git-whatchanged[1].
format.thread::
The default threading style for 'git format-patch'. Can be
a boolean value, or `shallow` or `deep`. `shallow` threading
makes every mail a reply to the head of the series,
where the head is chosen from the cover letter, the
`--in-reply-to`, and the first patch mail, in this order.
`deep` threading makes every mail a reply to the previous one.
A true boolean value is the same as `shallow`, and a false
value disables threading.
format.signOff::
A boolean value which lets you enable the `-s/--signoff` option of
format-patch by default. *Note:* Adding the Signed-off-by: line to a
patch should be a conscious act and means that you certify you have
the rights to submit this work under the same open source license.
Please see the 'SubmittingPatches' document for further discussion.
format.coverLetter::
A boolean that controls whether to generate a cover-letter when
format-patch is invoked, but in addition can be set to "auto", to
generate a cover-letter only when there's more than one patch.
format.outputDirectory::
Set a custom directory to store the resulting files instead of the
current working directory.
format.useAutoBase::
A boolean value which lets you enable the `--base=auto` option of
format-patch by default.
filter.<driver>.clean::
The command which is used to convert the content of a worktree
file to a blob upon checkin. See linkgit:gitattributes[5] for
details.
filter.<driver>.smudge::
The command which is used to convert the content of a blob
object to a worktree file upon checkout. See
linkgit:gitattributes[5] for details.
fsck.<msg-id>::
Allows overriding the message type (error, warn or ignore) of a
specific message ID such as `missingEmail`.
+
For convenience, fsck prefixes the error/warning with the message ID,
e.g. "missingEmail: invalid author/committer line - missing email" means
that setting `fsck.missingEmail = ignore` will hide that issue.
+
This feature is intended to support working with legacy repositories
which cannot be repaired without disruptive changes.
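+
The `missingEmail` example from the text above would be written as:
+
--------
[fsck]
	missingEmail = ignore
--------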
fsck.skipList::
The path to a sorted list of object names (i.e. one SHA-1 per
line) that are known to be broken in a non-fatal way and should
be ignored. This feature is useful when an established project
should be accepted despite early commits containing errors that
can be safely ignored such as invalid committer email addresses.
Note: corrupt objects cannot be skipped with this setting.
gc.aggressiveDepth::
The depth parameter used in the delta compression
algorithm used by 'git gc --aggressive'. This defaults
gc.aggressiveDepth::
The depth parameter used in the delta compression
algorithm used by 'git gc --aggressive'. This defaults
to 50.
gc.aggressiveWindow::
The window size parameter used in the delta compression
algorithm used by 'git gc --aggressive'. This defaults
to 250.
gc.auto::
When there are approximately more than this many loose
objects in the repository, `git gc --auto` will pack them.
Some Porcelain commands use this command to perform a
light-weight garbage collection from time to time. The
default value is 6700. Setting this to 0 disables it.
gc.autoPackLimit::
When there are more than this many packs that are not
marked with a `*.keep` file in the repository, `git gc
--auto` consolidates them into one larger pack. The
default value is 50. Setting this to 0 disables it.
gc.autoDetach::
Make `git gc --auto` return immediately and run in the background
if the system supports it. Default is true.
gc.bigPackThreshold::
If non-zero, all packs larger than this limit are kept when
`git gc` is run. This is very similar to `--keep-base-pack`
except that all packs that meet the threshold are kept, not
just the base pack. Defaults to zero. Common unit suffixes of
'k', 'm', or 'g' are supported.
+
Note that if the number of kept packs is more than gc.autoPackLimit,
this configuration variable is ignored; all packs except the base pack
will be repacked. After this, the number of packs should go below
gc.autoPackLimit, and gc.bigPackThreshold should be respected again.
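+
For example, to keep any pack larger than a (purely illustrative) two
gigabytes out of repacking:
+
--------
[gc]
	bigPackThreshold = 2g
--------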
gc.logExpiry::
If the file gc.log exists, then `git gc --auto` won't run
unless that file is more than 'gc.logExpiry' old. Default is
"1.day". See `gc.pruneExpire` for more ways to specify its
value.
gc.packRefs::
Running `git pack-refs` in a repository renders it
unclonable by Git versions prior to 1.5.1.2 over dumb
transports such as HTTP. This variable determines whether
'git gc' runs `git pack-refs`. This can be set to `notbare`
to enable it within all non-bare repos or it can be set to a
boolean value. The default is `true`.
gc.pruneExpire::
When 'git gc' is run, it will call 'prune --expire 2.weeks.ago'.
Override the grace period with this config variable. The value
"now" may be used to disable this grace period and always prune
unreachable objects immediately, or "never" may be used to
suppress pruning. This feature helps prevent corruption when
'git gc' runs concurrently with another process writing to the
repository; see the "NOTES" section of linkgit:git-gc[1].
gc: call "prune --expire 2.weeks.ago" by default The only reason we did not call "prune" in git-gc was that it is an inherently dangerous operation: if there is a commit going on, you will prune loose objects that were just created, and are, in fact, needed by the commit object just about to be created. Since it is dangerous, we told users so. That led to many users not even daring to run it when it was actually safe. Besides, they are users, and should not have to remember such details as when to call git-gc with --prune, or to call git-prune directly. Of course, the consequence was that "git gc --auto" gets triggered much more often than we would like, since unreferenced loose objects (such as left-overs from a rebase or a reset --hard) were never pruned. Alas, git-prune recently learnt the option --expire <minimum-age>, which makes it a much safer operation. This allows us to call prune from git-gc, with a grace period of 2 weeks for the unreferenced loose objects (this value was determined in a discussion on the git list as a safe one). If you want to override this grace period, just set the config variable gc.pruneExpire to a different value; an example would be [gc] pruneExpire = 6.months.ago or even "never", if you feel really paranoid. Note that this new behaviour makes "--prune" be a no-op. While adding a test to t5304-prune.sh (since it really tests the implicit call to "prune"), also the original test for "prune --expire" was moved there from t1410-reflog.sh, where it did not belong. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
2008-03-12 20:55:47 +00:00
gc.worktreePruneExpire::
When 'git gc' is run, it calls
'git worktree prune --expire 3.months.ago'.
This config variable can be used to set a different grace
period. The value "now" may be used to disable the grace
period and prune `$GIT_DIR/worktrees` immediately, or "never"
may be used to suppress pruning.
gc.reflogExpire::
gc.<pattern>.reflogExpire::
'git reflog expire' removes reflog entries older than
this time; defaults to 90 days. The value "now" expires all
entries immediately, and "never" suppresses expiration
altogether. With "<pattern>" (e.g.
"refs/stash") in the middle the setting applies only to
the refs that match the <pattern>.
gc.reflogExpireUnreachable::
gc.<pattern>.reflogExpireUnreachable::
'git reflog expire' removes reflog entries that are older than
this time and are not reachable from the current tip;
defaults to 30 days. The value "now" expires all entries
immediately, and "never" suppresses expiration altogether.
With "<pattern>" (e.g. "refs/stash")
in the middle, the setting applies only to the refs that
match the <pattern>.
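+
For example, entries in the "refs/stash" reflog could be kept forever
while other refs keep the defaults (the pattern and values here are
only illustrative):
+
--------
[gc "refs/stash"]
	reflogExpire = never
	reflogExpireUnreachable = never
--------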
gc.rerereResolved::
Records of conflicted merges you resolved earlier are
kept for this many days when 'git rerere gc' is run.
You can also use more human-readable "1.month.ago", etc.
The default is 60 days. See linkgit:git-rerere[1].
gc.rerereUnresolved::
Records of conflicted merges you have not resolved are
kept for this many days when 'git rerere gc' is run.
You can also use more human-readable "1.month.ago", etc.
The default is 15 days. See linkgit:git-rerere[1].
gitcvs.commitMsgAnnotation::
Append this string to each commit message. Set to an empty string
to disable this feature. Defaults to "via git-CVS emulator".
gitcvs.enabled::
Whether the CVS server interface is enabled for this repository.
See linkgit:git-cvsserver[1].
gitcvs.logFile::
Path to a log file where the CVS server interface logs
various messages. See linkgit:git-cvsserver[1].
gitcvs.usecrlfattr::
If true, the server will look up the end-of-line conversion
attributes for files to determine the `-k` modes to use. If
the attributes force Git to treat a file as text,
the `-k` mode will be left blank so CVS clients will
treat it as text. If they suppress text conversion, the file
will be set with '-kb' mode, which suppresses any newline munging
the client might otherwise do. If the attributes do not allow
the file type to be determined, then `gitcvs.allBinary` is
used. See linkgit:gitattributes[5].
gitcvs.allBinary::
This is used if `gitcvs.usecrlfattr` does not resolve
the correct '-kb' mode to use. If true, all
unresolved files are sent to the client in
mode '-kb'. This causes the client to treat them
as binary files, which suppresses any newline munging it
otherwise might do. Alternatively, if it is set to "guess",
then the contents of the file are examined to decide if
it is binary, similar to `core.autocrlf`.
gitcvs.dbName::
Database used by git-cvsserver to cache revision information
derived from the Git repository. The exact meaning depends on the
used database driver, for SQLite (which is the default driver) this
is a filename. Supports variable substitution (see
linkgit:git-cvsserver[1] for details). May not contain semicolons (`;`).
Default: '%Ggitcvs.%m.sqlite'
gitcvs.dbDriver::
The Perl DBI driver to use. You can specify any available driver
for this here, but it might not work. git-cvsserver is tested
with 'DBD::SQLite', reported to work with 'DBD::Pg', and
reported *not* to work with 'DBD::mysql'. Experimental feature.
May not contain double colons (`:`). Default: 'SQLite'.
See linkgit:git-cvsserver[1].
gitcvs.dbUser, gitcvs.dbPass::
Database user and password. Only useful if setting `gitcvs.dbDriver`,
since SQLite has no concept of database users and/or passwords.
'gitcvs.dbUser' supports variable substitution (see
linkgit:git-cvsserver[1] for details).
gitcvs.dbTableNamePrefix::
Database table name prefix. Prepended to the names of any
database tables used, allowing a single database to be used
for several repositories. Supports variable substitution (see
linkgit:git-cvsserver[1] for details). Any non-alphabetic
characters will be replaced with underscores.
All gitcvs variables except for `gitcvs.usecrlfattr` and
`gitcvs.allBinary` can also be specified as
'gitcvs.<access_method>.<varname>' (where 'access_method'
is one of "ext" and "pserver") to make them apply only for the given
access method.
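For example, a log file could be configured for the "pserver" access
method only (the path is just a placeholder):

--------
[gitcvs "pserver"]
	logFile = /var/log/git-cvsserver.log
--------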
gitweb.category::
gitweb.description::
gitweb.owner::
gitweb.url::
See linkgit:gitweb[1] for description.
gitweb.avatar::
gitweb.blame::
gitweb.grep::
gitweb.highlight::
gitweb.patches::
gitweb.pickaxe::
gitweb.remote_heads::
gitweb.showSizes::
gitweb.snapshot::
See linkgit:gitweb.conf[5] for description.
grep.lineNumber::
If set to true, enable `-n` option by default.
grep.column::
If set to true, enable the `--column` option by default.
grep.patternType::
Set the default matching behavior. Using a value of 'basic', 'extended',
'fixed', or 'perl' will enable the `--basic-regexp`, `--extended-regexp`,
`--fixed-strings`, or `--perl-regexp` option accordingly, while the
value 'default' will return to the default matching behavior.
grep.extendedRegexp::
If set to true, enable `--extended-regexp` option by default. This
option is ignored when the `grep.patternType` option is set to a value
other than 'default'.
grep.threads::
Number of grep worker threads to use.
See `grep.threads` in linkgit:git-grep[1] for more information.
grep.fallbackToNoIndex::
If set to true, fall back to git grep --no-index if git grep
is executed outside of a git repository. Defaults to false.
gpg.program::
Use this custom program instead of "`gpg`" found on `$PATH` when
making or verifying a PGP signature. The program must support the
same command-line interface as GPG: to verify a detached
signature, "`gpg --verify $file - <$signature`" is run, and the
program is expected to signal a good signature by exiting with
code 0; to generate an ASCII-armored detached signature, the
standard input of "`gpg -bsau $key`" is fed with the contents to be
signed, and the program is expected to send the result to its
standard output.
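+
For example, to point Git at a specific GnuPG binary (the path is only
an illustration):
+
--------
[gpg]
	program = /usr/local/bin/gpg2
--------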
gui.commitMsgWidth::
Defines how wide the commit message window is in the
linkgit:git-gui[1]. "75" is the default.
gui.diffContext::
Specifies how many context lines should be used in calls to diff
made by the linkgit:git-gui[1]. The default is "5".
gui.displayUntracked::
Determines if linkgit:git-gui[1] shows untracked files
in the file list. The default is "true".
gui.encoding::
Specifies the default encoding to use for displaying
file contents in linkgit:git-gui[1] and linkgit:gitk[1].
It can be overridden by setting the 'encoding' attribute
for relevant files (see linkgit:gitattributes[5]).
If this option is not set, the tools default to the
locale encoding.
gui.matchTrackingBranch::
Determines if new branches created with linkgit:git-gui[1] should
default to tracking remote branches with matching names or
not. Default: "false".
gui.newBranchTemplate::
Is used as the suggested name when creating new branches using
linkgit:git-gui[1].
gui.pruneDuringFetch::
"true" if linkgit:git-gui[1] should prune remote-tracking branches when
performing a fetch. The default value is "false".
gui.trustmtime::
Determines if linkgit:git-gui[1] should trust the file modification
timestamp or not. By default the timestamps are not trusted.
gui.spellingDictionary::
Specifies the dictionary used for spell checking commit messages in
the linkgit:git-gui[1]. When set to "none" spell checking is turned
off.
gui.fastCopyBlame::
If true, 'git gui blame' uses `-C` instead of `-C -C` for original
location detection. It makes blame significantly faster on huge
repositories at the expense of less thorough copy detection.
gui.copyBlameThreshold::
Specifies the threshold to use in 'git gui blame' original location
detection, measured in alphanumeric characters. See the
linkgit:git-blame[1] manual for more information on copy detection.
gui.blamehistoryctx::
Specifies the radius of history context in days to show in
linkgit:gitk[1] for the selected commit, when the `Show History
Context` menu item is invoked from 'git gui blame'. If this
variable is set to zero, the whole history is shown.
guitool.<name>.cmd::
Specifies the shell command line to execute when the corresponding item
of the linkgit:git-gui[1] `Tools` menu is invoked. This option is
mandatory for every tool. The command is executed from the root of
the working directory, and in the environment it receives the name of
the tool as `GIT_GUITOOL`, the name of the currently selected file as
'FILENAME', and the name of the current branch as 'CUR_BRANCH' (if
the head is detached, 'CUR_BRANCH' is empty).
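+
A minimal sketch (the tool name and command are arbitrary):
+
--------
[guitool "Edit File"]
	cmd = gvim $FILENAME
	needsFile = true
--------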
guitool.<name>.needsFile::
Run the tool only if a diff is selected in the GUI. It guarantees
that 'FILENAME' is not empty.
guitool.<name>.noConsole::
Run the command silently, without creating a window to display its
output.
guitool.<name>.noRescan::
Don't rescan the working directory for changes after the tool
finishes execution.
guitool.<name>.confirm::
Show a confirmation dialog before actually running the tool.
guitool.<name>.argPrompt::
Request a string argument from the user, and pass it to the tool
through the `ARGS` environment variable. Since requesting an
argument implies confirmation, the 'confirm' option has no effect
if this is enabled. If the option is set to 'true', 'yes', or '1',
the dialog uses a built-in generic prompt; otherwise the exact
value of the variable is used.
guitool.<name>.revPrompt::
Request a single valid revision from the user, and set the
`REVISION` environment variable. In other aspects this option
is similar to 'argPrompt', and can be used together with it.
guitool.<name>.revUnmerged::
Show only unmerged branches in the 'revPrompt' subdialog.
This is useful for tools similar to merge or rebase, but not
for things like checkout or reset.
guitool.<name>.title::
Specifies the title to use for the prompt dialog. The default
is the tool name.
guitool.<name>.prompt::
Specifies the general prompt string to display at the top of
the dialog, before subsections for 'argPrompt' and 'revPrompt'.
The default value includes the actual command.
help.browser::
Specify the browser that will be used to display help in the
'web' format. See linkgit:git-help[1].
help.format::
Override the default help format used by linkgit:git-help[1].
Values 'man', 'info', 'web' and 'html' are supported. 'man' is
the default. 'web' and 'html' are the same.
help.autoCorrect::
Automatically correct and execute mistyped commands after
waiting for the given number of deciseconds (0.1 sec). If more
than one command can be deduced from the entered text, nothing
will be executed. If the value of this option is negative,
the corrected command will be executed immediately. If the
value is 0, the command will just be shown but not executed.
This is the default.
help.htmlPath::
Specify the path where the HTML documentation resides. File system paths
and URLs are supported. HTML pages will be prefixed with this path when
help is displayed in the 'web' format. This defaults to the documentation
path of your Git installation.
http.proxy::
Override the HTTP proxy, normally configured using the 'http_proxy',
'https_proxy', and 'all_proxy' environment variables (see `curl(1)`). In
addition to the syntax understood by curl, it is possible to specify a
proxy string with a user name but no password, in which case git will
attempt to acquire one in the same way it does for other credentials. See
linkgit:gitcredentials[7] for more information. The syntax thus is
'[protocol://][user[:password]@]proxyhost[:port]'. This can be overridden
on a per-remote basis; see `remote.<name>.proxy`.
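+
For example (the host and user names are placeholders):
+
--------
[http]
	proxy = http://proxyuser@proxy.example.com:8080
--------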
http.proxyAuthMethod::
Set the method with which to authenticate against the HTTP proxy. This
only takes effect if the configured proxy string contains a user name part
(i.e. is of the form 'user@host' or 'user@host:port'). This can be
overridden on a per-remote basis; see `remote.<name>.proxyAuthMethod`.
Both can be overridden by the `GIT_HTTP_PROXY_AUTHMETHOD` environment
variable. Possible values are:
+
--
* `anyauth` - Automatically pick a suitable authentication method. It is
assumed that the proxy answers an unauthenticated request with a 407
status code and one or more Proxy-authenticate headers with supported
authentication methods. This is the default.
* `basic` - HTTP Basic authentication
* `digest` - HTTP Digest authentication; this prevents the password from being
transmitted to the proxy in clear text
* `negotiate` - GSS-Negotiate authentication (compare the --negotiate option
of `curl(1)`)
* `ntlm` - NTLM authentication (compare the --ntlm option of `curl(1)`)
--
http.emptyAuth::
Attempt authentication without seeking a username or password. This
can be used to attempt GSS-Negotiate authentication without specifying
a username in the URL, as libcurl normally requires a username for
authentication.
http.delegation::
Control GSSAPI credential delegation. The delegation is disabled
by default in libcurl since version 7.21.7. Set this parameter to tell
the server what it is allowed to delegate when it comes to user
credentials. Used with GSS/kerberos. Possible values are:
+
--
* `none` - Don't allow any delegation.
* `policy` - Delegates if and only if the OK-AS-DELEGATE flag is set in the
Kerberos service ticket, which is a matter of realm policy.
* `always` - Unconditionally allow the server to delegate.
--
http.extraHeader::
Pass an additional HTTP header when communicating with a server. If
more than one such entry exists, all of them are added as extra
headers. To allow overriding the settings inherited from the system
config, an empty value will reset the extra headers to the empty list.
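+
For example, the following first clears any inherited extra headers and
then adds one (the header itself is only a placeholder):
+
--------
[http]
	extraHeader =
	extraHeader = X-Custom-Header: some-value
--------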
http.cookieFile::
The pathname of a file containing previously stored cookie lines,
which should be used
in the Git http session, if they match the server. The file format
of the file to read cookies from should be plain HTTP headers or
the Netscape/Mozilla cookie file format (see `curl(1)`).
NOTE that the file specified with http.cookieFile is used only as
input unless http.saveCookies is set.
http.saveCookies::
If set, store cookies received during requests to the file specified by
http.cookieFile. Has no effect if http.cookieFile is unset.
http.sslVersion::
The SSL version to use when negotiating an SSL connection, if you
want to force the default. The available and default version
depend on whether libcurl was built against NSS or OpenSSL and the
particular configuration of the crypto library in use. Internally
this sets the 'CURLOPT_SSL_VERSION' option; see the libcurl
documentation for more details on the format of this option and
for the supported ssl versions. The possible values of
this option are:
- sslv2
- sslv3
- tlsv1
- tlsv1.0
- tlsv1.1
- tlsv1.2
- tlsv1.3
+
Can be overridden by the `GIT_SSL_VERSION` environment variable.
To force git to use libcurl's default ssl version and ignore any
explicit http.sslversion option, set `GIT_SSL_VERSION` to the
empty string.
http.sslCipherList::
A list of SSL ciphers to use when negotiating an SSL connection.
The available ciphers depend on whether libcurl was built against
NSS or OpenSSL and the particular configuration of the crypto
library in use. Internally this sets the 'CURLOPT_SSL_CIPHER_LIST'
option; see the libcurl documentation for more details on the format
of this list.
+
Can be overridden by the `GIT_SSL_CIPHER_LIST` environment variable.
To force git to use libcurl's default cipher list and ignore any
explicit http.sslCipherList option, set `GIT_SSL_CIPHER_LIST` to the
empty string.
http.sslVerify::
Whether to verify the SSL certificate when fetching or pushing
over HTTPS. Defaults to true. Can be overridden by the
`GIT_SSL_NO_VERIFY` environment variable.
http.sslCert::
File containing the SSL certificate when fetching or pushing
over HTTPS. Can be overridden by the `GIT_SSL_CERT` environment
variable.
http.sslKey::
File containing the SSL private key when fetching or pushing
over HTTPS. Can be overridden by the `GIT_SSL_KEY` environment
variable.
http.sslCertPasswordProtected::
Enable Git's password prompt for the SSL certificate. Otherwise
OpenSSL will prompt the user, possibly many times, if the
certificate or private key is encrypted. Can be overridden by the
`GIT_SSL_CERT_PASSWORD_PROTECTED` environment variable.
http.sslCAInfo::
File containing the certificates to verify the peer with when
fetching or pushing over HTTPS. Can be overridden by the
`GIT_SSL_CAINFO` environment variable.
http.sslCAPath::
Path containing files with the CA certificates to verify the peer
with when fetching or pushing over HTTPS. Can be overridden
by the `GIT_SSL_CAPATH` environment variable.
http.pinnedpubkey::
Public key of the https service. It may either be the filename of
a PEM or DER encoded public key file or a string starting with
'sha256//' followed by the base64 encoded sha256 hash of the
public key. See also libcurl 'CURLOPT_PINNEDPUBLICKEY'. git will
exit with an error if this option is set but not supported by
cURL.
http.sslTry::
Attempt to use AUTH SSL/TLS and encrypted data transfers
when connecting via regular FTP protocol. This might be needed
if the FTP server requires it for security reasons or you wish
to connect securely whenever remote FTP server supports it.
Default is false since it might trigger certificate verification
errors on misconfigured servers.
http.maxRequests::
How many HTTP requests to launch in parallel. Can be overridden
by the `GIT_HTTP_MAX_REQUESTS` environment variable. Default is 5.
http.minSessions::
The number of curl sessions (counted across slots) to be kept across
requests. They will not be ended with curl_easy_cleanup() until
http_cleanup() is invoked. If USE_CURL_MULTI is not defined, this
value will be capped at 1. Defaults to 1.
http.postBuffer::
Maximum size in bytes of the buffer used by smart HTTP
transports when POSTing data to the remote system.
For requests larger than this buffer size, HTTP/1.1 and
Transfer-Encoding: chunked is used to avoid creating a
massive pack file locally. Default is 1 MiB, which is
sufficient for most requests.
http.lowSpeedLimit, http.lowSpeedTime::
If the HTTP transfer speed is less than 'http.lowSpeedLimit'
for longer than 'http.lowSpeedTime' seconds, the transfer is aborted.
Can be overridden by the `GIT_HTTP_LOW_SPEED_LIMIT` and
`GIT_HTTP_LOW_SPEED_TIME` environment variables.
http.noEPSV::
A boolean which disables the use of the EPSV ftp command by curl.
This can be helpful with some "poor" ftp servers which don't
support EPSV mode. Can be overridden by the `GIT_CURL_FTP_NO_EPSV`
environment variable. Default is false (curl will use EPSV).
http.userAgent::
The HTTP USER_AGENT string presented to an HTTP server. The default
value represents the version of the client Git such as git/1.7.1.
This option allows you to override this value to a more common value
such as Mozilla/4.0. This may be necessary, for instance, if
connecting through a firewall that restricts HTTP connections to a set
of common USER_AGENT strings (but not including those like git/1.7.1).
Can be overridden by the `GIT_HTTP_USER_AGENT` environment variable.
http.followRedirects::
Whether git should follow HTTP redirects. If set to `true`, git
will transparently follow any redirect issued by a server it
encounters. If set to `false`, git will treat all redirects as
errors. If set to `initial`, git will follow redirects only for
the initial request to a remote, but not for subsequent
follow-up HTTP requests. Since git uses the redirected URL as
the base for the follow-up requests, this is generally
sufficient. The default is `initial`.
http.<url>.*::
Any of the http.* options above can be applied selectively to some URLs.
For a config key to match a URL, each element of the config key is
compared to that of the URL, in the following order:
+
--
. Scheme (e.g., `https` in `https://example.com/`). This field
must match exactly between the config key and the URL.
. Host/domain name (e.g., `example.com` in `https://example.com/`).
This field must match between the config key and the URL. It is
possible to specify a `*` as part of the host name to match all subdomains
at this level. `https://*.example.com/` for example would match
`https://foo.example.com/`, but not `https://foo.bar.example.com/`.
. Port number (e.g., `8080` in `http://example.com:8080/`).
This field must match exactly between the config key and the URL.
Omitted port numbers are automatically converted to the correct
default for the scheme before matching.
. Path (e.g., `repo.git` in `https://example.com/repo.git`). The
path field of the config key must match the path field of the URL
either exactly or as a prefix of slash-delimited path elements. This means
a config key with path `foo/` matches URL path `foo/bar`. A prefix can only
match on a slash (`/`) boundary. Longer matches take precedence (so a config
key with path `foo/bar` is a better match to URL path `foo/bar` than a config
key with just path `foo/`).
. User name (e.g., `user` in `https://user@example.com/repo.git`). If
the config key has a user name it must match the user name in the
URL exactly. If the config key does not have a user name, that
config key will match a URL with any user name (including none),
but at a lower precedence than a config key with a user name.
--
+
The list above is ordered by decreasing precedence; a URL that matches
a config key's path is preferred to one that matches its user name. For example,
if the URL is `https://user@example.com/foo/bar` a config key match of
`https://example.com/foo` will be preferred over a config key match of
`https://user@example.com`.
+
All URLs are normalized before attempting any matching (the password part,
if embedded in the URL, is always ignored for matching purposes) so that
equivalent URLs that are simply spelled differently will match properly.
Environment variable settings always override any matches. The URLs that are
matched against are those given directly to Git commands. This means any URLs
visited as a result of a redirection do not participate in matching.
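+
As an illustration (the URL is a placeholder), certificate verification
could be disabled for a single host while staying enabled everywhere
else:
+
--------
[http]
	sslVerify = true
[http "https://weak.example.com"]
	sslVerify = false
--------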
ssh.variant::
By default, Git determines the command line arguments to use
based on the basename of the configured SSH command (configured
using the environment variable `GIT_SSH` or `GIT_SSH_COMMAND` or
the config setting `core.sshCommand`). If the basename is
unrecognized, Git will attempt to detect support of OpenSSH
options by first invoking the configured SSH command with the
`-G` (print configuration) option and will subsequently use
OpenSSH options (if that is successful) or no options besides
the host and remote command (if it fails).
+
The config variable `ssh.variant` can be set to override this detection.
Valid values are `ssh` (to use OpenSSH options), `plink`, `putty`,
`tortoiseplink`, `simple` (no options except the host and remote command).
The default auto-detection can be explicitly requested using the value
`auto`. Any other value is treated as `ssh`. This setting can also be
overridden via the environment variable `GIT_SSH_VARIANT`.
+
The current command-line parameters used for each variant are as
follows:
+
--
* `ssh` - [-p port] [-4] [-6] [-o option] [username@]host command
* `simple` - [username@]host command
* `plink` or `putty` - [-P port] [-4] [-6] [username@]host command
* `tortoiseplink` - [-P port] [-4] [-6] -batch [username@]host command
--
+
Except for the `simple` variant, command-line parameters are likely to
change as git gains new features.
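+
For example, a wrapper script that only understands the host and remote
command could be declared explicitly (this is just an illustration):
+
--------
[ssh]
	variant = simple
--------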
i18n.commitEncoding::
Character encoding the commit messages are stored in; Git itself
does not care per se, but this information is necessary e.g. when
importing commits from emails or in the gitk graphical history
browser (and possibly at other places in the future or in other
porcelains). See e.g. linkgit:git-mailinfo[1]. Defaults to 'utf-8'.
i18n.logOutputEncoding::
Character encoding the commit messages are converted to when
running 'git log' and friends.
imap::
The configuration variables in the 'imap' section are described
in linkgit:git-imap-send[1].
index.version::
Specify the version with which new index files should be
initialized. This does not affect existing repositories.
init.templateDir::
Specify the directory from which templates will be copied.
(See the "TEMPLATE DIRECTORY" section of linkgit:git-init[1].)
instaweb.browser::
Specify the program that will be used to browse your working
repository in gitweb. See linkgit:git-instaweb[1].
instaweb.httpd::
The HTTP daemon command-line to start gitweb on your working
repository. See linkgit:git-instaweb[1].
instaweb.local::
If true the web server started by linkgit:git-instaweb[1] will
be bound to the local IP (127.0.0.1).
instaweb.modulePath::
The default module path for linkgit:git-instaweb[1] to use
instead of /usr/lib/apache2/modules. Only used if httpd
is Apache.
instaweb.port::
The port number to bind the gitweb httpd to. See
linkgit:git-instaweb[1].
interactive.singleKey::
In interactive commands, allow the user to provide one-letter
input with a single key (i.e., without hitting enter).
Currently this is used by the `--patch` mode of
linkgit:git-add[1], linkgit:git-checkout[1], linkgit:git-commit[1],
linkgit:git-reset[1], and linkgit:git-stash[1]. Note that this
setting is silently ignored if portable keystroke input
is not available; it requires the Perl module Term::ReadKey.
interactive.diffFilter::
When an interactive command (such as `git add --patch`) shows
a colorized diff, git will pipe the diff through the shell
command defined by this configuration variable. The command may
mark up the diff further for human consumption, provided that it
retains a one-to-one correspondence with the lines in the
original diff. Defaults to disabled (no filtering).
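+
For example, assuming the `diff-highlight` script from git's `contrib/`
directory is on `$PATH`:
+
--------
[interactive]
	diffFilter = diff-highlight
--------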
log.abbrevCommit::
If true, makes linkgit:git-log[1], linkgit:git-show[1], and
linkgit:git-whatchanged[1] assume `--abbrev-commit`. You may
override this option with `--no-abbrev-commit`.
log.date::
Set the default date-time mode for the 'log' command.
Setting a value for log.date is similar to using 'git log''s
`--date` option. See linkgit:git-log[1] for details.
log.decorate::
Print out the ref names of any commits that are shown by the log
command. If 'short' is specified, the ref name prefixes 'refs/heads/',
'refs/tags/' and 'refs/remotes/' will not be printed. If 'full' is
specified, the full ref name (including prefix) will be printed.
If 'auto' is specified, then if the output is going to a terminal,
the ref names are shown as if 'short' were given, otherwise no ref
names are shown. This is the same as the `--decorate` option
of `git log`.
log.follow::
If `true`, `git log` will act as if the `--follow` option was used when
a single <path> is given. This has the same limitations as `--follow`,
i.e. it cannot be used to follow multiple files and does not work well
on non-linear history.
log.graphColors::
A list of colors, separated by commas, that can be used to draw
history lines in `git log --graph`.
log.showRoot::
If true, the initial commit will be shown as a big creation event.
This is equivalent to a diff against an empty tree.
Tools like linkgit:git-log[1] or linkgit:git-whatchanged[1], which
normally hide the root commit, will now show it. True by default.
log.showSignature::
If true, makes linkgit:git-log[1], linkgit:git-show[1], and
linkgit:git-whatchanged[1] assume `--show-signature`.
log.mailmap::
If true, makes linkgit:git-log[1], linkgit:git-show[1], and
linkgit:git-whatchanged[1] assume `--use-mailmap`.
mailinfo.scissors::
If true, makes linkgit:git-mailinfo[1] (and therefore
linkgit:git-am[1]) act by default as if the --scissors option
was provided on the command-line. When active, this feature
removes everything from the message body before a scissors
line (i.e. consisting mainly of ">8", "8<" and "-").
mailmap.file::
The location of an augmenting mailmap file. The default
mailmap, located in the root of the repository, is loaded
first, then the mailmap file pointed to by this variable.
The location of the mailmap file may be in a repository
subdirectory, or somewhere outside of the repository itself.
See linkgit:git-shortlog[1] and linkgit:git-blame[1].
mailmap.blob::
Like `mailmap.file`, but consider the value as a reference to a
blob in the repository. If both `mailmap.file` and
`mailmap.blob` are given, both are parsed, with entries from
`mailmap.file` taking precedence. In a bare repository, this
defaults to `HEAD:.mailmap`. In a non-bare repository, it
defaults to empty.
man.viewer::
Specify the programs that may be used to display help in the
'man' format. See linkgit:git-help[1].
man.<tool>.cmd::
Specify the command to invoke the specified man viewer. The
specified command is evaluated in shell with the man page
passed as argument. (See linkgit:git-help[1].)
man.<tool>.path::
Override the path for the given tool that may be used to
display help in the 'man' format. See linkgit:git-help[1].
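+
For example, a custom viewer could be selected and defined like this
(the viewer name and command are placeholders):
+
--------
[man]
	viewer = myviewer
[man "myviewer"]
	cmd = /usr/local/bin/man-in-browser
--------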
include::merge-config.txt[]
mergetool.<tool>.path::
Override the path for the given tool. This is useful in case
your tool is not in the PATH.
mergetool.<tool>.cmd::
Specify the command to invoke the specified merge tool. The
specified command is evaluated in shell with the following
variables available: 'BASE' is the name of a temporary file
containing the common base of the files to be merged, if available;
'LOCAL' is the name of a temporary file containing the contents of
the file on the current branch; 'REMOTE' is the name of a temporary
file containing the contents of the file from the branch being
merged; 'MERGED' contains the name of the file to which the merge
tool should write the results of a successful merge.
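+
A sketch of a custom tool definition (the tool name, command, and
argument order are placeholders for whatever the tool expects):
+
--------
[mergetool "mytool"]
	cmd = mymerge \"$LOCAL\" \"$BASE\" \"$REMOTE\" \"$MERGED\"
--------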
mergetool.<tool>.trustExitCode::
For a custom merge command, specify whether the exit code of
the merge command can be used to determine whether the merge was
successful. If this is not set to true then the merge target file's
timestamp is checked, and the merge is assumed to have been successful
if the file has been updated; otherwise the user is prompted to
indicate the success of the merge.
mergetool.meld.hasOutput::
Older versions of `meld` do not support the `--output` option.
Git will attempt to detect whether `meld` supports `--output`
by inspecting the output of `meld --help`. Configuring
`mergetool.meld.hasOutput` will make Git skip these checks and
use the configured value instead. Setting `mergetool.meld.hasOutput`
to `true` tells Git to unconditionally use the `--output` option,
and `false` avoids using `--output`.
mergetool.keepBackup::
After performing a merge, the original file with conflict markers
can be saved as a file with a `.orig` extension. If this variable
is set to `false` then this file is not preserved. Defaults to
`true` (i.e. keep the backup files).
mergetool.keepTemporaries::
When invoking a custom merge tool, Git uses a set of temporary
files to pass to the tool. If the tool returns an error and this
variable is set to `true`, then these temporary files will be
preserved, otherwise they will be removed after the tool has
exited. Defaults to `false`.
mergetool.writeToTemp::
Git writes temporary 'BASE', 'LOCAL', and 'REMOTE' versions of
conflicting files in the worktree by default. Git will attempt
to use a temporary directory for these files when set to `true`.
Defaults to `false`.
mergetool.prompt::
Prompt before each invocation of the merge resolution program.
notes.mergeStrategy::
Which merge strategy to choose by default when resolving notes
conflicts. Must be one of `manual`, `ours`, `theirs`, `union`, or
`cat_sort_uniq`. Defaults to `manual`. See "NOTES MERGE STRATEGIES"
section of linkgit:git-notes[1] for more information on each strategy.
notes.<name>.mergeStrategy::
Which merge strategy to choose when doing a notes merge into
refs/notes/<name>. This overrides the more general
"notes.mergeStrategy". See the "NOTES MERGE STRATEGIES" section in
linkgit:git-notes[1] for more information on the available strategies.
notes.displayRef::
The (fully qualified) refname from which to show notes when
showing commit messages. The value of this variable can be set
to a glob, in which case notes from all matching refs will be
shown. You may also specify this configuration variable
several times. A warning will be issued for refs that do not
exist, but a glob that does not match any refs is silently
ignored.
+
This setting can be overridden with the `GIT_NOTES_DISPLAY_REF`
environment variable, which must be a colon separated list of refs or
globs.
+
The effective value of "core.notesRef" (possibly overridden by
GIT_NOTES_REF) is also implicitly added to the list of refs to be
displayed.
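+
For example, to additionally display notes from a hypothetical
refs/notes/review ref:
+
--------
[notes]
	displayRef = refs/notes/review
--------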
notes.rewrite.<command>::
When rewriting commits with <command> (currently `amend` or
`rebase`) and this variable is set to `true`, Git
automatically copies your notes from the original to the
rewritten commit. Defaults to `true`, but see
"notes.rewriteRef" below.
notes.rewriteMode::
When copying notes during a rewrite (see the
"notes.rewrite.<command>" option), determines what to do if
the target commit already has a note. Must be one of
`overwrite`, `concatenate`, `cat_sort_uniq`, or `ignore`.
Defaults to `concatenate`.
+
This setting can be overridden with the `GIT_NOTES_REWRITE_MODE`
environment variable.
notes.rewriteRef::
When copying notes during a rewrite, specifies the (fully
qualified) ref whose notes should be copied. The ref may be a
glob, in which case notes in all matching refs will be copied.
You may also specify this configuration several times.
+
Does not have a default value; you must configure this variable to
enable note rewriting. Set it to `refs/notes/commits` to enable
rewriting for the default commit notes.
+
This setting can be overridden with the `GIT_NOTES_REWRITE_REF`
environment variable, which must be a colon separated list of refs or
globs.
pack.window::
The size of the window used by linkgit:git-pack-objects[1] when no
window size is given on the command line. Defaults to 10.
pack.depth::
The maximum delta depth used by linkgit:git-pack-objects[1] when no
maximum depth is given on the command line. Defaults to 50.
Maximum value is 4095.
pack.windowMemory::
The maximum size of memory that is consumed by each thread
in linkgit:git-pack-objects[1] for pack window memory when
no limit is given on the command line. The value can be
suffixed with "k", "m", or "g". When left unconfigured (or
set explicitly to 0), there will be no limit.
pack.compression::
An integer -1..9, indicating the compression level for objects
in a pack file. -1 is the zlib default. 0 means no
compression, and 1..9 are various speed/size tradeoffs, 9 being
	slowest. If not set, defaults to `core.compression`. If that is
not set, defaults to -1, the zlib default, which is "a default
compromise between speed and compression (currently equivalent
to level 6)."
+
Note that changing the compression level will not automatically recompress
all existing objects. You can force recompression by passing the `-F` option
to linkgit:git-repack[1].
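+
As an illustration (level 9 is only an example), the compression level
can be raised and the existing objects recompressed like so:
+
--------
# pick a slower but tighter compression level
git config pack.compression 9
# force recompression of existing objects
git repack -a -d -F
--------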
pack.island::
An extended regular expression configuring a set of delta
islands. See "DELTA ISLANDS" in linkgit:git-pack-objects[1]
for details.
pack.islandCore::
	Specify an island name whose objects will be packed first.
	This creates a kind of pseudo-pack at the front of one pack, so
	that the objects from the specified island are hopefully faster
	to copy into any pack that should be served to a user requesting
	these objects. In practice this means that the specified island
	should likely correspond to whatever is most commonly cloned
	from the repo. See also "DELTA ISLANDS"
in linkgit:git-pack-objects[1].
pack.deltaCacheSize::
The maximum memory in bytes used for caching deltas in
linkgit:git-pack-objects[1] before writing them out to a pack.
This cache is used to speed up the writing object phase by not
having to recompute the final delta result once the best match
for all objects is found. Repacking large repositories on machines
which are tight with memory might be badly impacted by this though,
especially if this cache pushes the system into swapping.
A value of 0 means no limit. The smallest size of 1 byte may be
used to virtually disable this cache. Defaults to 256 MiB.
pack.deltaCacheLimit::
	The maximum size of a delta that is cached in
linkgit:git-pack-objects[1]. This cache is used to speed up the
writing object phase by not having to recompute the final delta
result once the best match for all objects is found.
Defaults to 1000. Maximum value is 65535.
pack.threads::
Specifies the number of threads to spawn when searching for best
delta matches. This requires that linkgit:git-pack-objects[1]
	be compiled with pthreads; otherwise this option is ignored with a
warning. This is meant to reduce packing time on multiprocessor
machines. The required amount of memory for the delta search window
is however multiplied by the number of threads.
	Specifying 0 will cause Git to auto-detect the number of CPUs
and set the number of threads accordingly.
pack.indexVersion::
Specify the default pack index version. Valid values are 1 for
legacy pack index used by Git versions prior to 1.5.2, and 2 for
the new pack index with capabilities for packs larger than 4 GB
as well as proper protection against the repacking of corrupted
packs. Version 2 is the default. Note that version 2 is enforced
and this config option ignored whenever the corresponding pack is
larger than 2 GB.
+
If you have an old Git that does not understand the version 2 `*.idx` file,
cloning or fetching over a non-native protocol (e.g. "http")
that will copy both the `*.pack` file and the corresponding `*.idx` file from the
other side may give you a repository that cannot be accessed with your
older version of Git. If the `*.pack` file is smaller than 2 GB, however,
you can use linkgit:git-index-pack[1] on the `*.pack` file to regenerate
the `*.idx` file.
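+
For example, assuming a hypothetical pack named `pack-1234abcd.pack`, the
index could be regenerated with:
+
--------
# substitute the actual pack file found in .git/objects/pack/
git index-pack .git/objects/pack/pack-1234abcd.pack
--------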
pack.packSizeLimit::
The maximum size of a pack. This setting only affects
packing to a file when repacking, i.e. the git:// protocol
is unaffected. It can be overridden by the `--max-pack-size`
option of linkgit:git-repack[1]. Reaching this limit results
	in the creation of multiple packfiles, which in turn prevents
bitmaps from being created.
The minimum size allowed is limited to 1 MiB.
The default is unlimited.
Common unit suffixes of 'k', 'm', or 'g' are
supported.
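+
A short sketch (the 2g limit is only an example) of capping the size of
packs produced by a subsequent repack:
+
--------
git config pack.packSizeLimit 2g   # example limit
git repack -a -d
--------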
pack.useBitmaps::
When true, git will use pack bitmaps (if available) when packing
to stdout (e.g., during the server side of a fetch). Defaults to
true. You should not generally need to turn this off unless
you are debugging pack bitmaps.
pack.writeBitmaps (deprecated)::
This is a deprecated synonym for `repack.writeBitmaps`.
pack.writeBitmapHashCache::
When true, git will include a "hash cache" section in the bitmap
index (if one is written). This cache can be used to feed git's
delta heuristics, potentially leading to better deltas between
bitmapped and non-bitmapped objects (e.g., when serving a fetch
between an older, bitmapped pack and objects that have been
pushed since the last gc). The downside is that it consumes 4
bytes per object of disk space, and that JGit's bitmap
implementation does not understand it, causing it to complain if
Git and JGit are used on the same repository. Defaults to false.
pager.<cmd>::
If the value is boolean, turns on or off pagination of the
output of a particular Git subcommand when writing to a tty.
Otherwise, turns on pagination for the subcommand using the
pager specified by the value of `pager.<cmd>`. If `--paginate`
or `--no-pager` is specified on the command line, it takes
precedence over this option. To disable pagination for all
commands, set `core.pager` or `GIT_PAGER` to `cat`.
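+
For example, one might disable paging for `git status` while giving
`git log` its own pager (the pager shown is only an example):
+
--------
git config pager.status false
git config pager.log 'less -S'   # example custom pager
--------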
pretty.<name>::
Alias for a --pretty= format string, as specified in
linkgit:git-log[1]. Any aliases defined here can be used just
as the built-in pretty formats could. For example,
running `git config pretty.changelog "format:* %H %s"`
would cause the invocation `git log --pretty=changelog`
to be equivalent to running `git log "--pretty=format:* %H %s"`.
Note that an alias with the same name as a built-in format
will be silently ignored.
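+
The same `changelog` alias, expressed directly in a configuration file,
would look like:
+
--------
[pretty]
	changelog = format:* %H %s
--------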
protocol.allow::
	If set, provide a user-defined default policy for all protocols which
don't explicitly have a policy (`protocol.<name>.allow`). By default,
if unset, known-safe protocols (http, https, git, ssh, file) have a
default policy of `always`, known-dangerous protocols (ext) have a
default policy of `never`, and all other protocols have a default
policy of `user`. Supported policies:
+
--
* `always` - protocol is always able to be used.
* `never` - protocol is never able to be used.
* `user` - protocol is only able to be used when `GIT_PROTOCOL_FROM_USER` is
either unset or has a value of 1. This policy should be used when you want a
protocol to be directly usable by the user but don't want it used by commands which
execute clone/fetch/push commands without user input, e.g. recursive
submodule initialization.
--
protocol.<name>.allow::
Set a policy to be used by protocol `<name>` with clone/fetch/push
commands. See `protocol.allow` above for the available policies.
+
The protocol names currently used by git are:
+
--
- `file`: any local file-based path (including `file://` URLs,
or local paths)
- `git`: the anonymous git protocol over a direct TCP
connection (or proxy, if configured)
- `ssh`: git over ssh (including `host:path` syntax,
`ssh://`, etc).
- `http`: git over http, both "smart http" and "dumb http".
Note that this does _not_ include `https`; if you want to configure
both, you must do so individually.
- any external helpers are named by their protocol (e.g., use
`hg` to allow the `git-remote-hg` helper)
--
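+
For example, a configuration that forbids the `ext` helper entirely while
allowing the `hg` helper (see above) only when directly requested by the
user might look like:
+
--------
[protocol "ext"]
	allow = never
[protocol "hg"]
	allow = user
--------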
protocol.version::
Experimental. If set, clients will attempt to communicate with a
server using the specified protocol version. If unset, no
attempt will be made by the client to communicate using a
	particular protocol version; this results in protocol version 0
being used.
Supported versions:
+
--
* `0` - the original wire protocol.
* `1` - the original wire protocol with the addition of a version string
in the initial response from the server.
--
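+
For example, version 1 can be enabled persistently, or tried out for a
single command (assuming a remote named `origin`):
+
--------
git config --global protocol.version 1
git -c protocol.version=1 ls-remote origin   # assumes an 'origin' remote
--------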
pull.ff::
By default, Git does not create an extra merge commit when merging
a commit that is a descendant of the current commit. Instead, the
tip of the current branch is fast-forwarded. When set to `false`,
this variable tells Git to create an extra merge commit in such
a case (equivalent to giving the `--no-ff` option from the command
line). When set to `only`, only such fast-forward merges are
allowed (equivalent to giving the `--ff-only` option from the
command line). This setting overrides `merge.ff` when pulling.
pull.rebase::
When true, rebase branches on top of the fetched branch, instead
of merging the default branch from the default remote when "git
pull" is run. See "branch.<name>.rebase" for setting this on a
per-branch basis.
+
When `merges`, pass the `--rebase-merges` option to 'git rebase'
so that the local merge commits are included in the rebase (see
linkgit:git-rebase[1] for details).
+
When `preserve`, also pass `--preserve-merges` along to 'git rebase'
so that locally committed merge commits will not be flattened
by running 'git pull'.
+
When the value is `interactive`, the rebase is run in interactive mode.
+
*NOTE*: this is a possibly dangerous operation; do *not* use
it unless you understand the implications (see linkgit:git-rebase[1]
for details).
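+
For example, to rebase by default while keeping local merge commits via
`--rebase-merges`, a configuration file could contain:
+
--------
[pull]
	rebase = merges
--------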
pull.octopus::
The default merge strategy to use when pulling multiple branches
at once.
pull.twohead::
The default merge strategy to use when pulling a single branch.
push.default::
Defines the action `git push` should take if no refspec is
explicitly given. Different values are well-suited for
specific workflows; for instance, in a purely central workflow
(i.e. the fetch source is equal to the push destination),
`upstream` is probably what you want. Possible values are:
+
--
* `nothing` - do not push anything (error out) unless a refspec is
explicitly given. This is primarily meant for people who want to
avoid mistakes by always being explicit.
* `current` - push the current branch to update a branch with the same
name on the receiving end. Works in both central and non-central
workflows.
* `upstream` - push the current branch back to the branch whose
changes are usually integrated into the current branch (which is
called `@{upstream}`). This mode only makes sense if you are
pushing to the same repository you would normally pull from
(i.e. central workflow).
* `tracking` - This is a deprecated synonym for `upstream`.
* `simple` - in centralized workflow, work like `upstream` with an
added safety to refuse to push if the upstream branch's name is
different from the local one.
+
When pushing to a remote that is different from the remote you normally
pull from, work as `current`. This is the safest option and is suited
for beginners.
+
This mode has become the default in Git 2.0.
* `matching` - push all branches having the same name on both ends.
This makes the repository you are pushing to remember the set of
branches that will be pushed out (e.g. if you always push 'maint'
and 'master' there and no other branches, the repository you push
to will have these two branches, and your local 'maint' and
'master' will be pushed there).
+
To use this mode effectively, you have to make sure _all_ the
branches you would push out are ready to be pushed out before
running 'git push', as the whole point of this mode is to allow you
to push all of the branches in one go. If you usually finish work
on only one branch and push out the result, while other branches are
unfinished, this mode is not for you. Also this mode is not
suitable for pushing into a shared central repository, as other
people may add new branches there, or update the tip of existing
branches outside your control.
+
This used to be the default, but not since Git 2.0 (`simple` is the
new default).
--
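+
For example, to select the `simple` policy explicitly (it is also the
default since Git 2.0):
+
--------
git config --global push.default simple
--------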
push.followTags::
	If set to true, enable the `--follow-tags` option by default. You
may override this configuration at time of push by specifying
`--no-follow-tags`.
push.gpgSign::
May be set to a boolean value, or the string 'if-asked'. A true
value causes all pushes to be GPG signed, as if `--signed` is
passed to linkgit:git-push[1]. The string 'if-asked' causes
pushes to be signed if the server supports it, as if
`--signed=if-asked` is passed to 'git push'. A false value may
override a value from a lower-priority config file. An explicit
command-line flag always overrides this config option.
push.pushOption::
When no `--push-option=<option>` argument is given from the
command line, `git push` behaves as if each <value> of
this variable is given as `--push-option=<value>`.
+
This is a multi-valued variable, and an empty value can be used in a
higher priority configuration file (e.g. `.git/config` in a
repository) to clear the values inherited from lower priority
configuration files (e.g. `$HOME/.gitconfig`).
+
--
Example:

/etc/gitconfig
  push.pushoption = a
  push.pushoption = b

~/.gitconfig
  push.pushoption = c

repo/.git/config
  push.pushoption =
  push.pushoption = b

This will result in only b (a and c are cleared).
--
push.recurseSubmodules::
Make sure all submodule commits used by the revisions to be pushed
are available on a remote-tracking branch. If the value is 'check'
then Git will verify that all submodule commits that changed in the
revisions to be pushed are available on at least one remote of the
submodule. If any commits are missing, the push will be aborted and
exit with non-zero status. If the value is 'on-demand' then all
submodules that changed in the revisions to be pushed will be
	pushed. If on-demand was not able to push all necessary revisions,
it will also be aborted and exit with non-zero status. If the value
is 'no' then default behavior of ignoring submodules when pushing
is retained. You may override this configuration at time of push by
specifying '--recurse-submodules=check|on-demand|no'.
include::rebase-config.txt[]
receive.advertiseAtomic::
By default, git-receive-pack will advertise the atomic push
capability to its clients. If you don't want to advertise this
capability, set this variable to false.
receive.advertisePushOptions::
When set to true, git-receive-pack will advertise the push options
capability to its clients. False by default.
receive.autogc::
By default, git-receive-pack will run "git-gc --auto" after
receiving data from git-push and updating refs. You can stop
it by setting this variable to false.
receive.certNonceSeed::
By setting this variable to a string, `git receive-pack`
	will accept a `git push --signed` and verify it by using
a "nonce" protected by HMAC using this string as a secret
key.
receive.certNonceSlop::
When a `git push --signed` sent a push certificate with a
"nonce" that was issued by a receive-pack serving the same
repository within this many seconds, export the "nonce"
found in the certificate to `GIT_PUSH_CERT_NONCE` to the
hooks (instead of what the receive-pack asked the sending
	side to include). This may make it a bit easier to write checks
	in `pre-receive` and `post-receive` hooks. Instead of checking the
	`GIT_PUSH_CERT_NONCE_SLOP` environment variable that records by
	how many seconds the nonce is stale to decide whether they want
	to accept the certificate, the hooks can simply check that
	`GIT_PUSH_CERT_NONCE_STATUS` is `OK`.
receive.fsckObjects::
If it is set to true, git-receive-pack will check all received
objects. It will abort in the case of a malformed object or a
	broken link. The result of an abort is only dangling objects.
Defaults to false. If not set, the value of `transfer.fsckObjects`
is used instead.
receive.fsck.<msg-id>::
When `receive.fsckObjects` is set to true, errors can be switched
to warnings and vice versa by configuring the `receive.fsck.<msg-id>`
setting where the `<msg-id>` is the fsck message ID and the value
is one of `error`, `warn` or `ignore`. For convenience, fsck prefixes
the error/warning with the message ID, e.g. "missingEmail: invalid
author/committer line - missing email" means that setting
`receive.fsck.missingEmail = ignore` will hide that issue.
+
This feature is intended to support working with legacy repositories
which would not pass pushing when `receive.fsckObjects = true`, allowing
the host to accept repositories with certain known issues but still catch
other issues.
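+
For instance, a host that checks incoming objects but tolerates the
`missingEmail` issue mentioned above might be configured like this:
+
--------
[receive]
	fsckObjects = true
[receive "fsck"]
	missingEmail = ignore
--------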
receive.fsck.skipList::
The path to a sorted list of object names (i.e. one SHA-1 per
line) that are known to be broken in a non-fatal way and should
be ignored. This feature is useful when an established project
should be accepted despite early commits containing errors that
can be safely ignored such as invalid committer email addresses.
Note: corrupt objects cannot be skipped with this setting.
receive.keepAlive::
After receiving the pack from the client, `receive-pack` may
produce no output (if `--quiet` was specified) while processing
the pack, causing some networks to drop the TCP connection.
With this option set, if `receive-pack` does not transmit
any data in this phase for `receive.keepAlive` seconds, it will
send a short keepalive packet. The default is 5 seconds; set
to 0 to disable keepalives entirely.
receive.unpackLimit::
If the number of objects received in a push is below this
limit then the objects will be unpacked into loose object
files. However if the number of received objects equals or
exceeds this limit then the received pack will be stored as
a pack, after adding any missing delta bases. Storing the
pack from a push can make the push operation complete faster,
especially on slow filesystems. If not set, the value of
`transfer.unpackLimit` is used instead.
receive.maxInputSize::
If the size of the incoming pack stream is larger than this
limit, then git-receive-pack will error out, instead of
accepting the pack file. If not set or set to 0, then the size
is unlimited.
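+
As an illustration, a server operator could reject pushes whose pack
stream exceeds roughly 2 GiB; the limit is a byte count and the value
below is only an example:
+
--------
git config --system receive.maxInputSize 2147483648
--------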
receive.denyDeletes::
If set to true, git-receive-pack will deny a ref update that deletes
the ref. Use this to prevent such a ref deletion via a push.
receive.denyDeleteCurrent::
If set to true, git-receive-pack will deny a ref update that
deletes the currently checked out branch of a non-bare repository.
receive.denyCurrentBranch::
If set to true or "refuse", git-receive-pack will deny a ref update
to the currently checked out branch of a non-bare repository.
Such a push is potentially dangerous because it brings the HEAD
out of sync with the index and working tree. If set to "warn",
print a warning of such a push to stderr, but allow the push to
proceed. If set to false or "ignore", allow such pushes with no
message. Defaults to "refuse".
+
Another option is "updateInstead" which will update the working
tree if pushing into the current branch. This option is
intended for synchronizing working directories when one side is not easily
accessible via interactive ssh (e.g. a live web site, hence the requirement
that the working directory be clean). This mode also comes in handy when
developing inside a VM to test and fix code on different operating systems.
+
By default, "updateInstead" will refuse the push if the working tree or
the index have any difference from the HEAD, but the `push-to-checkout`
hook can be used to customize this. See linkgit:githooks[5].
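+
As a sketch, a repository that is deployed by pushing into its
checked-out branch (e.g. a live site) could be configured like this,
assuming its work tree is kept clean as described above:
+
--------
[receive]
	denyCurrentBranch = updateInstead
--------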
receive.denyNonFastForwards::
If set to true, git-receive-pack will deny a ref update which is
not a fast-forward. Use this to prevent such an update via a push,
even if that push is forced. This configuration variable is
set when initializing a shared repository.
receive.hideRefs::
This variable is the same as `transfer.hideRefs`, but applies
only to `receive-pack` (and so affects pushes, but not fetches).
An attempt to update or delete a hidden ref by `git push` is
rejected.
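+
For example, to keep an internal hierarchy out of the advertisement seen
by pushers (the name `refs/internal` is only an illustration):
+
--------
[receive]
	hideRefs = refs/internal
--------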
receive.updateServerInfo::
If set to true, git-receive-pack will run git-update-server-info
after receiving data from git-push and updating refs.
receive.shallowUpdate::
If set to true, .git/shallow can be updated when new refs
require new shallow roots. Otherwise those refs are rejected.
remote.pushDefault::
The remote to push to by default. Overrides
`branch.<name>.remote` for all branches, and is overridden by
`branch.<name>.pushRemote` for specific branches.
remote.<name>.url::
The URL of a remote repository. See linkgit:git-fetch[1] or
linkgit:git-push[1].
remote.<name>.pushurl::
The push URL of a remote repository. See linkgit:git-push[1].
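+
For instance, a remote may be fetched over HTTPS but pushed over SSH;
the remote name and URLs below are placeholders:
+
--------
[remote "origin"]
	url = https://example.com/project.git
	pushurl = ssh://git@example.com/project.git
--------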
remote.<name>.proxy::
For remotes that require curl (http, https and ftp), the URL to
the proxy to use for that remote. Set to the empty string to
disable proxying for that remote.
remote.<name>.proxyAuthMethod::
For remotes that require curl (http, https and ftp), the method to use for
authenticating against the proxy in use (probably set in
`remote.<name>.proxy`). See `http.proxyAuthMethod`.
remote.<name>.fetch::
The default set of "refspec" for linkgit:git-fetch[1]. See
linkgit:git-fetch[1].
remote.<name>.push::
The default set of "refspec" for linkgit:git-push[1]. See
linkgit:git-push[1].
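+
A typical configuration might look like the following sketch, where the
remote name `origin` and the branch `main` are placeholders; the `fetch`
refspec shown is the one `git clone` sets up by default:
+
--------
[remote "origin"]
	fetch = +refs/heads/*:refs/remotes/origin/*
	push = refs/heads/main:refs/heads/main
--------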
remote.<name>.mirror::
If true, pushing to this remote will automatically behave
as if the `--mirror` option was given on the command line.
remote.<name>.skipDefaultUpdate::
If true, this remote will be skipped by default when updating
using linkgit:git-fetch[1] or the `update` subcommand of
linkgit:git-remote[1].
remote.<name>.skipFetchAll::
If true, this remote will be skipped by default when updating
using linkgit:git-fetch[1] or the `update` subcommand of
linkgit:git-remote[1].
remote.<name>.receivepack::
The default program to execute on the remote side when pushing. See
option --receive-pack of linkgit:git-push[1].
remote.<name>.uploadpack::
The default program to execute on the remote side when fetching. See
option --upload-pack of linkgit:git-fetch-pack[1].
remote.<name>.tagOpt::
Setting this value to --no-tags disables automatic tag following when
fetching from remote <name>. Setting it to --tags will fetch every
tag from remote <name>, even if they are not reachable from remote
branch heads. Passing these flags directly to linkgit:git-fetch[1] can
override this setting. See options --tags and --no-tags of
linkgit:git-fetch[1].
remote.<name>.vcs::
Setting this to a value <vcs> will cause Git to interact with
the remote with the git-remote-<vcs> helper.
remote.<name>.prune::
When set to true, fetching from this remote by default will also
remove any remote-tracking references that no longer exist on the
remote (as if the `--prune` option was given on the command line).
Overrides `fetch.prune` settings, if any.
remote.<name>.pruneTags::
When set to true, fetching from this remote by default will also
remove any local tags that no longer exist on the remote if pruning
is activated in general via `remote.<name>.prune`, `fetch.prune` or
`--prune`. Overrides `fetch.pruneTags` settings, if any.
+
See also `remote.<name>.prune` and the PRUNING section of
linkgit:git-fetch[1].
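+
For example, with the following illustrative settings for a remote named
`origin`, a plain `git fetch origin` also removes stale remote-tracking
branches and deleted tags:
+
--------
[remote "origin"]
	prune = true
	pruneTags = true
--------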
remotes.<group>::
The list of remotes which are fetched by "git remote update
<group>". See linkgit:git-remote[1].
repack.useDeltaBaseOffset::
By default, linkgit:git-repack[1] creates packs that use
delta-base offset. If you need to share your repository with
Git older than version 1.4.4, either directly or via a dumb
protocol such as http, then you need to set this option to
"false" and repack. Access from old Git versions over the
native protocol is unaffected by this option.
repack.packKeptObjects::
If set to true, makes `git repack` act as if
`--pack-kept-objects` was passed. See linkgit:git-repack[1] for
details. Defaults to `false` normally, but `true` if a bitmap
index is being written (either via `--write-bitmap-index` or
`repack.writeBitmaps`).
repack.writeBitmaps::
When true, git will write a bitmap index when packing all
objects to disk (e.g., when `git repack -a` is run). This
index can speed up the "counting objects" phase of subsequent
packs created for clones and fetches, at the cost of some disk
space and extra time spent on the initial repack. This has
no effect if multiple packfiles are created.
Defaults to false.
rerere.autoUpdate::
When set to true, `git-rerere` updates the index with the
resulting contents after it cleanly resolves conflicts using
previously recorded resolution. Defaults to false.
rerere.enabled::
Activate recording of resolved conflicts, so that identical
conflict hunks can be resolved automatically, should they be
encountered again. By default, linkgit:git-rerere[1] is
enabled if there is an `rr-cache` directory under the
`$GIT_DIR`, e.g. if "rerere" was previously used in the
repository.
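+
To turn the mechanism on explicitly for all of a user's repositories,
one could run, for example:
+
--------
git config --global rerere.enabled true
--------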
sendemail.identity::
A configuration identity. When given, causes values in the
'sendemail.<identity>' subsection to take precedence over
values in the 'sendemail' section. The default identity is
the value of `sendemail.identity`.
sendemail.smtpEncryption::
See linkgit:git-send-email[1] for description. Note that this
setting is not subject to the 'identity' mechanism.
sendemail.smtpssl (deprecated)::
Deprecated alias for 'sendemail.smtpEncryption = ssl'.
sendemail.smtpsslcertpath::
Path to ca-certificates (either a directory or a single file).
Set it to an empty string to disable certificate verification.
sendemail.<identity>.*::
Identity-specific versions of the 'sendemail.*' parameters
found below, taking precedence over those when this
identity is selected, through either the command-line or
`sendemail.identity`.
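+
As a sketch of how identities interact (server names and addresses are
placeholders), the subsection selected by `sendemail.identity` overrides
the generic values:
+
--------
[sendemail]
	identity = work
	smtpServer = smtp.home.example.org
[sendemail "work"]
	smtpServer = smtp.work.example.com
	from = me@work.example.com
--------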
sendemail.aliasesFile::
sendemail.aliasFileType::
sendemail.annotate::
sendemail.bcc::
sendemail.cc::
sendemail.ccCmd::
sendemail.chainReplyTo::
sendemail.confirm::
sendemail.envelopeSender::
sendemail.from::
sendemail.multiEdit::
sendemail.signedoffbycc::
sendemail.smtpPass::
sendemail.suppresscc::
sendemail.suppressFrom::
sendemail.to::
sendemail.tocmd::
sendemail.smtpDomain::
sendemail.smtpServer::
sendemail.smtpServerPort::
sendemail.smtpServerOption::
sendemail.smtpUser::
sendemail.thread::
sendemail.transferEncoding::
sendemail.validate::
sendemail.xmailer::
See linkgit:git-send-email[1] for description.
sendemail.signedoffcc (deprecated)::
Deprecated alias for `sendemail.signedoffbycc`.
sendemail.smtpBatchSize::
Number of messages to be sent per connection; after that, a relogin
will happen. If the value is 0 or undefined, send all messages in
one connection.
See also the `--batch-size` option of linkgit:git-send-email[1].
sendemail.smtpReloginDelay::
Seconds to wait before reconnecting to the SMTP server.
See also the `--relogin-delay` option of linkgit:git-send-email[1].
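+
For example, for a provider that limits the number of messages per SMTP
session, one might send in batches of 100 and pause 5 seconds between
sessions (both values are illustrative):
+
--------
[sendemail]
	smtpBatchSize = 100
	smtpReloginDelay = 5
--------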
showbranch.default::
The default set of branches for linkgit:git-show-branch[1].
See linkgit:git-show-branch[1].
splitIndex.maxPercentChange::
When the split index feature is used, this specifies the
percent of entries the split index can contain compared to the
total number of entries in both the split index and the shared
index before a new shared index is written.
The value should be between 0 and 100. If the value is 0 then
a new shared index is always written; if it is 100, a new
shared index is never written.
By default the value is 20, so a new shared index is written
if the number of entries in the split index would be greater
than 20 percent of the total number of entries.
See linkgit:git-update-index[1].
splitIndex.sharedIndexExpire::
When the split index feature is used, shared index files that
were not modified since the time this variable specifies will
be removed when a new shared index file is created. The value
"now" expires all entries immediately, and "never" suppresses
expiration altogether.
The default value is "2.weeks.ago".
Note that a shared index file is considered modified (for the
purpose of expiration) each time a new split-index file is
either created based on it or read from it.
See linkgit:git-update-index[1].
status.relativePaths::
By default, linkgit:git-status[1] shows paths relative to the
current directory. Setting this variable to `false` shows paths
relative to the repository root (this was the default for Git
prior to v1.5.4).
status.short::
Set to true to enable --short by default in linkgit:git-status[1].
The option --no-short takes precedence over this variable.
status.branch::
Set to true to enable --branch by default in linkgit:git-status[1].
The option --no-branch takes precedence over this variable.
status.displayCommentPrefix::
If set to true, linkgit:git-status[1] will insert a comment
prefix before each output line (starting with
`core.commentChar`, i.e. `#` by default). This was the
behavior of linkgit:git-status[1] in Git 1.8.4 and previous.
Defaults to false.
status.renameLimit::
The number of files to consider when performing rename detection
in linkgit:git-status[1] and linkgit:git-commit[1]. Defaults to
the value of diff.renameLimit.
status.renames::
Whether and how Git detects renames in linkgit:git-status[1] and
linkgit:git-commit[1]. If set to "false", rename detection is
disabled. If set to "true", basic rename detection is enabled.
If set to "copies" or "copy", Git will detect copies, as well.
Defaults to the value of diff.renames.
status.showStash::
If set to true, linkgit:git-status[1] will display the number of
entries currently stashed away.
Defaults to false.
status.showUntrackedFiles::
By default, linkgit:git-status[1] and linkgit:git-commit[1] show
files which are not currently tracked by Git. Directories which
contain only untracked files are shown with the directory name
only. Showing untracked files means that Git needs to lstat() all
the files in the whole repository, which might be slow on some
systems. So, this variable controls how the commands display
the untracked files. Possible values are:
+
--
* `no` - Show no untracked files.
* `normal` - Show untracked files and directories.
* `all` - Show also individual files in untracked directories.
--
+
If this variable is not specified, it defaults to 'normal'.
This variable can be overridden with the -u|--untracked-files option
of linkgit:git-status[1] and linkgit:git-commit[1].
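+
For example, in a repository with a large untracked build directory one
might set the following (and override it per invocation with
`git status --untracked-files=all` when needed):
+
--------
git config status.showUntrackedFiles no
--------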
status.submoduleSummary::
Defaults to false.
If this is set to a non-zero number or true (identical to -1 or an
unlimited number), the submodule summary will be enabled and a
summary of commits for modified submodules will be shown (see
--summary-limit option of linkgit:git-submodule[1]). Please note
that the summary output command will be suppressed for all
submodules when `diff.ignoreSubmodules` is set to 'all' or only
for those submodules where `submodule.<name>.ignore=all`. The only
exception to that rule is that status and commit will show staged
submodule changes. To
also view the summary for ignored submodules you can either use
the --ignore-submodules=dirty command-line option or the 'git
submodule summary' command, which shows a similar output but does
not honor these settings.
stash.showPatch::
If this is set to true, the `git stash show` command without an
option will show the stash entry in patch form. Defaults to false.
See description of 'show' command in linkgit:git-stash[1].
stash.showStat::
If this is set to true, the `git stash show` command without an
option will show diffstat of the stash entry. Defaults to true.
See description of 'show' command in linkgit:git-stash[1].
submodule.<name>.url::
The URL for a submodule. This variable is copied from the .gitmodules
file to the git config via 'git submodule init'. The user can change
the configured URL before obtaining the submodule via 'git submodule
update'. If neither submodule.<name>.active nor submodule.active is
set, the presence of this variable is used as a fallback to indicate
whether the submodule is of interest to git commands.
See linkgit:git-submodule[1] and linkgit:gitmodules[5] for details.
submodule.<name>.update::
The method by which a submodule is updated by 'git submodule update',
which is the only affected command, others such as
'git checkout --recurse-submodules' are unaffected. It exists for
historical reasons, from the days when 'git submodule' was the only
command to interact with submodules; settings like `submodule.active`
and `pull.rebase` are more specific. It is populated by
`git submodule init` from the linkgit:gitmodules[5] file.
See description of 'update' command in linkgit:git-submodule[1].
submodule.<name>.branch::
The remote branch name for a submodule, used by `git submodule
update --remote`. Set this option to override the value found in
the `.gitmodules` file. See linkgit:git-submodule[1] and
linkgit:gitmodules[5] for details.
submodule.<name>.fetchRecurseSubmodules::
This option can be used to control recursive fetching of this
submodule. It can be overridden by using the --[no-]recurse-submodules
command-line option to "git fetch" and "git pull".
This setting will override that from in the linkgit:gitmodules[5]
file.
submodule.<name>.ignore::
Defines under what circumstances "git status" and the diff family show
a submodule as modified. When set to "all", it will never be considered
modified (but it will nonetheless show up in the output of status and
commit when it has been staged), "dirty" will ignore all changes
to the submodule's work tree and
takes only differences between the HEAD of the submodule and the commit
recorded in the superproject into account. "untracked" will additionally
let submodules with modified tracked files in their work tree show up.
Using "none" (the default when this option is not set) also shows
submodules that have untracked files in their work tree as changed.
This setting overrides any setting made in .gitmodules for this submodule,
both settings can be overridden on the command line by using the
"--ignore-submodules" option. The 'git submodule' commands are not
affected by this setting.
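+
For instance, to stop "git status" from scanning the work tree of a
large submodule (the name `mylib` is a placeholder) while still
reporting when its recorded commit changes:
+
--------
git config submodule.mylib.ignore dirty
--------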
submodule.<name>.active::
Boolean value indicating if the submodule is of interest to git
commands. This config option takes precedence over the
submodule.active config option. See linkgit:gitsubmodules[7] for
details.
submodule.active::
A repeated field which contains a pathspec used to match against a
submodule's path to determine if the submodule is of interest to git
commands. See linkgit:gitsubmodules[7] for details.
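+
As a sketch (the pathspec and submodule name are placeholders), the
following marks every submodule under `lib/` as active while explicitly
deactivating one of them, since the per-name setting takes precedence:
+
--------
git config --add submodule.active "lib/"
git config submodule.mysub.active false
--------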
submodule.recurse::
Specifies if commands recurse into submodules by default. This
applies to all commands that have a `--recurse-submodules` option,
except `clone`.
Defaults to false.
submodule.fetchJobs::
Specifies how many submodules are fetched/cloned at the same time.
A positive integer allows up to that number of submodules fetched
in parallel. A value of 0 will give some reasonable default.
If unset, it defaults to 1.
submodule.alternateLocation::
Specifies how the submodules obtain alternates when submodules are
cloned. Possible values are `no`, `superproject`.
By default `no` is assumed, which doesn't add references. When the
value is set to `superproject` the submodule to be cloned computes
its alternates location relative to the superproject's alternate.
submodule.alternateErrorStrategy::
Specifies how to treat errors with the alternates for a submodule
as computed via `submodule.alternateLocation`. Possible values are
`ignore`, `info`, `die`. Default is `die`.
tag.forceSignAnnotated::
A boolean to specify whether annotated tags created should be GPG signed.
If `--annotate` is specified on the command line, it takes
precedence over this option.
tag.sort::
This variable controls the sort ordering of tags when displayed by
linkgit:git-tag[1]. Without the "--sort=<value>" option provided, the
value of this variable will be used as the default.
tar.umask::
This variable can be used to restrict the permission bits of
tar archive entries. The default is 0002, which turns off the
world write bit. The special value "user" indicates that the
archiving user's umask will be used instead. See umask(2) and
linkgit:git-archive[1].
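+
For example, to clear both the group and world write bits in generated
archive entries (a sketch):
+
--------
[tar]
        umask = 022
--------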
transfer.fsckObjects::
When `fetch.fsckObjects` or `receive.fsckObjects` are
not set, the value of this variable is used instead.
Defaults to false.
transfer.hideRefs::
String(s) `receive-pack` and `upload-pack` use to decide which
refs to omit from their initial advertisements. Use more than
one definition to specify multiple prefix strings. A ref that is
under the hierarchies listed in the value of this variable is
excluded, and is hidden when responding to `git push` or `git
fetch`. See `receive.hideRefs` and `uploadpack.hideRefs` for
program-specific versions of this config.
+
You may also include a `!` in front of the ref name to negate the entry,
explicitly exposing it, even if an earlier entry marked it as hidden.
If you have multiple hideRefs values, later entries override earlier ones
(and entries in more-specific config files override less-specific ones).
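+
For example, a system-wide config could hide an entire hierarchy while a
repository's own config re-exposes one ref underneath it (a sketch with
hypothetical ref names):
+
--------
# in /etc/gitconfig
[transfer]
        hideRefs = refs/secret
# in an individual repository's .git/config
[transfer]
        hideRefs = !refs/secret/not-so-secret
--------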
+
If a namespace is in use, the namespace prefix is stripped from each
reference before it is matched against `transfer.hideRefs` patterns.
For example, if `refs/heads/master` is specified in `transfer.hideRefs` and
the current namespace is `foo`, then `refs/namespaces/foo/refs/heads/master`
is omitted from the advertisements but `refs/heads/master` and
`refs/namespaces/bar/refs/heads/master` are still advertised as so-called
"have" lines. In order to match refs before stripping, add a `^` in front of
the ref name. If you combine `!` and `^`, `!` must be specified first.
+
Even if you hide refs, a client may still be able to steal the target
objects via the techniques described in the "SECURITY" section of the
linkgit:gitnamespaces[7] man page; it's best to keep private data in a
separate repository.
transfer.unpackLimit::
When `fetch.unpackLimit` or `receive.unpackLimit` are
not set, the value of this variable is used instead.
The default value is 100.
uploadarchive.allowUnreachable::
If true, allow clients to use `git archive --remote` to request
any tree, whether reachable from the ref tips or not. See the
discussion in the "SECURITY" section of
linkgit:git-upload-archive[1] for more details. Defaults to
`false`.
uploadpack.hideRefs::
This variable is the same as `transfer.hideRefs`, but applies
only to `upload-pack` (and so affects only fetches, not pushes).
An attempt to fetch a hidden ref by `git fetch` will fail. See
also `uploadpack.allowTipSHA1InWant`.
uploadpack.allowTipSHA1InWant::
When `uploadpack.hideRefs` is in effect, allow `upload-pack`
to accept a fetch request that asks for an object at the tip
of a hidden ref (by default, such a request is rejected).
See also `uploadpack.hideRefs`. Even if this is false, a client
may be able to steal objects via the techniques described in the
"SECURITY" section of the linkgit:gitnamespaces[7] man page; it's
best to keep private data in a separate repository.
uploadpack.allowReachableSHA1InWant::
Allow `upload-pack` to accept a fetch request that asks for an
object that is reachable from any ref tip. However, note that
calculating object reachability is computationally expensive.
Defaults to `false`. Even if this is false, a client may be able
to steal objects via the techniques described in the "SECURITY"
section of the linkgit:gitnamespaces[7] man page; it's best to
keep private data in a separate repository.
uploadpack.allowAnySHA1InWant::
Allow `upload-pack` to accept a fetch request that asks for any
object at all.
Defaults to `false`.
uploadpack.keepAlive::
When `upload-pack` has started `pack-objects`, there may be a
quiet period while `pack-objects` prepares the pack. Normally
it would output progress information, but if `--quiet` was used
for the fetch, `pack-objects` will output nothing at all until
the pack data begins. Some clients and networks may consider
the server to be hung and give up. Setting this option instructs
`upload-pack` to send an empty keepalive packet every
`uploadpack.keepAlive` seconds. Setting this option to 0
disables keepalive packets entirely. The default is 5 seconds.
uploadpack.packObjectsHook::
If this option is set, when `upload-pack` would run
`git pack-objects` to create a packfile for a client, it will
run this shell command instead. The `pack-objects` command and
arguments it _would_ have run (including the `git pack-objects`
at the beginning) are appended to the shell command. The stdin
and stdout of the hook are treated as if `pack-objects` itself
was run. I.e., `upload-pack` will feed input intended for
`pack-objects` to the hook, and expects a completed packfile on
stdout.
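+
For example, the hook can be a small wrapper that logs each request and then
runs the command it was given (a sketch; the script and log paths are
hypothetical). With `uploadpack.packObjectsHook` set to
`/usr/local/bin/pack-objects-log` in a trusted (non-repository) config, the
script could read:
+
--------
#!/bin/sh
# Record the pack-objects command line, then run it with stdin/stdout
# passed through untouched.
echo "$*" >>/var/log/git-pack-objects.log
exec "$@"
--------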
+
Note that this configuration variable is ignored if it is seen in the
repository-level config (this is a safety measure against fetching from
untrusted repositories).
uploadpack.allowFilter::
If this option is set, `upload-pack` will support partial
clone and partial fetch object filtering.
uploadpack.allowRefInWant::
If this option is set, `upload-pack` will support the `ref-in-want`
feature of the protocol version 2 `fetch` command. This feature
is intended for the benefit of load-balanced servers which may
not have the same view of what OIDs their refs point to due to
replication delay.
url.<base>.insteadOf::
Any URL that starts with this value will be rewritten to
start, instead, with <base>. In cases where some site serves a
large number of repositories, and serves them with multiple
access methods, and some users need to use different access
methods, this feature allows people to specify any of the
equivalent URLs and have Git automatically rewrite the URL to
the best alternative for the particular user, even for a
never-before-seen repository on the site. When more than one
insteadOf string matches a given URL, the longest match is used.
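+
For example, with a (hypothetical) host that serves the same repositories
over both HTTPS and SSH:
+
--------
[url "ssh://git@example.com/"]
        insteadOf = https://example.com/
--------
+
With this, `git clone https://example.com/project.git` actually fetches
from `ssh://git@example.com/project.git`.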
+
Note that any protocol restrictions will be applied to the rewritten
URL. If the rewrite changes the URL to use a custom protocol or remote
helper, you may need to adjust the `protocol.*.allow` config to permit
the request. In particular, protocols you expect to use for submodules
must be set to `always` rather than the default of `user`. See the
description of `protocol.allow` above.
url.<base>.pushInsteadOf::
Any URL that starts with this value will not be pushed to;
instead, it will be rewritten to start with <base>, and the
resulting URL will be pushed to. In cases where some site serves
a large number of repositories, and serves them with multiple
access methods, some of which do not allow push, this feature
allows people to specify a pull-only URL and have Git
automatically use an appropriate URL to push, even for a
never-before-seen repository on the site. When more than one
pushInsteadOf string matches a given URL, the longest match is
used. If a remote has an explicit pushurl, Git will ignore this
setting for that remote.
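+
For example, to fetch anonymously over HTTPS but push over SSH (again with
a hypothetical host):
+
--------
[url "ssh://git@example.com/"]
        pushInsteadOf = https://example.com/
--------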
user.email::
Your email address to be recorded in any newly created commits.
Can be overridden by the `GIT_AUTHOR_EMAIL`, `GIT_COMMITTER_EMAIL`, and
`EMAIL` environment variables. See linkgit:git-commit-tree[1].
user.name::
Your full name to be recorded in any newly created commits.
Can be overridden by the `GIT_AUTHOR_NAME` and `GIT_COMMITTER_NAME`
environment variables. See linkgit:git-commit-tree[1].
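+
For example (a sketch with placeholder identity values):
+
--------
[user]
        name = A U Thor
        email = author@example.com
--------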
user.useConfigOnly::
Instruct Git to avoid trying to guess defaults for `user.email`
and `user.name`, and instead retrieve the values only from the
configuration. For example, if you have multiple email addresses
and would like to use a different one for each repository, then
with this configuration option set to `true` in the global config
along with a name, Git will prompt you to set up an email before
making new commits in a newly cloned repository.
Defaults to `false`.
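+
A sketch of that workflow (names, addresses and the repository path are
placeholders):
+
--------
$ git config --global user.useConfigOnly true
$ git config --global user.name "A U Thor"
$ cd cloned-repo
$ git config user.email author@work.example.com
--------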
user.signingKey::
If linkgit:git-tag[1] or linkgit:git-commit[1] is not selecting the
key you want it to automatically when creating a signed tag or
commit, you can override the default selection with this variable.
This option is passed unchanged to gpg's --local-user parameter,
so you may specify a key using any method that gpg supports.
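+
For example (the key ID shown is a placeholder):
+
--------
[user]
        signingKey = ABCD1234ABCD1234
--------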
versionsort.prereleaseSuffix (deprecated)::
Deprecated alias for `versionsort.suffix`. Ignored if
`versionsort.suffix` is set.
versionsort.suffix::
Even when version sort is used in linkgit:git-tag[1], tagnames
with the same base version but different suffixes are still sorted
lexicographically, resulting in prerelease tags appearing
after the main release (e.g. "1.0-rc1" after "1.0"). This
variable can be specified to determine the sorting order of tags
with different suffixes.
+
By specifying a single suffix in this variable, any tagname containing
that suffix will appear before the corresponding main release. E.g. if
the variable is set to "-rc", then all "1.0-rcX" tags will appear before
"1.0". If specified multiple times, once per suffix, then the order of
suffixes in the configuration will determine the sorting order of tagnames
with those suffixes. E.g. if "-pre" appears before "-rc" in the
configuration, then all "1.0-preX" tags will be listed before any
"1.0-rcX" tags. The placement of the main release tag relative to tags
with various suffixes can be determined by specifying the empty suffix
among those other suffixes. E.g. if the suffixes "-rc", "", "-ck" and
"-bfs" appear in the configuration in this order, then all "v4.8-rcX" tags
are listed first, followed by "v4.8", then "v4.8-ckX" and finally
"v4.8-bfsX".
+
If more than one suffix matches the same tagname, then that tagname will
be sorted according to the suffix which starts at the earliest position in
the tagname. If more than one different matching suffix starts at
that earliest position, then that tagname will be sorted according to the
longest of those suffixes.
The sorting order between different suffixes is undefined if they are
in multiple config files.
web.browser::
Specify a web browser that may be used by some commands.
Currently only linkgit:git-instaweb[1] and linkgit:git-help[1]
may use it.
worktree.guessRemote::
With `add`, if no branch argument is given and none of `-b`, `-B`,
or `--detach` is used, the command defaults to
creating a new branch from HEAD. If `worktree.guessRemote` is
set to true, `worktree add` tries to find a remote-tracking
branch whose name uniquely matches the new branch name. If
such a branch exists, it is checked out and set as "upstream"
for the new branch. If no such match can be found, it falls
back to creating a new branch from the current HEAD.
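+
For example (a sketch with a hypothetical branch name):
+
--------
$ git config --global worktree.guessRemote true
$ git worktree add ../topic
--------
+
If exactly one remote-tracking branch such as `origin/topic` matches, the
new worktree checks it out and uses it as the upstream of the new `topic`
branch.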