git/t/t7900-maintenance.sh


maintenance: create basic maintenance runner

The 'gc' builtin is our current entrypoint for automatically maintaining a repository. This one tool does many operations, such as repacking the repository, packing refs, and rewriting the commit-graph file. The name implies it performs "garbage collection" which means several different things, and some users may not want to use this operation that rewrites the entire object database.

Create a new 'maintenance' builtin that will become a more general-purpose command. To start, it will only support the 'run' subcommand, but will later expand to add subcommands for scheduling maintenance in the background.

For now, the 'maintenance' builtin is a thin shim over the 'gc' builtin. In fact, the only option is the '--auto' toggle, which is handed directly to the 'gc' builtin. The current change is isolated to this simple operation to prevent more interesting logic from being lost in all of the boilerplate of adding a new builtin.

Use existing builtin/gc.c file because we want to share code between the two builtins. It is possible that we will have 'maintenance' replace the 'gc' builtin entirely at some point, leaving 'git gc' as an alias for some specific arguments to 'git maintenance run'.

Create a new test_subcommand helper that allows us to test if a certain subcommand was run. It requires storing the GIT_TRACE2_EVENT logs in a file. A negation mode is available that will be used in later tests.

Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17 18:11:42 +00:00
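The GIT_TRACE2_EVENT log written by these tests is a stream of JSON events; a child process started by the traced command appears as a "child_start" event whose "argv" array lists the subcommand's arguments. The test_subcommand helper lives in test-lib-functions.sh; as a rough sketch (an approximation, not the helper's exact definition), it boils down to grepping the event stream for that argv array:

	test_subcommand () {
		local negate=
		if test "$1" = "!"
		then
			negate=t
			shift
		fi
		# Build the argv fragment, e.g. "git","gc","--quiet",
		# as it appears in the JSON events read from stdin
		local expr=$(printf '"%s",' "$@")
		expr=${expr%,}
		if test -n "$negate"
		then
			! grep "\[$expr\]"
		else
			grep "\[$expr\]"
		fi
	}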
#!/bin/sh
test_description='git maintenance builtin'
. ./test-lib.sh
GIT_TEST_COMMIT_GRAPH=0
GIT_TEST_MULTI_PACK_INDEX=0
maintenance: use launchctl on macOS

The existing mechanism for scheduling background maintenance is done through cron. The 'crontab -e' command allows updating the schedule while cron itself runs those commands. While this is technically supported by macOS, it has some significant deficiencies:

1. Every run of 'crontab -e' must request elevated privileges through the user interface. When running 'git maintenance start' from the Terminal app, it presents a dialog box saying "Terminal.app would like to administer your computer. Administration can include modifying passwords, networking, and system settings." This is more alarming than what we are hoping to achieve.

   If this alert had some information about how "git" is trying to run "crontab" then we would have some reason to believe that this dialog might be fine. However, it also doesn't help that some scenarios just leave Git waiting for a response without presenting anything to the user. I experienced this when executing the command from a Bash terminal view inside Visual Studio Code.

2. While cron initializes a user environment enough for "git config --global --show-origin" to show the correct config file information, it does not set up the environment enough for Git Credential Manager Core to load credentials during a 'prefetch' task. My prefetches against private repositories required re-authenticating through UI pop-ups in a way that should not be required.

The solution is to switch from cron to the Apple-recommended [1] 'launchd' tool.

[1] https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/ScheduledJobs.html

The basics of this tool are that we need to create XML-formatted "plist" files inside "~/Library/LaunchAgents/" and then use the 'launchctl' tool to make launchd aware of them. The plist files include all of the scheduling information, along with the command-line arguments split across an array of <string> tags.

For example, here is my plist file for the weekly scheduled tasks:

	<?xml version="1.0" encoding="UTF-8"?>
	<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
	<plist version="1.0">
	<dict>
	    <key>Label</key><string>org.git-scm.git.weekly</string>
	    <key>ProgramArguments</key>
	    <array>
	        <string>/usr/local/libexec/git-core/git</string>
	        <string>--exec-path=/usr/local/libexec/git-core</string>
	        <string>for-each-repo</string>
	        <string>--config=maintenance.repo</string>
	        <string>maintenance</string>
	        <string>run</string>
	        <string>--schedule=weekly</string>
	    </array>
	    <key>StartCalendarInterval</key>
	    <array>
	        <dict>
	            <key>Day</key><integer>0</integer>
	            <key>Hour</key><integer>0</integer>
	            <key>Minute</key><integer>0</integer>
	        </dict>
	    </array>
	</dict>
	</plist>

The schedules for the daily and hourly tasks are more complicated since we need to use an array for the StartCalendarInterval with an entry for each of the six days other than the 0th day (to avoid colliding with the weekly task), and each of the 23 hours other than the 0th hour (to avoid colliding with the daily task).

The "Label" value is currently filled with "org.git-scm.git.X" where X is the frequency. We need a different plist file for each frequency.

The launchctl command needs to be aligned with a user id in order to initialize the command environment. This must be done using the 'launchctl bootstrap' subcommand. This subcommand is new as of macOS 10.11, which was released in September 2015. Before that release the 'launchctl load' subcommand was recommended. The best source of information on this transition I have seen is available at [2]. The current design does not preclude a future version that detects the available features of 'launchctl' to use the older commands. However, it is best to rely on the newest version since Apple might completely remove the deprecated version on short notice.

[2] https://babodee.wordpress.com/2016/04/09/launchctl-2-0-syntax/

To remove a schedule, we must run 'launchctl bootout' with a valid plist file. We also need to 'bootout' a task before the 'bootstrap' subcommand will succeed, if such a task already exists.

The need for a user id requires us to run 'id -u' which works on POSIX systems but not Windows. Further, the need for fully-qualified path names including $HOME behaves differently in the Git internals and the external test suite. The $HOME variable starts with "C:\..." instead of the "/c/..." that is provided by Git in these subcommands. The test therefore has a prerequisite that we are not on Windows. The cross-platform logic still allows us to test the macOS logic on a Linux machine.

We can verify the commands that were run by 'git maintenance start' and 'git maintenance stop' by injecting a script that writes the command-line arguments into GIT_TEST_MAINT_SCHEDULER.

An earlier version of this patch accidentally had an opening "<dict>" tag when it should have had a closing "</dict>" tag. This was caught during manual testing with actual 'launchctl' commands, but we do not want to update developers' tasks when running tests. It appears that macOS includes the "xmllint" tool which can verify the XML format. This is useful for any system that might contain the tool, so use it whenever it is available.

We strive to make these tests work on all platforms, but Windows caused some headaches. In particular, the value of getuid() called by the C code is not guaranteed to be the same as `$(id -u)` invoked by a test. This is because `git.exe` is a native Windows program, whereas the utility programs run by the test script mostly utilize the MSYS2 runtime, which emulates a POSIX-like environment. Since the purpose of the test is to check that the input to the hook is well-formed, the actual user ID is immaterial, thus we can work around the problem by making the test UID-agnostic. Another subtle issue is the $HOME environment variable being a Windows-style path instead of a Unix-style path. We can be more flexible here instead of expecting exact path matches.

Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Co-authored-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-05 13:08:27 +00:00
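For reference, the bootout/bootstrap sequence described above amounts to roughly the following per frequency (a sketch; the UID lookup and plist path are illustrative, and the real commands are issued by 'git maintenance start' and 'stop'):

	# drop any existing definition, then register the new one
	launchctl bootout gui/$(id -u) ~/Library/LaunchAgents/org.git-scm.git.weekly.plist || true
	launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/org.git-scm.git.weekly.plist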
test_lazy_prereq XMLLINT '
xmllint --version
'
test_xmllint () {
if test_have_prereq XMLLINT
then
xmllint --noout "$@"
else
true
fi
}
maintenance: add support for systemd timers on Linux

The existing mechanism for scheduling background maintenance is done through cron. On Linux systems managed by systemd, systemd provides an alternative to schedule recurring tasks: systemd timers.

The main motivations to implement systemd timers in addition to cron are:

* cron is optional and Linux systems running systemd might not have it installed.
* The execution of `crontab -l` can tell us if cron is installed but not if the daemon is actually running.
* With systemd, each service is run in its own cgroup and its logs are tagged by the service inside journald. With cron, all scheduled tasks run in the cron daemon's cgroup and all the logs of the user-scheduled tasks are attributed to the system cron service. Concretely, a user that doesn’t have access to the system logs won’t have access to the logs of their own tasks scheduled by cron, whereas they will have access to the logs of their own tasks scheduled by a systemd timer. Although `cron` attempts to send email, that email may go unseen by the user because local mailboxes are not heavily used these days.

In order to schedule git maintenance, we need two unit template files:

* ~/.config/systemd/user/git-maintenance@.service to define the command to be started by systemd and
* ~/.config/systemd/user/git-maintenance@.timer to define the schedule at which the command should be run.

Those units are templates that are parameterized by the frequency. Based on those templates, 3 timers are started:

* git-maintenance@hourly.timer
* git-maintenance@daily.timer
* git-maintenance@weekly.timer

The command launched by those three timers is the same as with the other scheduling methods:

	/path/to/git for-each-repo --exec-path=/path/to --config=maintenance.repo maintenance run --schedule=%i

with the full path for git to ensure that the version of git launched for the scheduled maintenance is the same as the one used to run `maintenance start`.

The timer unit contains `Persistent=true` so that, if the computer is powered down when a maintenance task should run, the task will be run when the computer is powered back on.

Signed-off-by: Lénaïc Huard <lenaic@lhuard.fr>
Acked-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-04 20:55:00 +00:00
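A sketch of the two unit templates (illustrative only; 'git maintenance start' generates the real content and substitutes the full git path at that time):

	# ~/.config/systemd/user/git-maintenance@.service
	[Unit]
	Description=Optimize Git repositories data

	[Service]
	Type=oneshot
	ExecStart=/path/to/git --exec-path=/path/to for-each-repo --config=maintenance.repo maintenance run --schedule=%i

	# ~/.config/systemd/user/git-maintenance@.timer
	[Unit]
	Description=Optimize Git repositories data

	[Timer]
	OnCalendar=%i
	Persistent=true

	[Install]
	WantedBy=timers.target

The three instances would then be enabled with, for example, 'systemctl --user enable --now git-maintenance@daily.timer'.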
test_lazy_prereq SYSTEMD_ANALYZE '
systemd-analyze verify /lib/systemd/system/basic.target
'
test_systemd_analyze_verify () {
if test_have_prereq SYSTEMD_ANALYZE
then
systemd-analyze verify "$@"
fi
}
test_expect_success 'help text' '
test_expect_code 129 git maintenance -h 2>err &&
test_i18ngrep "usage: git maintenance <subcommand>" err &&
test_expect_code 128 git maintenance barf 2>err &&
test_i18ngrep "invalid subcommand: barf" err &&
test_expect_code 129 git maintenance 2>err &&
test_i18ngrep "usage: git maintenance" err
'
test_expect_success 'run [--auto|--quiet]' '
GIT_TRACE2_EVENT="$(pwd)/run-no-auto.txt" \
git maintenance run 2>/dev/null &&
GIT_TRACE2_EVENT="$(pwd)/run-auto.txt" \
git maintenance run --auto 2>/dev/null &&
GIT_TRACE2_EVENT="$(pwd)/run-no-quiet.txt" \
git maintenance run --no-quiet 2>/dev/null &&
test_subcommand git gc --quiet <run-no-auto.txt &&
test_subcommand ! git gc --auto --quiet <run-auto.txt &&
test_subcommand git gc --no-quiet <run-no-quiet.txt
'
test_expect_success 'maintenance.auto config option' '
GIT_TRACE2_EVENT="$(pwd)/default" git commit --quiet --allow-empty -m 1 &&
test_subcommand git maintenance run --auto --quiet <default &&
GIT_TRACE2_EVENT="$(pwd)/true" \
git -c maintenance.auto=true \
commit --quiet --allow-empty -m 2 &&
test_subcommand git maintenance run --auto --quiet <true &&
GIT_TRACE2_EVENT="$(pwd)/false" \
git -c maintenance.auto=false \
commit --quiet --allow-empty -m 3 &&
test_subcommand ! git maintenance run --auto --quiet <false
'
test_expect_success 'maintenance.<task>.enabled' '
git config maintenance.gc.enabled false &&
git config maintenance.commit-graph.enabled true &&
GIT_TRACE2_EVENT="$(pwd)/run-config.txt" git maintenance run 2>err &&
test_subcommand ! git gc --quiet <run-config.txt &&
test_subcommand git commit-graph write --split --reachable --no-progress <run-config.txt
'
test_expect_success 'run --task=<task>' '
GIT_TRACE2_EVENT="$(pwd)/run-commit-graph.txt" \
git maintenance run --task=commit-graph 2>/dev/null &&
GIT_TRACE2_EVENT="$(pwd)/run-gc.txt" \
git maintenance run --task=gc 2>/dev/null &&
GIT_TRACE2_EVENT="$(pwd)/run-commit-graph.txt" \
git maintenance run --task=commit-graph 2>/dev/null &&
GIT_TRACE2_EVENT="$(pwd)/run-both.txt" \
git maintenance run --task=commit-graph --task=gc 2>/dev/null &&
test_subcommand ! git gc --quiet <run-commit-graph.txt &&
test_subcommand git gc --quiet <run-gc.txt &&
test_subcommand git gc --quiet <run-both.txt &&
test_subcommand git commit-graph write --split --reachable --no-progress <run-commit-graph.txt &&
test_subcommand ! git commit-graph write --split --reachable --no-progress <run-gc.txt &&
test_subcommand git commit-graph write --split --reachable --no-progress <run-both.txt
'
test_expect_success 'core.commitGraph=false prevents write process' '
GIT_TRACE2_EVENT="$(pwd)/no-commit-graph.txt" \
git -c core.commitGraph=false maintenance run \
--task=commit-graph 2>/dev/null &&
test_subcommand ! git commit-graph write --split --reachable --no-progress \
<no-commit-graph.txt
'
maintenance: test commit-graph auto condition

The auto condition for the commit-graph maintenance task walks refs looking for commits that are not in the commit-graph file. This was added in 4ddc79b2 (maintenance: add auto condition for commit-graph task, 2020-09-17) but was left untested.

The initial goal of this change was to demonstrate the feature works properly by adding tests. However, there was an off-by-one error that caused the basic tests around maintenance.commit-graph.auto=1 to fail when they should have succeeded.

The subtlety is that if a ref tip is not in the commit-graph, then we were not adding that to the total count. In the test, we see that we have only added one commit since our last commit-graph write, so the auto condition would say there is nothing to do.

The fix is simple: add the check for the commit-graph position to see that the tip is not in the commit-graph file before starting our walk. Since this happens before adding to the DFS stack, we do not need to clear our (currently empty) commit list.

This does add some extra complexity for the test, because we also want to verify that the walk along the parents actually does some work. This means we need to add at least two commits in a row without writing the commit-graph. However, we also need to make sure no additional refs are pointing to the middle of this list or else the for_each_ref() in should_write_commit_graph() might visit these commits as tips instead of doing a DFS walk. Hence, the last two commits are added with "git commit" instead of "test_commit".

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-10-08 00:50:39 +00:00
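Put differently, the threshold semantics exercised by the test below are: a negative maintenance.commit-graph.auto always triggers the write, zero never does, and a positive value N requires at least N commits reachable from refs that are missing from the commit-graph file. For example (a sketch mirroring the test):

	# writes the commit-graph only if at least two reachable commits
	# are missing from the commit-graph file
	git -c maintenance.commit-graph.auto=2 \
		maintenance run --auto --task=commit-graph --quiet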
test_expect_success 'commit-graph auto condition' '
COMMAND="maintenance run --task=commit-graph --auto --quiet" &&
GIT_TRACE2_EVENT="$(pwd)/cg-no.txt" \
git -c maintenance.commit-graph.auto=1 $COMMAND &&
GIT_TRACE2_EVENT="$(pwd)/cg-negative-means-yes.txt" \
git -c maintenance.commit-graph.auto="-1" $COMMAND &&
test_commit first &&
GIT_TRACE2_EVENT="$(pwd)/cg-zero-means-no.txt" \
git -c maintenance.commit-graph.auto=0 $COMMAND &&
GIT_TRACE2_EVENT="$(pwd)/cg-one-satisfied.txt" \
git -c maintenance.commit-graph.auto=1 $COMMAND &&
git commit --allow-empty -m "second" &&
git commit --allow-empty -m "third" &&
GIT_TRACE2_EVENT="$(pwd)/cg-two-satisfied.txt" \
git -c maintenance.commit-graph.auto=2 $COMMAND &&
COMMIT_GRAPH_WRITE="git commit-graph write --split --reachable --no-progress" &&
test_subcommand ! $COMMIT_GRAPH_WRITE <cg-no.txt &&
test_subcommand $COMMIT_GRAPH_WRITE <cg-negative-means-yes.txt &&
test_subcommand ! $COMMIT_GRAPH_WRITE <cg-zero-means-no.txt &&
test_subcommand $COMMIT_GRAPH_WRITE <cg-one-satisfied.txt &&
test_subcommand $COMMIT_GRAPH_WRITE <cg-two-satisfied.txt
'
test_expect_success 'run --task=bogus' '
test_must_fail git maintenance run --task=bogus 2>err &&
test_i18ngrep "is not a valid task" err
'
test_expect_success 'run --task duplicate' '
test_must_fail git maintenance run --task=gc --task=gc 2>err &&
test_i18ngrep "cannot be selected multiple times" err
'
maintenance: add prefetch task

When working with very large repositories, an incremental 'git fetch' command can download a large amount of data. If there are many other users pushing to a common repo, then this data can rival the initial pack-file size of a 'git clone' of a medium-size repo.

Users may want to keep the data on their local repos as close as possible to the data on the remote repos by fetching periodically in the background. This can break up a large daily fetch into several smaller hourly fetches.

The task is called "prefetch" because it is work done in advance of a foreground fetch to make that 'git fetch' command much faster.

However, if we simply ran 'git fetch <remote>' in the background, then the user running a foreground 'git fetch <remote>' would lose some important feedback when a new branch appears or an existing branch updates. This is especially true if a remote branch is force-updated and this isn't noticed by the user because it occurred in the background. Further, the functionality of 'git push --force-with-lease' becomes suspect.

When running 'git fetch <remote> <options>' in the background, use the following options for careful updating:

1. --no-tags prevents getting a new tag when a user wants to see the new tags appear in their foreground fetches.

2. --refmap= removes the configured refspec which usually updates refs/remotes/<remote>/* with the refs advertised by the remote. While this looks confusing, this was documented and tested by b40a50264ac (fetch: document and test --refmap="", 2020-01-21), including this sentence in the documentation:

	Providing an empty `<refspec>` to the `--refmap` option causes Git to ignore the configured refspecs and rely entirely on the refspecs supplied as command-line arguments.

3. By adding a new refspec "+refs/heads/*:refs/prefetch/<remote>/*" we can ensure that we actually load the new values somewhere in our refspace while not updating refs/heads or refs/remotes. By storing these refs here, the commit-graph job will update the commit-graph with the commits from these hidden refs.

4. --prune will delete the refs/prefetch/<remote> refs that no longer appear on the remote.

5. --no-write-fetch-head prevents updating FETCH_HEAD.

We've been using this step as a critical background job in Scalar [1] (and VFS for Git). This solved a pain point that was showing up in user reports: fetching was a pain! Users do not like waiting to download the data that was created while they were away from their machines. After implementing background fetch, the foreground fetch commands sped up significantly because they mostly just update refs and download a small amount of new data. The effect is especially dramatic when paired with --no-show-forced-updates (through fetch.showForcedUpdates=false).

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/FetchStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 12:33:31 +00:00
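Taken together, the options above amount to a background fetch of roughly this shape (remote name illustrative; newer versions of Git fold the refmap/refspec handling into a single '--prefetch' fetch option, which is what the test below looks for):

	git fetch remote1 --prune --no-tags --no-write-fetch-head \
		--refmap= "+refs/heads/*:refs/prefetch/remote1/*"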
test_expect_success 'run --task=prefetch with no remotes' '
git maintenance run --task=prefetch 2>err &&
test_must_be_empty err
'
test_expect_success 'prefetch multiple remotes' '
git clone . clone1 &&
git clone . clone2 &&
git remote add remote1 "file://$(pwd)/clone1" &&
git remote add remote2 "file://$(pwd)/clone2" &&
git -C clone1 switch -c one &&
git -C clone2 switch -c two &&
test_commit -C clone1 one &&
test_commit -C clone2 two &&
GIT_TRACE2_EVENT="$(pwd)/run-prefetch.txt" git maintenance run --task=prefetch 2>/dev/null &&
fetchargs="--prefetch --prune --no-tags --no-write-fetch-head --recurse-submodules=no --quiet" &&
test_subcommand git fetch remote1 $fetchargs <run-prefetch.txt &&
test_subcommand git fetch remote2 $fetchargs <run-prefetch.txt &&
test_path_is_missing .git/refs/remotes &&
git log prefetch/remotes/remote1/one &&
git log prefetch/remotes/remote2/two &&
git fetch --all &&
test_cmp_rev refs/remotes/remote1/one refs/prefetch/remotes/remote1/one &&
test_cmp_rev refs/remotes/remote2/two refs/prefetch/remotes/remote2/two &&
maintenance: set log.excludeDecoration during prefetch

The 'prefetch' task fetches refs from all remotes and places them in the refs/prefetch/<remote>/ refspace. As this task is intended to run in the background, this allows users to keep their local data very close to the remote servers' data while not updating the users' understanding of the remote refs in refs/remotes/<remote>/.

However, this can clutter 'git log' decorations with copies of the refs with the full name 'refs/prefetch/<remote>/<branch>'. The log.excludeDecoration config option was added in a6be5e67 (log: add log.excludeDecoration config option, 2020-05-16) for exactly this purpose.

Ensure we set this only for users that would benefit from it by assigning it at the beginning of the prefetch task. Other alternatives would be during 'git maintenance register' or 'git maintenance start', but those might assign the config even when the prefetch task is disabled by existing config. Further, users could run 'git maintenance run --task=prefetch' using their own scripting or scheduling. This provides the best coverage to automatically update the config when valuable.

It is improbable, but possible, that users might want to run the prefetch task _and_ see these refs in their log decorations. This seems incredibly unlikely to me, but users can always opt in on a command-by-command basis using --decorate-refs=refs/prefetch/.

Test that this works in a few cases. In particular, ensure that our assignment of log.excludeDecoration=refs/prefetch/ is additive to other existing exclusions. Further, ensure we do not add multiple copies in multiple runs.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-19 12:52:03 +00:00
test_cmp_config refs/prefetch/ log.excludedecoration &&
git log --oneline --decorate --all >log &&
! grep "prefetch" log &&
test_when_finished git config --unset remote.remote1.skipFetchAll &&
git config remote.remote1.skipFetchAll true &&
GIT_TRACE2_EVENT="$(pwd)/skip-remote1.txt" git maintenance run --task=prefetch 2>/dev/null &&
test_subcommand ! git fetch remote1 $fetchargs <skip-remote1.txt &&
test_subcommand git fetch remote2 $fetchargs <skip-remote1.txt
'
test_expect_success 'prefetch and existing log.excludeDecoration values' '
git config --unset-all log.excludeDecoration &&
git config log.excludeDecoration refs/remotes/remote1/ &&
git maintenance run --task=prefetch &&
git config --get-all log.excludeDecoration >out &&
grep refs/remotes/remote1/ out &&
grep refs/prefetch/ out &&
git log --oneline --decorate --all >log &&
! grep "prefetch" log &&
! grep "remote1" log &&
grep "remote2" log &&
# a second run does not change the config
git maintenance run --task=prefetch &&
git log --oneline --decorate --all >log2 &&
test_cmp log log2
'
maintenance: add loose-objects task

One goal of background maintenance jobs is to allow a user to disable auto-gc (gc.auto=0) but keep their repository in a clean state. Without any cleanup, loose objects will clutter the object database and slow operations. In addition, the loose objects will take up extra space because they are not stored with deltas against similar objects.

Create a 'loose-objects' task for the 'git maintenance run' command. This helps clean up loose objects without disrupting concurrent Git commands using the following sequence of events:

1. Run 'git prune-packed' to delete any loose objects that exist in a pack-file. Concurrent commands will prefer the packed version of the object to the loose version. (Of course, there are exceptions for commands that specifically care about the location of an object. These are rare for a user to run on purpose, and we hope a user that has selected background maintenance will not be trying to do foreground maintenance.)

2. Run 'git pack-objects' on a batch of loose objects. These objects are grouped by scanning the loose object directories in lexicographic order until listing all loose objects -or- reaching 50,000 objects. This is more than enough if the loose objects are created only by a user doing normal development. We noticed users with _millions_ of loose objects because VFS for Git downloads blobs on-demand when a file read operation requires populating a virtual file.

This step is based on a similar step in Scalar [1] and VFS for Git.

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/LooseObjectsStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 12:33:32 +00:00
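A rough shell approximation of the two steps (a sketch only; the builtin batches object IDs internally and writes the resulting pack with a 'loose' prefix, which the test below relies on):

	git prune-packed --quiet
	find .git/objects/?? -type f |
	sed "s|.*objects/\(..\)/\(.*\)|\1\2|" |
	head -n 50000 |
	git pack-objects --quiet .git/objects/pack/loose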
test_expect_success 'loose-objects task' '
# Repack everything so we know the state of the object dir
git repack -adk &&
# Hack to stop maintenance from running during "git commit"
echo in use >.git/objects/maintenance.lock &&
# Assuming that "git commit" creates at least one loose object
test_commit create-loose-object &&
rm .git/objects/maintenance.lock &&
ls .git/objects >obj-dir-before &&
test_file_not_empty obj-dir-before &&
ls .git/objects/pack/*.pack >packs-before &&
test_line_count = 1 packs-before &&
# The first run creates a pack-file
# but does not delete loose objects.
git maintenance run --task=loose-objects &&
ls .git/objects >obj-dir-between &&
test_cmp obj-dir-before obj-dir-between &&
ls .git/objects/pack/*.pack >packs-between &&
test_line_count = 2 packs-between &&
ls .git/objects/pack/loose-*.pack >loose-packs &&
test_line_count = 1 loose-packs &&
# The second run deletes loose objects
# but does not create a pack-file.
git maintenance run --task=loose-objects &&
ls .git/objects >obj-dir-after &&
cat >expect <<-\EOF &&
info
pack
EOF
test_cmp expect obj-dir-after &&
ls .git/objects/pack/*.pack >packs-after &&
test_cmp packs-between packs-after
'
test_expect_success 'maintenance.loose-objects.auto' '
git repack -adk &&
GIT_TRACE2_EVENT="$(pwd)/trace-lo1.txt" \
git -c maintenance.loose-objects.auto=1 maintenance \
run --auto --task=loose-objects 2>/dev/null &&
test_subcommand ! git prune-packed --quiet <trace-lo1.txt &&
printf data-A | git hash-object -t blob --stdin -w &&
GIT_TRACE2_EVENT="$(pwd)/trace-loA" \
git -c maintenance.loose-objects.auto=2 \
maintenance run --auto --task=loose-objects 2>/dev/null &&
test_subcommand ! git prune-packed --quiet <trace-loA &&
printf data-B | git hash-object -t blob --stdin -w &&
GIT_TRACE2_EVENT="$(pwd)/trace-loB" \
git -c maintenance.loose-objects.auto=2 \
maintenance run --auto --task=loose-objects 2>/dev/null &&
test_subcommand git prune-packed --quiet <trace-loB &&
GIT_TRACE2_EVENT="$(pwd)/trace-loC" \
git -c maintenance.loose-objects.auto=2 \
maintenance run --auto --task=loose-objects 2>/dev/null &&
test_subcommand git prune-packed --quiet <trace-loC
'
maintenance: add incremental-repack task

The previous change cleaned up loose objects using the 'loose-objects' task that can be run safely in the background. Add a similar job that performs similar cleanups for pack-files.

One issue with running 'git repack' is that it is designed to repack all pack-files into a single pack-file. While this is the most space-efficient way to store object data, it is not time or memory efficient. This becomes extremely important if the repo is so large that a user struggles to store two copies of the pack on their disk.

Instead, perform an "incremental" repack by collecting a few small pack-files into a new pack-file. The multi-pack-index facilitates this process ever since 'git multi-pack-index expire' was added in 19575c7 (multi-pack-index: implement 'expire' subcommand, 2019-06-10) and 'git multi-pack-index repack' was added in ce1e4a1 (midx: implement midx_repack(), 2019-06-10).

The 'incremental-repack' task runs the following steps:

1. 'git multi-pack-index write' creates a multi-pack-index file if one did not exist, and otherwise will update the multi-pack-index with any new pack-files that appeared since the last write. This is particularly relevant with the background fetch job.

   When the multi-pack-index sees two copies of the same object, it stores the offset data into the newer pack-file. This means that some old pack-files could become "unreferenced", which I will use to mean "a pack-file that is in the pack-file list of the multi-pack-index but none of the objects in the multi-pack-index reference a location inside that pack-file."

2. 'git multi-pack-index expire' deletes any unreferenced pack-files and updates the multi-pack-index to drop those pack-files from the list. This is safe to do as concurrent Git processes will see the multi-pack-index and not open those packs when looking for object contents. (Similar to the 'loose-objects' job, there are some Git commands that open pack-files regardless of the multi-pack-index, but they are rarely used. Further, a user that self-selects to use background operations would likely refrain from using those commands.)

3. 'git multi-pack-index repack --batch-size=<size>' collects a set of pack-files that are listed in the multi-pack-index and creates a new pack-file containing the objects whose offsets are listed by the multi-pack-index to be in those pack-files. The set of pack-files is selected greedily by sorting the pack-files by modified time and adding a pack-file to the set if its "expected size" is smaller than the batch size, until the total expected size of the selected pack-files is at least the batch size. The "expected size" is calculated by taking the size of the pack-file, dividing it by the number of objects in the pack-file, and multiplying by the number of objects from the multi-pack-index with an offset in that pack-file. The expected size approximates how much data from that pack-file will contribute to the resulting pack-file size. The intention is that the resulting pack-file will be close in size to the provided batch size.

   The next run of the incremental-repack task will delete these repacked pack-files during the 'expire' step.

In this version, the batch size is set to "0" which ignores the size restrictions when selecting the pack-files. It instead selects all pack-files and repacks all packed objects into a single pack-file. This will be updated in the next change, but it requires doing some calculations that are better isolated to a separate change.

These steps are based on a similar background maintenance step in Scalar (and VFS for Git) [1]. This was incredibly effective for users of the Windows OS repository. After using the same VFS for Git repository for over a year, some users had _thousands_ of pack-files that combined to up to 250 GB of data. We noticed a few users were running into the open file descriptor limits (due in part to a bug in the multi-pack-index fixed by af96fe3 (midx: add packs to packed_git linked list, 2019-04-29)).

These pack-files were mostly small since they contained the commits and trees that were pushed to the origin in a given hour. The GVFS protocol includes a "prefetch" step that asks for pre-computed pack-files containing commits and trees by timestamp. These pack-files were grouped into "daily" pack-files once a day for up to 30 days. If a user did not request prefetch packs for over 30 days, then they would get the entire history of commits and trees in a new, large pack-file. This led to a large number of pack-files that had poor delta compression.

By running this pack-file maintenance step once per day, these repos with thousands of packs spanning 200+ GB dropped to dozens of pack-files spanning 30-50 GB. This was all done without removing objects from the system and using a constant batch size of two gigabytes. Once the work was done to reduce the pack-files to small sizes, the batch size of two gigabytes means that not every run triggers a repack operation, so the following run will not expire a pack-file. This has kept these repos in a "clean" state.

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/PackfileMaintenanceStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 12:33:36 +00:00
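The three steps above correspond roughly to the following commands (a sketch; the batch size of 0 is the value used by this version of the task):

	git multi-pack-index write
	git multi-pack-index expire
	git multi-pack-index repack --batch-size=0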
test_expect_success 'incremental-repack task' '
packDir=.git/objects/pack &&
for i in $(test_seq 1 5)
do
test_commit $i || return 1
done &&
# Create three disjoint pack-files with size BIG, small, small.
echo HEAD~2 | git pack-objects --revs $packDir/test-1 &&
test_tick &&
git pack-objects --revs $packDir/test-2 <<-\EOF &&
HEAD~1
^HEAD~2
EOF
test_tick &&
git pack-objects --revs $packDir/test-3 <<-\EOF &&
HEAD
^HEAD~1
EOF
# Delete refs that have not been repacked in these packs.
git for-each-ref --format="delete %(refname)" \
refs/prefetch refs/tags refs/remotes >refs &&
git update-ref --stdin <refs &&
# Replace the object directory with this pack layout.
rm -f $packDir/pack-* &&
rm -f $packDir/loose-* &&
ls $packDir/*.pack >packs-before &&
test_line_count = 3 packs-before &&
# make sure we do not have any broken refs that were
# missed in the deletion above
git for-each-ref &&
# the job repacks the two into a new pack, but does not
# delete the old ones.
git maintenance run --task=incremental-repack &&
ls $packDir/*.pack >packs-between &&
test_line_count = 4 packs-between &&
# the job deletes the two old packs, and does not write
# a new one because the batch size is not high enough to
# pack the largest pack-file.
git maintenance run --task=incremental-repack &&
ls .git/objects/pack/*.pack >packs-after &&
test_line_count = 2 packs-after
'
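# The batch size for 'git multi-pack-index repack' is computed dynamically:
# roughly one byte larger than the second-largest pack-file, capped at
# 2147483647 bytes (2^31 - 1, just under 2 GiB). The EXPENSIVE test below
# builds packs large enough to hit that cap and checks that the capped
# value is what gets passed to the subcommand.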
test_expect_success EXPENSIVE 'incremental-repack 2g limit' '
test_config core.compression 0 &&
for i in $(test_seq 1 5)
do
test-tool genrandom foo$i $((512 * 1024 * 1024 + 1)) >>big ||
return 1
done &&
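# Each "big" file is 5 * (512 MiB + 1 byte), roughly 2.5 GiB of random
# data, and compression is disabled above, so each of the two packs
# created in this test exceeds the 2 GiB batch-size cap.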
git add big &&
git commit -qm "Add big file (1)" &&
# ensure any possible loose objects are in a pack-file
git maintenance run --task=loose-objects &&
rm big &&
for i in $(test_seq 6 10)
do
test-tool genrandom foo$i $((512 * 1024 * 1024 + 1)) >>big ||
return 1
done &&
git add big &&
git commit -qm "Add big file (2)" &&
# ensure any possible loose objects are in a pack-file
git maintenance run --task=loose-objects &&
# Now run the incremental-repack task and check the batch-size
GIT_TRACE2_EVENT="$(pwd)/run-2g.txt" git maintenance run \
--task=incremental-repack 2>/dev/null &&
test_subcommand git multi-pack-index repack \
--no-progress --batch-size=2147483647 <run-2g.txt
'
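# A sketch of the --auto condition exercised by the helper below:
# maintenance.incremental-repack.auto is compared against the number of
# pack-files not yet covered by the multi-pack-index, and the task only
# runs once at least that many uncovered packs exist. The helper checks
# thresholds of 1 (no uncovered packs, so no run) and 2 (runs only after
# a second uncovered pack appears).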
run_incremental_repack_and_verify () {
test_commit A &&
git repack -adk &&
git multi-pack-index write &&
GIT_TRACE2_EVENT="$(pwd)/midx-init.txt" git \
-c maintenance.incremental-repack.auto=1 \
maintenance run --auto --task=incremental-repack 2>/dev/null &&
test_subcommand ! git multi-pack-index write --no-progress <midx-init.txt &&
test_commit B &&
git pack-objects --revs .git/objects/pack/pack <<-\EOF &&
HEAD
^HEAD~1
EOF
GIT_TRACE2_EVENT="$(pwd)/trace-A" git \
-c maintenance.incremental-repack.auto=2 \
maintenance run --auto --task=incremental-repack 2>/dev/null &&
test_subcommand ! git multi-pack-index write --no-progress <trace-A &&
test_commit C &&
git pack-objects --revs .git/objects/pack/pack <<-\EOF &&
HEAD
^HEAD~1
EOF
GIT_TRACE2_EVENT="$(pwd)/trace-B" git \
-c maintenance.incremental-repack.auto=2 \
maintenance run --auto --task=incremental-repack 2>/dev/null &&
test_subcommand git multi-pack-index write --no-progress <trace-B
}
test_expect_success 'maintenance.incremental-repack.auto' '
rm -rf incremental-repack-true &&
git init incremental-repack-true &&
(
cd incremental-repack-true &&
git config core.multiPackIndex true &&
run_incremental_repack_and_verify
)
'
test_expect_success 'maintenance.incremental-repack.auto (when config is unset)' '
rm -rf incremental-repack-unset &&
git init incremental-repack-unset &&
(
cd incremental-repack-unset &&
test_unconfig core.multiPackIndex &&
run_incremental_repack_and_verify
)
'
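# The pack-refs task collects loose refs into the packed-refs file by
# running 'git pack-refs --all --prune', which keeps ref lookups fast in
# repositories with many tags or remote-tracking branches.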
test_expect_success 'pack-refs task' '
for n in $(test_seq 1 5)
do
git branch -f to-pack/$n HEAD || return 1
done &&
GIT_TRACE2_EVENT="$(pwd)/pack-refs.txt" \
git maintenance run --task=pack-refs &&
test_subcommand git pack-refs --all --prune <pack-refs.txt
'
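# --schedule=<frequency> runs every enabled task whose configured
# maintenance.<task>.schedule is at least as frequent as <frequency>,
# where hourly > daily > weekly. A daily run therefore also includes the
# hourly tasks, and a weekly run includes both, as the inheritance test
# further below demonstrates.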
test_expect_success '--auto and --schedule incompatible' '
test_must_fail git maintenance run --auto --schedule=daily 2>err &&
test_i18ngrep "at most one" err
'
test_expect_success 'invalid --schedule value' '
test_must_fail git maintenance run --schedule=annually 2>err &&
test_i18ngrep "unrecognized --schedule" err
'
test_expect_success '--schedule inheritance weekly -> daily -> hourly' '
git config maintenance.loose-objects.enabled true &&
git config maintenance.loose-objects.schedule hourly &&
git config maintenance.commit-graph.enabled true &&
git config maintenance.commit-graph.schedule daily &&
git config maintenance.incremental-repack.enabled true &&
git config maintenance.incremental-repack.schedule weekly &&
GIT_TRACE2_EVENT="$(pwd)/hourly.txt" \
git maintenance run --schedule=hourly 2>/dev/null &&
test_subcommand git prune-packed --quiet <hourly.txt &&
test_subcommand ! git commit-graph write --split --reachable \
--no-progress <hourly.txt &&
test_subcommand ! git multi-pack-index write --no-progress <hourly.txt &&
GIT_TRACE2_EVENT="$(pwd)/daily.txt" \
git maintenance run --schedule=daily 2>/dev/null &&
test_subcommand git prune-packed --quiet <daily.txt &&
test_subcommand git commit-graph write --split --reachable \
--no-progress <daily.txt &&
test_subcommand ! git multi-pack-index write --no-progress <daily.txt &&
GIT_TRACE2_EVENT="$(pwd)/weekly.txt" \
git maintenance run --schedule=weekly 2>/dev/null &&
test_subcommand git prune-packed --quiet <weekly.txt &&
test_subcommand git commit-graph write --split --reachable \
--no-progress <weekly.txt &&
test_subcommand git multi-pack-index write --no-progress <weekly.txt
'
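# maintenance.strategy=incremental provides default schedules for the
# background tasks (e.g. commit-graph hourly; loose-objects and
# incremental-repack daily; pack-refs weekly) without scheduling the full
# 'gc' task. Any explicit maintenance.<task>.schedule or .enabled setting
# overrides the strategy default, which the second half of the test below
# verifies.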
test_expect_success 'maintenance.strategy inheritance' '
for task in commit-graph loose-objects incremental-repack
do
git config --unset maintenance.$task.schedule || return 1
done &&
test_when_finished git config --unset maintenance.strategy &&
git config maintenance.strategy incremental &&
GIT_TRACE2_EVENT="$(pwd)/incremental-hourly.txt" \
git maintenance run --schedule=hourly --quiet &&
GIT_TRACE2_EVENT="$(pwd)/incremental-daily.txt" \
git maintenance run --schedule=daily --quiet &&
GIT_TRACE2_EVENT="$(pwd)/incremental-weekly.txt" \
git maintenance run --schedule=weekly --quiet &&
test_subcommand git commit-graph write --split --reachable \
--no-progress <incremental-hourly.txt &&
test_subcommand ! git prune-packed --quiet <incremental-hourly.txt &&
test_subcommand ! git multi-pack-index write --no-progress \
<incremental-hourly.txt &&
test_subcommand ! git pack-refs --all --prune \
<incremental-hourly.txt &&
test_subcommand git commit-graph write --split --reachable \
--no-progress <incremental-daily.txt &&
test_subcommand git prune-packed --quiet <incremental-daily.txt &&
test_subcommand git multi-pack-index write --no-progress \
<incremental-daily.txt &&
test_subcommand ! git pack-refs --all --prune \
<incremental-daily.txt &&
test_subcommand git commit-graph write --split --reachable \
--no-progress <incremental-weekly.txt &&
test_subcommand git prune-packed --quiet <incremental-weekly.txt &&
test_subcommand git multi-pack-index write --no-progress \
<incremental-weekly.txt &&
test_subcommand git pack-refs --all --prune \
<incremental-weekly.txt &&
# Modify defaults
git config maintenance.commit-graph.schedule daily &&
git config maintenance.loose-objects.schedule hourly &&
git config maintenance.incremental-repack.enabled false &&
GIT_TRACE2_EVENT="$(pwd)/modified-hourly.txt" \
git maintenance run --schedule=hourly --quiet &&
GIT_TRACE2_EVENT="$(pwd)/modified-daily.txt" \
git maintenance run --schedule=daily --quiet &&
test_subcommand ! git commit-graph write --split --reachable \
--no-progress <modified-hourly.txt &&
test_subcommand git prune-packed --quiet <modified-hourly.txt &&
test_subcommand ! git multi-pack-index write --no-progress \
<modified-hourly.txt &&
test_subcommand git commit-graph write --split --reachable \
--no-progress <modified-daily.txt &&
test_subcommand git prune-packed --quiet <modified-daily.txt &&
test_subcommand ! git multi-pack-index write --no-progress \
<modified-daily.txt
'
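# 'git maintenance register' appends the repository path to the
# multi-valued global config key maintenance.repo (the list consumed by
# 'git for-each-repo' in scheduled jobs) and sets maintenance.auto=false
# in the repository; 'unregister' removes that maintenance.repo entry
# again.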
test_expect_success 'register and unregister' '
test_when_finished git config --global --unset-all maintenance.repo &&
git config --global --add maintenance.repo /existing1 &&
git config --global --add maintenance.repo /existing2 &&
git config --global --get-all maintenance.repo >before &&
git maintenance register &&
test_cmp_config false maintenance.auto &&
git config --global --get-all maintenance.repo >between &&
cp before expect &&
pwd >>expect &&
test_cmp expect between &&
git maintenance unregister &&
git config --global --get-all maintenance.repo >actual &&
test_cmp before actual
'
test_expect_success !MINGW 'register and unregister with regex metacharacters' '
META="a+b*c" &&
git init "$META" &&
git -C "$META" maintenance register &&
git config --get-all --show-origin maintenance.repo &&
git config --get-all --global --fixed-value \
maintenance.repo "$(pwd)/$META" &&
git -C "$META" maintenance unregister &&
test_must_fail git config --get-all --global --fixed-value \
maintenance.repo "$(pwd)/$META"
'
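# GIT_TEST_MAINT_SCHEDULER mocks the platform schedulers. It holds a
# comma-separated list of "<scheduler>:<command>" pairs; when a
# maintenance command would invoke that scheduler, it runs <command>
# instead. For example:
#
#	GIT_TEST_MAINT_SCHEDULER="crontab:test-tool crontab cron.txt"
#
# makes 'git maintenance start --scheduler=crontab' write its schedule
# through 'test-tool crontab cron.txt' so the tests can inspect cron.txt.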
test_expect_success 'start --scheduler=<scheduler>' '
test_expect_code 129 git maintenance start --scheduler=foo 2>err &&
test_i18ngrep "unrecognized --scheduler argument" err &&
test_expect_code 129 git maintenance start --no-scheduler 2>err &&
test_i18ngrep "unknown option" err &&
test_expect_code 128 \
env GIT_TEST_MAINT_SCHEDULER="launchctl:true,schtasks:true" \
git maintenance start --scheduler=crontab 2>err &&
test_i18ngrep "fatal: crontab scheduler is not available" err
'
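# 'git maintenance start --scheduler=crontab' registers the repository and
# installs cron entries of roughly this shape (paths abbreviated):
#
#	0 1-23 * * *	<git> for-each-repo --config=maintenance.repo maintenance run --schedule=hourly
#	0 0 * * 1-6	<git> for-each-repo --config=maintenance.repo maintenance run --schedule=daily
#	0 0 * * 0	<git> for-each-repo --config=maintenance.repo maintenance run --schedule=weekly
#
# which is what the greps below look for.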
test_expect_success 'start from empty cron table' '
GIT_TEST_MAINT_SCHEDULER="crontab:test-tool crontab cron.txt" git maintenance start --scheduler=crontab &&
# start registers the repo
git config --get --global --fixed-value maintenance.repo "$(pwd)" &&
grep "for-each-repo --config=maintenance.repo maintenance run --schedule=daily" cron.txt &&
grep "for-each-repo --config=maintenance.repo maintenance run --schedule=hourly" cron.txt &&
grep "for-each-repo --config=maintenance.repo maintenance run --schedule=weekly" cron.txt
'
test_expect_success 'stop from existing schedule' '
GIT_TEST_MAINT_SCHEDULER="crontab:test-tool crontab cron.txt" git maintenance stop &&
# stop does not unregister the repo
git config --get --global --fixed-value maintenance.repo "$(pwd)" &&
# Operation is idempotent
GIT_TEST_MAINT_SCHEDULER="crontab:test-tool crontab cron.txt" git maintenance stop &&
test_must_be_empty cron.txt
'
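
# For the crontab scheduler, 'git maintenance start' appends a block of
# entries to the user crontab and brackets it with the "GIT MAINTENANCE
# SCHEDULE" markers verified below.  As a rough sketch (not asserted
# verbatim by these tests), the generated block looks like:
#
#   # BEGIN GIT MAINTENANCE SCHEDULE
#   0 1-23 * * * $cmd maintenance run --schedule=hourly
#   0 0 * * 1-6 $cmd maintenance run --schedule=daily
#   0 0 * * 0 $cmd maintenance run --schedule=weekly
#   # END GIT MAINTENANCE SCHEDULE
#
# where $cmd expands to "<exec-path>/git --exec-path=<exec-path>
# for-each-repo --config=maintenance.repo", using the path of the git
# binary that ran 'git maintenance start'.
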
test_expect_success 'start preserves existing schedule' '
echo "Important information!" >cron.txt &&
GIT_TEST_MAINT_SCHEDULER="crontab:test-tool crontab cron.txt" git maintenance start --scheduler=crontab &&
grep "Important information!" cron.txt
'
test_expect_success 'magic markers are correct' '
grep "GIT MAINTENANCE SCHEDULE" cron.txt >actual &&
cat >expect <<-\EOF &&
# BEGIN GIT MAINTENANCE SCHEDULE
# END GIT MAINTENANCE SCHEDULE
EOF
test_cmp actual expect
'
test_expect_success 'stop preserves surrounding schedule' '
echo "Crucial information!" >>cron.txt &&
GIT_TEST_MAINT_SCHEDULER="crontab:test-tool crontab cron.txt" git maintenance stop &&
grep "Important information!" cron.txt &&
grep "Crucial information!" cron.txt
'
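
# Only the marked region belongs to Git: user entries written before and
# after 'git maintenance start' survive a later 'stop'.  Just before the
# 'stop' above, cron.txt therefore looks roughly like:
#
#   Important information!
#   # BEGIN GIT MAINTENANCE SCHEDULE
#   ...generated maintenance entries...
#   # END GIT MAINTENANCE SCHEDULE
#   Crucial information!
#
# and 'stop' removes only the BEGIN/END block.
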
test_expect_success 'start and stop macOS maintenance' '
# ensure $HOME can be compared against hook arguments on all platforms
pfx=$(cd "$HOME" && pwd) &&
write_script print-args <<-\EOF &&
echo $* | sed "s:gui/[0-9][0-9]*:gui/[UID]:" >>args
EOF
rm -f args &&
GIT_TEST_MAINT_SCHEDULER=launchctl:./print-args git maintenance start --scheduler=launchctl &&
# start registers the repo
git config --get --global --fixed-value maintenance.repo "$(pwd)" &&
ls "$HOME/Library/LaunchAgents" >actual &&
cat >expect <<-\EOF &&
org.git-scm.git.daily.plist
org.git-scm.git.hourly.plist
org.git-scm.git.weekly.plist
EOF
test_cmp expect actual &&
rm -f expect &&
for frequency in hourly daily weekly
do
PLIST="$pfx/Library/LaunchAgents/org.git-scm.git.$frequency.plist" &&
test_xmllint "$PLIST" &&
grep schedule=$frequency "$PLIST" &&
echo "bootout gui/[UID] $PLIST" >>expect &&
echo "bootstrap gui/[UID] $PLIST" >>expect || return 1
done &&
test_cmp expect args &&
rm -f args &&
GIT_TEST_MAINT_SCHEDULER=launchctl:./print-args git maintenance stop &&
# stop does not unregister the repo
git config --get --global --fixed-value maintenance.repo "$(pwd)" &&
printf "bootout gui/[UID] $pfx/Library/LaunchAgents/org.git-scm.git.%s.plist\n" \
hourly daily weekly >expect &&
test_cmp expect args &&
ls "$HOME/Library/LaunchAgents" >actual &&
test_line_count = 0 actual
'
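
# With the launchctl scheduler, 'start' writes one property list per
# frequency into ~/Library/LaunchAgents and loads it with a
# 'launchctl bootout' + 'launchctl bootstrap' pair (the argument pairs
# checked above).  A trimmed sketch of the weekly plist, with the git
# install path abbreviated, looks like:
#
#   <?xml version="1.0" encoding="UTF-8"?>
#   <plist version="1.0"><dict>
#     <key>Label</key><string>org.git-scm.git.weekly</string>
#     <key>ProgramArguments</key>
#     <array>
#       <string>/path/to/git</string>
#       <string>--exec-path=/path/to</string>
#       <string>for-each-repo</string>
#       <string>--config=maintenance.repo</string>
#       <string>maintenance</string>
#       <string>run</string>
#       <string>--schedule=weekly</string>
#     </array>
#     <key>StartCalendarInterval</key>
#     <array><dict>
#       <key>Day</key><integer>0</integer>
#       <key>Hour</key><integer>0</integer>
#       <key>Minute</key><integer>0</integer>
#     </dict></array>
#   </dict></plist>
#
# The hourly and daily variants differ only in their calendar intervals.
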
test_expect_success 'use launchctl list to prevent extra work' '
# ensure we are registered
GIT_TEST_MAINT_SCHEDULER=launchctl:./print-args git maintenance start --scheduler=launchctl &&
# do it again on a fresh args file
rm -f args &&
GIT_TEST_MAINT_SCHEDULER=launchctl:./print-args git maintenance start --scheduler=launchctl &&
ls "$HOME/Library/LaunchAgents" >actual &&
cat >expect <<-\EOF &&
list org.git-scm.git.hourly
list org.git-scm.git.daily
list org.git-scm.git.weekly
EOF
test_cmp expect args
'
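
# Re-running 'start' when the agents are already loaded is cheap: git
# probes each agent with 'launchctl list org.git-scm.git.<frequency>' and,
# as the expected args above show, skips the bootout/bootstrap round trip
# when nothing needs to change.
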
test_expect_success 'start and stop Windows maintenance' '
write_script print-args <<-\EOF &&
echo $* >>args
while test $# -gt 0
do
case "$1" in
/xml) shift; xmlfile=$1; break ;;
*) shift ;;
esac
done
test -z "$xmlfile" || cp "$xmlfile" "$xmlfile.xml"
EOF
rm -f args &&
GIT_TEST_MAINT_SCHEDULER="schtasks:./print-args" git maintenance start --scheduler=schtasks &&
# start registers the repo
git config --get --global --fixed-value maintenance.repo "$(pwd)" &&
for frequency in hourly daily weekly
do
grep "/create /tn Git Maintenance ($frequency) /f /xml" args &&
file=$(ls .git/schedule_${frequency}*.xml) &&
test_xmllint "$file" || return 1
done &&
rm -f args &&
GIT_TEST_MAINT_SCHEDULER="schtasks:./print-args" git maintenance stop &&
# stop does not unregister the repo
git config --get --global --fixed-value maintenance.repo "$(pwd)" &&
printf "/delete /tn Git Maintenance (%s) /f\n" \
hourly daily weekly >expect &&
test_cmp expect args
'
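
# The schtasks integration registers one Task Scheduler task per
# frequency.  Roughly:
#
#   schtasks /create /tn "Git Maintenance (hourly)" /f /xml <tempfile>
#   schtasks /delete /tn "Git Maintenance (hourly)" /f
#
# for 'start' and 'stop' respectively, where <tempfile> is a generated
# task-definition XML (surfaced here as .git/schedule_<frequency>*.xml by
# the mocked scheduler) that is checked with xmllint when available.
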
test_expect_success 'start and stop Linux/systemd maintenance' '
write_script print-args <<-\EOF &&
printf "%s\n" "$*" >>args
EOF
XDG_CONFIG_HOME="$PWD" &&
export XDG_CONFIG_HOME &&
rm -f args &&
GIT_TEST_MAINT_SCHEDULER="systemctl:./print-args" git maintenance start --scheduler=systemd-timer &&
# start registers the repo
git config --get --global --fixed-value maintenance.repo "$(pwd)" &&
test_systemd_analyze_verify "systemd/user/git-maintenance@.service" &&
printf -- "--user enable --now git-maintenance@%s.timer\n" hourly daily weekly >expect &&
test_cmp expect args &&
rm -f args &&
GIT_TEST_MAINT_SCHEDULER="systemctl:./print-args" git maintenance stop &&
# stop does not unregister the repo
git config --get --global --fixed-value maintenance.repo "$(pwd)" &&
test_path_is_missing "systemd/user/git-maintenance@.timer" &&
test_path_is_missing "systemd/user/git-maintenance@.service" &&
printf -- "--user disable --now git-maintenance@%s.timer\n" hourly daily weekly >expect &&
test_cmp expect args
'
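
# The systemd scheduler drops two user unit templates under
# $XDG_CONFIG_HOME/systemd/user/ and enables one timer instance per
# frequency ('systemctl --user enable --now git-maintenance@<freq>.timer').
# A simplified sketch of the templates (the real units carry more
# settings, and paths are abbreviated):
#
#   # git-maintenance@.service
#   [Service]
#   ExecStart="/path/to/git" --exec-path="/path/to" for-each-repo
#       --config=maintenance.repo maintenance run --schedule=%i
#
#   # git-maintenance@.timer
#   [Timer]
#   OnCalendar=%i
#   Persistent=true
#
# 'stop' disables the timers and deletes both unit files, as checked above.
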
test_expect_success 'start and stop when several schedulers are available' '
write_script print-args <<-\EOF &&
printf "%s\n" "$*" | sed "s:gui/[0-9][0-9]*:gui/[UID]:; s:\(schtasks /create .* /xml\).*:\1:;" >>args
EOF
rm -f args &&
GIT_TEST_MAINT_SCHEDULER="systemctl:./print-args systemctl,launchctl:./print-args launchctl,schtasks:./print-args schtasks" git maintenance start --scheduler=systemd-timer &&
printf "launchctl bootout gui/[UID] $pfx/Library/LaunchAgents/org.git-scm.git.%s.plist\n" \
hourly daily weekly >expect &&
printf "schtasks /delete /tn Git Maintenance (%s) /f\n" \
hourly daily weekly >>expect &&
printf -- "systemctl --user enable --now git-maintenance@%s.timer\n" hourly daily weekly >>expect &&
test_cmp expect args &&
rm -f args &&
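# Switching to launchctl should disable the systemd timers, delete the
# schtasks entries, and re-register one launch agent per frequency
# (bootout followed by bootstrap).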
GIT_TEST_MAINT_SCHEDULER="systemctl:./print-args systemctl,launchctl:./print-args launchctl,schtasks:./print-args schtasks" git maintenance start --scheduler=launchctl &&
printf -- "systemctl --user disable --now git-maintenance@%s.timer\n" hourly daily weekly >expect &&
printf "schtasks /delete /tn Git Maintenance (%s) /f\n" \
hourly daily weekly >>expect &&
for frequency in hourly daily weekly
do
PLIST="$pfx/Library/LaunchAgents/org.git-scm.git.$frequency.plist" &&
echo "launchctl bootout gui/[UID] $PLIST" >>expect &&
echo "launchctl bootstrap gui/[UID] $PLIST" >>expect || return 1
done &&
test_cmp expect args &&
rm -f args &&
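# Switching to schtasks should disable the systemd timers, remove the
# launchd agents, and create one scheduled task per frequency.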
GIT_TEST_MAINT_SCHEDULER="systemctl:./print-args systemctl,launchctl:./print-args launchctl,schtasks:./print-args schtasks" git maintenance start --scheduler=schtasks &&
printf -- "systemctl --user disable --now git-maintenance@%s.timer\n" hourly daily weekly >expect &&
printf "launchctl bootout gui/[UID] $pfx/Library/LaunchAgents/org.git-scm.git.%s.plist\n" \
hourly daily weekly >>expect &&
printf "schtasks /create /tn Git Maintenance (%s) /f /xml\n" \
hourly daily weekly >>expect &&
test_cmp expect args &&
rm -f args &&
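# "git maintenance stop" should remove the scheduled jobs from all three
# schedulers.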
GIT_TEST_MAINT_SCHEDULER="systemctl:./print-args systemctl,launchctl:./print-args launchctl,schtasks:./print-args schtasks" git maintenance stop &&
printf -- "systemctl --user disable --now git-maintenance@%s.timer\n" hourly daily weekly >expect &&
printf "launchctl bootout gui/[UID] $pfx/Library/LaunchAgents/org.git-scm.git.%s.plist\n" \
hourly daily weekly >>expect &&
printf "schtasks /delete /tn Git Maintenance (%s) /f\n" \
hourly daily weekly >>expect &&
test_cmp expect args
'
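
# "git maintenance register" must not overwrite a maintenance.strategy that is
# already configured; it only falls back to "incremental" when none is set.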
test_expect_success 'register preserves existing strategy' '
git config maintenance.strategy none &&
git maintenance register &&
test_config maintenance.strategy none &&
git config --unset maintenance.strategy &&
git maintenance register &&
test_config maintenance.strategy incremental
'
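
# Every maintenance subcommand needs a repository to operate on.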
test_expect_success 'fails when running outside of a repository' '
nongit test_must_fail git maintenance run &&
nongit test_must_fail git maintenance stop &&
nongit test_must_fail git maintenance start &&
nongit test_must_fail git maintenance register &&
nongit test_must_fail git maintenance unregister
'
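
# Bare repositories can be registered and unregistered; registration records
# the absolute repository path in the global maintenance.repo setting.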
test_expect_success 'register and unregister bare repo' '
test_when_finished "git config --global --unset-all maintenance.repo || :" &&
test_might_fail git config --global --unset-all maintenance.repo &&
git init --bare barerepo &&
(
cd barerepo &&
git maintenance register &&
git config --get --global --fixed-value maintenance.repo "$(pwd)" &&
git maintenance unregister &&
test_must_fail git config --global --get-all maintenance.repo
)
'
test_done