add log

commit b015a3e8e2 (parent e7a85e5e00)
12 changed files with 116 additions and 63 deletions
@@ -15,25 +15,25 @@ aria2c [<OPTIONS>] [<URI>|<MAGNET>|<TORRENT_FILE>|<METALINK_FILE>]
### Options
| Option | Description |
| -------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-d, --dir=<DIR>` | The directory to store the downloaded file. |
| `-i, --input-file=<FILE>` | Downloads the URIs listed in **FILE**. |
| `-l, --log=<LOG>` | The file name of the [log file](../../../dev/Log.md). If `-` is specified, the log is written to stdout. If an empty string ("") is specified, or this option is omitted, no log is written to disk at all. |
| `-j, --max-concurrent-downloads=<N>` | Set the maximum number of parallel downloads for every queue item. |
| `-V, --check-integrity [true/false]` | Check file integrity by validating piece hashes or a hash of the entire file. This option has effect only in [BitTorrent](../../../internet/BitTorrent.md), Metalink downloads with checksums or [HTTP](../../../internet/HTTP.md)(S)/[FTP](../../../internet/FTP.md) downloads with the --checksum option. If piece hashes are provided, this option can detect damaged portions of a file and re-download them. If a hash of the entire file is provided, the hash check is only done when the file has already been downloaded. This is determined by file length. If the hash check fails, the file is re-downloaded from scratch. If both piece hashes and a hash of the entire file are provided, only piece hashes are used. Default: false |
| `-c, --continue [true/false]` | Continue downloading a partially downloaded file. Use this option to resume a download started by a web browser or another program which downloads files sequentially from the beginning. Currently this option is only applicable to [HTTP](../../../internet/HTTP.md)(S)/[FTP](../../../internet/FTP.md) downloads. |
| `--checksum=<TYPE>=<DIGEST>` | Set checksum. TYPE is hash type. The supported hash type is listed in Hash Algorithms in aria2c -v. DIGEST is hex digest. For example, setting sha-1 digest looks like this: `sha-1=0192ba11326fe2298c8cb4de616f4d4140213838` This option applies only to [HTTP](../../../internet/HTTP.md)(S)/[FTP](../../../internet/FTP.md) downloads. |
| `-x, --max-connection-per-server=<NUM>` | The maximum number of connections to one server for each download. Default: **1** |
| `-k, --min-split-size=<SIZE>` | aria2 does not split byte ranges smaller than 2\*SIZE. For example, consider downloading a 20MiB file. If SIZE is 10M, aria2 can split the file into two ranges (0-10MiB) and (10MiB-20MiB) and download it using 2 sources (if --split >= 2, of course). If SIZE is 15M, since 2\*15M > 20MiB, aria2 does not split the file and downloads it using 1 source. You can append K or M (1K = 1024, 1M = 1024K). Possible values: 1M-1024M. Default: 20M |
| `-o, --out=<FILE>` | The file name of the downloaded file. It is always relative to the directory given in `--dir` option. |
| `-s, --split=<N>` | Download a file using N connections. If more than N URIs are given, first N URIs are used and remaining URIs are used for backup. If less than N URIs are given, those URIs are used more than once so that N connections total are made simultaneously. The number of connections to the same host is restricted by the `--max-connection-per-server` option. See also the `--min-split-size` option. Default: 5 |
| `-t, --timeout=<SEC>` | Set timeout in seconds. Default: 60 |
| `--check-certificate [true/false]` | Verify the peer using certificates specified in `--ca-certificate` option. Default: true |
| `--http-user=<USER>, --http-passwd=<PASSWD>` | Credentials for [HTTP](../../../internet/HTTP.md) Auth |
| `--header=<HEADER>` | Append HEADER to [HTTP](../../../internet/HTTP.md) request header. You can use this option repeatedly to specify more than one header:<br>`aria2c --header="X-A: b78" --header="X-B: 9J1" "http://host/file"` |
| `--load-cookies=<FILE>` | Load cookies from FILE using the Firefox3 format (SQLite3), the Chromium/Google Chrome format (SQLite3), or the Mozilla/[Firefox](../../network/browsers/Firefox.md) (1.x/2.x)/Netscape format. |
| `--save-cookies=<FILE>` | Save cookies to FILE in the Mozilla/[Firefox](../../network/browsers/Firefox.md) (1.x/2.x)/Netscape format. If FILE already exists, it is overwritten. Session cookies are also saved, and their expiry values are treated as 0. |
| `-U, --user-agent=<USER_AGENT>` | Set user agent for [HTTP](../../../internet/HTTP.md)(S) downloads. Default: `aria2/$VERSION`, $VERSION is replaced by package version. |
| `-S, --show-files [true/false]` | Print file listing of ".torrent", ".meta4" and ".metalink" file and exit. In case of ".torrent" file, additional information (infohash, piece length, etc) is also printed. |
| `--select-file=<INDEX>...` | (Torrent) Set file to download by specifying its index. You can find the file index using the `--show-files` option. Multiple indexes can be specified by using `,`, for example: `3,6`. You can also use `-` to specify a range: `1-5`. `,` and `-` can be used together: `1-5,8,9`. |
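
A couple of hedged invocations tying these options together (all URLs and file names below are placeholders, not from the original docs):

```sh
# Resume a partial download with 4 connections per server, 8 segments,
# a target directory, and a log file
aria2c -c -x 4 -s 8 -d ~/Downloads -o archive.tar.gz -l aria2.log \
  "https://example.com/archive.tar.gz"

# Batch mode: fetch every URI listed in uris.txt, 3 items in parallel
aria2c -i uris.txt -j 3 -d ~/Downloads
```
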
@@ -20,19 +20,19 @@ Wget has been designed for robustness over slow or unstable network connections;
| `-e, --execute command` | Execute command as if it were a part of .wgetrc. A command thus invoked will be executed after the commands in .wgetrc, taking precedence over them. If you need to specify more than one wgetrc command, use multiple instances of -e. |
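
For example, a common use of `-e` is to override a .wgetrc setting for a single run (the URL below is a placeholder):

```sh
# Ignore the robots.txt setting for this recursive fetch only
wget -e robots=off -r "https://example.com/docs/"
```
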
### Logging Options
| Option | Description |
| ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-o, --output-file=logfile` | Log all messages to logfile. The messages are normally reported to standard error. |
| `-a, --append-output=logfile` | Append to logfile. This is the same as -o, only it appends to logfile instead of overwriting the old [log file](../../../dev/Log.md). |
| `-q, --quiet` | Turn off Wget's output. |
| `-i, --input-file=file` | Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.) If this function is used, no URLs need be present on the command line. If there are URLs both on the command line and in an input file, those on the command line will be the first ones to be retrieved. If --force-html is not specified, then file should consist of a series of URLs, one per line. However, if you specify --force-html, the document will be regarded as [HTML](../../../internet/HTML.md). In that case you may have problems with relative links, which you can solve either by adding "\<base href="url">" to the documents or by specifying --base=url on the command line. If the file is an external one, the document will be automatically treated as [HTML](../../../internet/HTML.md) if the Content-Type matches text/html. Furthermore, the file's location will be implicitly used as base href if none was specified. |
| `-B, --base=URL` | Resolves relative links using URL as the point of reference, when reading links from an [HTML](../../../internet/HTML.md) file specified via the -i/--input-file option (together with --force-html, or when the input file was fetched remotely from a server describing it as [HTML](../../../internet/HTML.md)). This is equivalent to the presence of a "BASE" tag in the [HTML](../../../internet/HTML.md) input file, with URL as the value for the "href" attribute. |
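
A short sketch of the logging and input options above (file names are placeholders):

```sh
# Write all messages to wget.log and read the URL list from urls.txt
wget -o wget.log -i urls.txt

# On a later run, append to the same log instead of overwriting it
wget -a wget.log -i urls.txt
```
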
### Download Options
| Option | Description |
| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-t, --tries=number` | Set number of tries to number. Specify 0 or inf for infinite retrying. The default is to retry 20 times, with the exception of fatal errors like "connection refused" or "not found" (404), which are not retried. |
| `-O, --output-document=file` | The documents will not be written to the appropriate files, but all will be concatenated together and written to file. If - is used as file, documents will be printed to standard output, disabling link conversion. (Use ./- to print to a file literally named -.) |
| `--backups=backups` | Before (over)writing a file, back up an existing file by adding a .1 suffix (\_1 on VMS) to the file name. Such backup files are rotated to .2, .3, and so on, up to `backups` (and lost beyond that) |
| `-c, --continue` | Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. |
| `--show-progress` | Force wget to display the progress bar in any verbosity. |
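
A hedged example combining several of these download options (URL and file name are placeholders):

```sh
# Resume a partial download, retry up to 5 times, and force the progress bar
wget -c -t 5 --show-progress -O ubuntu.iso "https://example.com/ubuntu.iso"
```
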
@@ -41,7 +41,7 @@ Wget has been designed for robustness over slow or unstable network connections;
| `-w, --wait=seconds` | Wait the specified number of seconds between the retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Instead of in seconds, the time can be specified in minutes using the "m" suffix, in hours using "h" suffix, or in days using "d" suffix. |
| `--waitretry=seconds` | If you don't want Wget to wait between every retrieval, but only between retries of failed downloads, you can use this option. Wget will use linear backoff, waiting 1 second after the first failure on a given file, then waiting 2 seconds after the second failure on that file, up to the maximum number of seconds you specify. |
| `--random-wait` | Some web sites may perform log analysis to identify retrieval programs such as Wget by looking for statistically significant similarities in the time between requests. This option causes the time between requests to vary between 0.5 and 1.5 * wait seconds, where wait was specified using the --wait option, in order to mask Wget's presence from such analysis. |
| `--user=user, --password=password` | Specify the username and password for both [FTP](../../../internet/FTP.md) and [HTTP](../../../internet/HTTP.md) file retrieval. |
| `--ask-password` | Prompt for a password for each connection established. |
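
For instance, to be gentle with a server while authenticating (user name and URL list are placeholders):

```sh
# ~2 s between retrievals, randomized to 1-3 s; prompt for the password
# instead of putting it on the command line
wget --wait=2 --random-wait --user=alice --ask-password -i urls.txt
```
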
### Directory Options
@@ -55,13 +55,13 @@ Wget has been designed for robustness over slow or unstable network connections;
| Option | Description |
| ---------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--no-cookies` | Disable the use of cookies. |
| `--load-cookies file` | Load cookies from file before the first [HTTP](../../../internet/HTTP.md) retrieval. file is a textual file in the format originally used by Netscape's cookies.txt file. |
| `--save-cookies file` | Save cookies to file before exiting. This will not save cookies that have expired or that have no expiry time (so-called "session cookies"), but also see --keep-session-cookies. |
| `--keep-session-cookies` | When specified, causes --save-cookies to also save session cookies. Session cookies are normally not saved because they are meant to be kept in memory and forgotten when you exit the browser. Saving them is useful on sites that require you to log in or to visit the home page before you can access some pages. With this option, multiple Wget runs are considered a single browser session as far as the site is concerned. |
| `--header=header-line` | Send header-line along with the rest of the headers in each [HTTP](../../../internet/HTTP.md) request. The supplied header is sent as-is, which means it must contain name and value separated by colon, and must not contain newlines. |
| `--proxy-user=user, --proxy-password=password` | Specify the username user and password password for authentication on a proxy server. Wget will encode them using the "basic" authentication scheme. |
| `--referer=url` | Include 'Referer: url' header in [HTTP](../../../internet/HTTP.md) request. Useful for retrieving documents with server-side processing that assume they are always being retrieved by interactive web browsers and only come out properly when Referer is set to one of the pages that point to them. |
| `-U, --user-agent=agent-string` | Identify as `agent-string` to the [HTTP](../../../internet/HTTP.md) server. |
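
A hedged session sketch; `--post-data` is wget's HTTP POST option and is not listed in the excerpt above, and all names and URLs are placeholders:

```sh
# Log in once, keeping the session cookies that would normally be discarded
wget --save-cookies cookies.txt --keep-session-cookies \
     --post-data="user=alice&password=hunter2" "https://example.com/login"

# Replay the saved session for later requests, with an extra header
wget --load-cookies cookies.txt --header="Accept-Language: en" \
     "https://example.com/members/report.pdf"
```
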
### HTTPS Options
| Option | Description |
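
Only the table header survives in this hunk. As a hedged placeholder, two HTTPS-related wget flags that would typically appear here are `--https-only` and `--no-check-certificate` (host name is a placeholder):

```sh
# Follow HTTPS links only; skip certificate validation for a self-signed host
wget --https-only --no-check-certificate "https://self-signed.example.com/file"
```
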
@@ -6,9 +6,9 @@ repo: https://github.com/bensadeh/tailspin
# tailspin
A log file highlighter

tailspin works by reading through a [log file](../../dev/Log.md) line by line, running a series of regexes against each line. The regexes recognize patterns like dates, numbers, severity keywords and more.

tailspin does not make any assumptions about the format or position of the items it wants to highlight. For this reason, it requires no configuration or setup and will work predictably regardless of the format the [log file](../../dev/Log.md) is in.
![Screenshot][Screenshot]
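
A minimal usage sketch, assuming the binary is installed under its default name `tspin` and that piped input is supported as described in the project README:

```sh
# Open a log file in tailspin's highlighted pager view
tspin application.log

# Highlight a live stream from another command
journalctl -f | tspin
```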