Remove non-ASCII whitespaces

JMARyA 2024-01-17 09:44:04 +01:00
parent 598a10bc28
commit 5a6d6c4d13
Signed by: jmarya
GPG key ID: 901B2ADDF27C2263
117 changed files with 1928 additions and 1928 deletions


@@ -24,7 +24,7 @@ aria2c [<OPTIONS>] [<URI>|<MAGNET>|<TORRENT_FILE>|<METALINK_FILE>]
| `-V, --check-integrity [true/false]` | Check file integrity by validating piece hashes or a hash of the entire file. This option only has an effect for [BitTorrent](../../../internet/BitTorrent.md) and Metalink downloads with checksums, or [HTTP](../../../internet/HTTP.md)(S)/[FTP](../../../internet/FTP.md) downloads with the --checksum option. If piece hashes are provided, this option can detect damaged portions of a file and re-download them. If a hash of the entire file is provided, the hash check is only done once the file has already been downloaded (this is determined by file length). If the hash check fails, the file is re-downloaded from scratch. If both piece hashes and a hash of the entire file are provided, only the piece hashes are used. Default: false |
| `-c, --continue [true/false]` | Continue downloading a partially downloaded file. Use this option to resume a download started by a web browser or another program which downloads files sequentially from the beginning. Currently this option is only applicable to [HTTP](../../../internet/HTTP.md)(S)/[FTP](../../../internet/FTP.md) downloads. |
| `--checksum=<TYPE>=<DIGEST>` | Set checksum. TYPE is hash type. The supported hash type is listed in Hash Algorithms in aria2c -v. DIGEST is hex digest. For example, setting sha-1 digest looks like this: `sha-1=0192ba11326fe2298c8cb4de616f4d4140213838` This option applies only to [HTTP](../../../internet/HTTP.md)(S)/[FTP](../../../internet/FTP.md) downloads. |
| `-x, --max-connection-per-server=<NUM>` | The maximum number of connections to one server for each download. Default: **1** |
| `-k, --min-split-size=<SIZE>` | aria2 does not split byte ranges smaller than 2\*SIZE. For example, consider downloading a 20MiB file. If SIZE is 10M, aria2 can split the file into 2 ranges (0-10MiB) and (10MiB-20MiB) and download them using 2 sources (if --split >= 2, of course). If SIZE is 15M, since 2\*15M > 20MiB, aria2 does not split the file and downloads it using 1 source. You can append K or M (1K = 1024, 1M = 1024K). Possible Values: 1M-1024M Default: 20M |
| `-o, --out=<FILE>` | The file name of the downloaded file. It is always relative to the directory given in `--dir` option. |
| `-s, --split=<N>` | Download a file using N connections. If more than N URIs are given, first N URIs are used and remaining URIs are used for backup. If less than N URIs are given, those URIs are used more than once so that N connections total are made simultaneously. The number of connections to the same host is restricted by the `--max-connection-per-server` option. See also the `--min-split-size` option. Default: 5 |
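A minimal usage sketch combining several of these options (the URL and the output file name are placeholders):

```shell
# Resume a partial download, allow 4 connections to the server,
# split it into 4 pieces of at least 10M each, and write to a fixed file name
aria2c -c -x 4 -s 4 -k 10M -o image.iso https://example.com/image.iso
```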


@@ -4,7 +4,7 @@ wiki: https://en.wikipedia.org/wiki/Netcat
---
# netcat
The `nc` (or `netcat`) utility is used for just about anything under the sun involving [TCP](../../../internet/TCP.md), [UDP](../../../internet/UDP.md), or UNIX-domain sockets. It can open [TCP](../../../internet/TCP.md) connections, send [UDP](../../../internet/UDP.md) packets, listen on arbitrary [TCP](../../../internet/TCP.md) and [UDP](../../../internet/UDP.md) ports, do port scanning, and deal with both IPv4 and IPv6.
Common uses include:
- simple [TCP](../../../internet/TCP.md) proxies
@@ -19,32 +19,32 @@ Common uses include:
| `-6` | Use IPv6 addresses only |
| `-b` | Allow broadcast |
| `-l` | Listen for an incoming connection rather than initiating a connection to a remote host |
| `-N` | Shut down the network socket after EOF on the input. Some servers require this to finish their work |
| `-p <source_port>` | Specify the source port `nc` should use, subject to privilege restrictions and availability |
## Examples
### Client/Server Model
On one console, start `nc` listening on a specific port for a connection. For example:
```shell
nc -l 1234
```
`nc` is now listening on port 1234 for a connection. On a second console (or a second machine), connect to the machine and port being listened on:
```shell
nc -N 127.0.0.1 1234
```
There should now be a connection between the ports. Anything typed at the second console will be concatenated to the first, and vice-versa. After the connection has been set up, `nc` does not really care which side is being used as a server and which side is being used as a client. The connection may be terminated using an `EOF` (`^D`), as the `-N` flag was given.
### Data Transfer
The example in the previous section can be expanded to build a basic data transfer model. Any information input into one end of the connection will be output to the other end, and input and output can be easily captured in order to emulate file transfer.
Start by using `nc` to listen on a specific port, with output captured into a file:
```shell
nc -l 1234 > filename.out
```
Using a second machine, connect to the listening `nc` process, feeding it the file which is to be transferred:
```shell
nc -N host.example.com 1234 < filename.in
```
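To check that the file arrived intact, the checksums on both ends can be compared afterwards; a minimal sketch, assuming `sha256sum` (GNU coreutils) is available on both machines:

```shell
# On the sending machine
sha256sum filename.in
# On the receiving machine; the two hashes should match
sha256sum filename.out
```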


@@ -17,7 +17,7 @@ Wget has been designed for robustness over slow or unstable network connections;
| Option | Description |
| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-b, --background` | Go to background immediately after startup. If no output file is specified via the -o option, output is redirected to wget-log. |
| `-e, --execute command` | Execute command as if it were a part of .wgetrc. A command thus invoked will be executed after the commands in .wgetrc, thus taking precedence over them. If you need to specify more than one wgetrc command, use multiple instances of -e. |
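A small sketch using both options together (the URL is a placeholder, and `robots=off` is just one example of a wgetrc command):

```shell
# Download in the background (messages go to wget-log) and
# override a .wgetrc setting for this run only
wget -b -e robots=off https://example.com/large-file.iso
```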
### Logging Options
| Option | Description |
@@ -25,48 +25,48 @@ Wget has been designed for robustness over slow or unstable network connections;
| `-o, --output-file=logfile` | Log all messages to logfile. The messages are normally reported to standard error. |
| `-a, --append-output=logfile` | Append to logfile. This is the same as -o, only it appends to logfile instead of overwriting the old log file. |
| `-q, --quiet` | Turn off Wget's output. |
| `-i, --input-file=file` | Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.) If this function is used, no URLs need be present on the command line. If there are URLs both on the command line and in an input file, those on the command line will be the first ones to be retrieved. If --force-html is not specified, then file should consist of a series of URLs, one per line. However, if you specify --force-html, the document will be regarded as [html](../../../internet/HTML.md). In that case you may have problems with relative links, which you can solve either by adding "\<base href="url">" to the documents or by specifying --base=url on the command line. If the file is an external one, the document will be automatically treated as [html](../../../internet/HTML.md) if the Content-Type matches text/html. Furthermore, the file's location will be implicitly used as base href if none was specified. |
| `-B, --base=URL` | Resolves relative links using URL as the point of reference, when reading links from an [HTML](../../../internet/HTML.md) file specified via the -i/--input-file option (together with --force-html, or when the input file was fetched remotely from a server describing it as [HTML](../../../internet/HTML.md)). This is equivalent to the presence of a "BASE" tag in the [HTML](../../../internet/HTML.md) input file, with URL as the value for the "href" attribute. |
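For example, a list of URLs can be fetched quietly while keeping a full log (the file names are placeholders):

```shell
# Fetch every URL listed in urls.txt (one per line) and
# write all messages to download.log instead of standard error
wget -q -i urls.txt -o download.log
```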
### Download Options
| Option | Description |
| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-t, --tries=number` | Set number of tries to number. Specify 0 or inf for infinite retrying. The default is to retry 20 times, with the exception of fatal errors like "connection refused" or "not found" (404), which are not retried. |
| `-O, --output-document=file` | The documents will not be written to the appropriate files, but all will be concatenated together and written to file. If - is used as file, documents will be printed to standard output, disabling link conversion. (Use ./- to print to a file literally named -.) |
| `--backups=backups` | Before (over)writing a file, back up an existing file by adding a .1 suffix (\_1 on VMS) to the file name. Such backup files are rotated to .2, .3, and so on, up to `backups` (and lost beyond that) |
| `-c, --continue` | Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. |
| `--show-progress` | Force wget to display the progress bar in any verbosity. |
| `-T, --timeout=seconds` | Set the network timeout to `seconds` seconds. |
| `--limit-rate=amount` | Limit the download speed to amount bytes per second. Amount may be expressed in bytes, kilobytes with the k suffix, or megabytes with the m suffix. For example, --limit-rate=20k will limit the retrieval rate to 20KB/s. This is useful when, for whatever reason, you don't want Wget to consume the entire available bandwidth. |
| `-w, --wait=seconds` | Wait the specified number of seconds between the retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Instead of in seconds, the time can be specified in minutes using the "m" suffix, in hours using "h" suffix, or in days using "d" suffix. |
| `--waitretry=seconds` | If you don't want Wget to wait between every retrieval, but only between retries of failed downloads, you can use this option. Wget will use linear backoff, waiting 1 second after the first failure on a given file, then waiting 2 seconds after the second failure on that file, up to the maximum number of seconds you specify. |
| `--random-wait` | Some web sites may perform log analysis to identify retrieval programs such as Wget by looking for statistically significant similarities in the time between requests. This option causes the time between requests to vary between 0.5 and 1.5 * wait seconds, where wait was specified using the --wait option, in order to mask Wget's presence from such analysis. |
| `--user=user, --password=password` | Specify the username and password for both [FTP](../../../internet/FTP.md) and [HTTP](../../../internet/HTTP.md) file retrieval. |
| `--ask-password` | Prompt for a password for each connection established. |
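A sketch combining a few of these options (the URL is a placeholder):

```shell
# Resume a partial download, retry up to 10 times,
# throttle to 500 KB/s and wait 2 seconds between retrievals
wget -c -t 10 --limit-rate=500k -w 2 https://example.com/debian.iso
```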
### Directory Options
| Option | Description |
| ------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-nH, --no-host-directories` | Disable generation of host-prefixed directories. By default, invoking Wget with -r http://fly.srk.fer.hr/ will create a structure of directories beginning with fly.srk.fer.hr/. This option disables such behavior. |
| `--cut-dirs=number` | Ignore number directory components. This is useful for getting a fine-grained control over the directory where recursive retrieval will be saved. |
| `-P, --directory-prefix=prefix` | Set directory prefix to prefix. The directory prefix is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is . (the current directory). |
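These options are mostly useful together with recursive retrieval; a sketch with placeholder paths:

```shell
# Save everything under ./mirror without a host-named directory,
# dropping the first two remote path components (pub/archive)
wget -r -nH --cut-dirs=2 -P mirror https://example.com/pub/archive/
```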
### HTTP Options
| Option | Description |
| ---------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--no-cookies` | Disable the use of cookies. |
| `--load-cookies file` | Load cookies from file before the first [HTTP](../../../internet/HTTP.md) retrieval. file is a textual file in the format originally used by Netscape's cookies.txt file. |
| `--save-cookies file` | Save cookies to file before exiting. This will not save cookies that have expired or that have no expiry time (so-called "session cookies"), but also see --keep-session-cookies. |
| `--keep-session-cookies` | When specified, causes --save-cookies to also save session cookies. Session cookies are normally not saved because they are meant to be kept in memory and forgotten when you exit the browser. Saving them is useful on sites that require you to log in or to visit the home page before you can access some pages. With this option, multiple Wget runs are considered a single browser session as far as the site is concerned. |
| `--header=header-line` | Send header-line along with the rest of the headers in each [HTTP](../../../internet/HTTP.md) request. The supplied header is sent as-is, which means it must contain name and value separated by colon, and must not contain newlines. |
| `--proxy-user=user, --proxy-password=password` | Specify the username user and password password for authentication on a proxy server. Wget will encode them using the "basic" authentication scheme. |
| `--referer=url` | Include 'Referer: url' header in [HTTP](../../../internet/HTTP.md) request. Useful for retrieving documents with server-side processing that assume they are always being retrieved by interactive web browsers and only come out properly when Referer is set to one of the pages that point to them. |
| `-U, --user-agent=agent-string` | Identify as `agent-string` to the [HTTP](../../../internet/HTTP.md) server. |
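A hedged example (the cookie file, header value and URL are placeholders):

```shell
# Reuse previously exported cookies, send an extra header
# and identify as a generic browser user agent
wget --load-cookies cookies.txt \
     --header="Accept-Language: en-US" \
     -U "Mozilla/5.0" \
     https://example.com/members/report.pdf
```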
### HTTPS Options
| Option | Description |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `--no-check-certificate` | Don't check the server certificate against the available certificate authorities. Also don't require the URL host name to match the common name presented by the certificate. |
| `--ca-certificate=file` | Use file as the file with the bundle of certificate authorities ("CA") to verify the peers. The certificates must be in PEM format. |
| `--ca-directory=directory` | Specifies directory containing CA certificates in PEM format. |
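For instance, downloading from a server whose certificate is signed by an internal CA (the bundle path and URL are placeholders):

```shell
# Verify the server against a custom CA bundle in PEM format
wget --ca-certificate=/etc/ssl/internal-ca.pem https://intranet.example.com/backup.tar.gz
```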
@@ -75,5 +75,5 @@ Wget has been designed for robustness over slow or unstable network connections;
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `-r, --recursive` | Turn on recursive retrieving. The default maximum depth is 5. |
| `-l, --level=depth` | Set the maximum number of subdirectories that Wget will recurse into to depth. |
| `-k, --convert-links` | After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-[HTML](../../../internet/HTML.md) content, etc. |
| `-p, --page-requisites` | This option causes Wget to download all the files that are necessary to properly display a given [HTML](../../../internet/HTML.md) page. This includes such things as inlined images, sounds, and referenced stylesheets. |
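A typical invocation for making a locally browsable copy of a site section (the URL is a placeholder):

```shell
# Recurse two levels deep, rewrite links for offline viewing
# and fetch the images and stylesheets needed to render each page
wget -r -l 2 -k -p https://example.com/docs/
```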