update

This commit is contained in:
parent 28933b6af8
commit ee55a0b016

12 changed files with 885 additions and 1 deletion
@@ -1,6 +1,6 @@
 ---
 obj: meta/collection
-rev: 2025-01-09
+rev: 2025-01-30
 ---
 
 # Applications
@@ -105,6 +105,7 @@ rev: 2025-01-09
 - [SnapDrop](./network/SnapDrop.md)
 - [OnionShare](./network/OnionShare.md)
 - [qBittorrent](./network/qBittorrent.md)
+- [bitmagnet](./web/bitmagnet.md)
 
 ## Utilities
 - [Bottles](./utilities/Bottles.md)
@@ -239,6 +240,14 @@ rev: 2025-01-09
 - [pass](./cli/pass.md)
 - [ocrs](./cli/ocrs.md)
 - [stew](./cli/stew.md)
+- [names](./cli/names.md)
+- [qrtool](./cli/qrtool.md)
+- [tagctl](./cli/tagctl.md)
+- [unionfarm](./cli/unionfarm.md)
+- [xt](./cli/xt.md)
+- [refold](./cli/refold.md)
+- [rexturl](./cli/rexturl.md)
+- [mhost](./cli/mhost.md)
 
 ## System
 - [Core Utils](./cli/system/Core%20Utils.md)
@@ -254,6 +263,7 @@ rev: 2025-01-09
 - [sbctl](../linux/sbctl.md)
 - [systemd-cryptenroll](../linux/systemd/systemd-cryptenroll.md)
 - [bubblewrap](./utilities/bubblewrap.md)
+- [retry-cli](./utilities/retry-cli.md)
 
 ## Development
 - [act](./development/act.md)
@@ -268,6 +278,7 @@ rev: 2025-01-09
 - [Podman](../tools/Podman.md)
 - [serie](./cli/serie.md)
 - [usql](./cli/usql.md)
+- [kondo](./cli/kondo.md)
 
 ## Media
 - [yt-dlp](./media/yt-dlp.md)

29 technology/applications/cli/kondo.md Normal file
@@ -0,0 +1,29 @@
---
obj: application
repo: https://github.com/tbillington/kondo
rev: 2025-01-28
---

# Kondo 🧹
Cleans `node_modules`, `target`, `build`, and friends from your projects.

Excellent if:
- 💾 You want to back up your code but don't want to include GBs of dependencies
- 🧑‍🎨 You try out lots of projects but hate how much space they occupy
- ⚡️ You like keeping your disks lean and zippy

## Usage
Kondo recursively cleans project directories.

Supported project types: Cargo, Node, Unity, SBT, Haskell Stack, Maven, Unreal Engine, Jupyter Notebook, Python, CMake, Composer, Pub, Elixir, Swift, Gradle, and .NET projects.

Usage: `kondo [OPTIONS] [DIRS]...`

| Option | Description |
| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-I, --ignored-dirs <IGNORED_DIRS>` | Directories to ignore. Will also prevent recursive traversal within |
| `-q, --quiet...` | Quiet mode. Won't output to the terminal. `-qq` prevents all output |
| `-a, --all` | Clean all found projects without confirmation |
| `-L, --follow-symlinks` | Follow symbolic links |
| `-s, --same-filesystem` | Restrict directory traversal to the root filesystem |
| `-o, --older <OLDER>` | Only directories with a file last modified n units of time ago will be looked at. Ex: 20d. Units are m: minutes, h: hours, d: days, w: weeks, M: months and y: years |
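
As a rough plain-shell sketch of what Kondo automates, here is the equivalent for Node and Cargo projects only (the directory names are assumptions, not Kondo's full detection logic):

```shell
# List build/dependency directories that a cleaner like Kondo would target.
# Dry run: print candidate directories instead of deleting them.
find . -type d \( -name node_modules -o -name target \) -prune -print

# To actually reclaim the space (destructive!):
# find . -type d \( -name node_modules -o -name target \) -prune -exec rm -rf {} +
```

Kondo adds interactive confirmation, per-project-type detection and disk-usage reporting on top of this idea.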

122 technology/applications/cli/mhost.md Normal file
@@ -0,0 +1,122 @@
---
obj: application
repo: https://github.com/lukaspustina/mhost
website: https://mhost.pustina.de
rev: 2025-01-30
---

# mhost
A modern take on the classic host DNS lookup utility, including an easy-to-use and very fast Rust lookup library.

## Use Cases
### Just look up an IP address
```shell
$ mhost l github.com
```

### Just look up an IP address, using even more than just your local name servers
```shell
$ mhost server-lists public-dns -o servers.txt
$ mhost --limit 6000 --max-concurrent-servers 1000 --timeout 1 -f servers.txt l www.github.com
```

The first command downloads a list of publicly available name servers maintained by the Public DNS community. Usually only a subset of these are reachable, but it is still a large set of active name servers.

The second command uses the name server list from before and queries all of them concurrently. These settings are very aggressive and highly stress your internet connection; mhost's default settings are much more cautious.

### Just look up an IP address, using UDP, TCP, DoT, and DoH
```shell
$ mhost -s 1.1.1.1 -s tcp:1.1.1.1 -s tls:1.1.1.1:853,tls_auth_name=cloudflare-dns.com -s https:1.1.1.1:443,tls_auth_name=cloudflare-dns.com,name=Cloudflare -p l github.com
```

As mentioned, mhost supports DNS queries over UDP, TCP, DNS over TLS (DoT), as well as DNS over HTTPS (DoH). In the above example, mhost uses all four protocols to query Cloudflare's name servers.

This command also shows the syntax for name server specification, which in general is `protocol:<host name | ip address>:port,tls_auth_name=hostname,name=human-readable-name`.
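
Breaking the multi-protocol example above apart, each of the following lines is one name server spec in that syntax (taken from the example; a bare address defaults to UDP, and the port and options appear to be optional):

```
1.1.1.1
tcp:1.1.1.1
tls:1.1.1.1:853,tls_auth_name=cloudflare-dns.com
https:1.1.1.1:443,tls_auth_name=cloudflare-dns.com,name=Cloudflare
```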
### Discover a domain
Sometimes you want to know which host names and subdomains a domain has. mhost offers a simple command to help you find these. Please mind that mhost only uses DNS-specific discovery methods. If you want even deeper discoveries using Google, Shodan etc., there are other tools available.

```shell
$ mhost -p d github.com -s
```

This command uses the predefined name servers to discover the GitHub domain. The `-s` reduces all discovered names to real subdomains of github.com.

You can go one more step and explore the autonomous systems GitHub uses. In order to discover those, you can use the following commands:

```shell
$ mhost -p l --all -w github.com
$ mhost -p l --all 140.82.121.0/24
```

### Check your name server configuration
```shell
$ mhost -p c github.com
```
## Usage
mhost has three main commands: `lookup`, `discover`, and `check`. `lookup` looks up arbitrary DNS records of a domain name. `discover` tries various methods to discover host names and subdomains of a domain. `check` uses lints to check if all records of a domain name adhere to the DNS RFCs.

### General Options

| Option | Description |
| ------------------------------------------ | -------------------------------------------------------------------------------------------------- |
| `--use-system-resolv-opt` | Uses options set in `/etc/resolv.conf` |
| `--no-system-nameservers` | Ignores nameservers from `/etc/resolv.conf` |
| `-S, --no-system-lookups` | Ignores system nameservers for lookups |
| `--resolv-conf <FILE>` | Uses alternative resolv.conf file |
| `--ndots <NUMBER>` | Sets number of dots to qualify domain name as FQDN [default: 1] |
| `--search-domain <DOMAIN>` | Sets the search domain to append if HOSTNAME has less than `ndots` dots |
| `--system-nameserver <IP ADDR>...` | Adds system nameserver for system lookups; only IP addresses allowed |
| `-s, --nameserver <HOSTNAME / IP ADDR>...` | Adds nameserver for lookups |
| `-p, --predefined` | Adds predefined nameservers for lookups |
| `--predefined-filter <PROTOCOL>` | Filters predefined nameservers by protocol [default: udp] [possible values: udp, tcp, https, tls] |
| `--list-predefined` | Lists all predefined nameservers |
| `-f, --nameservers-from-file <FILE>` | Adds nameservers from file |
| `--limit <NUMBER>` | Sets max. number of nameservers to query [default: 100] |
| `--max-concurrent-servers <NUMBER>` | Sets max. concurrent nameservers [default: 10] |
| `--max-concurrent-requests <NUMBER>` | Sets max. concurrent requests per nameserver [default: 5] |
| `--retries <NUMBER>` | Sets number of retries if first lookup to nameserver fails [default: 0] |
| `--timeout <TIMEOUT>` | Sets timeout in seconds for responses [default: 5] |
| `-m, --resolvers-mode <MODE>` | Sets resolvers lookup mode [default: multi] [possible values: multi, uni] |
| `--wait-multiple-responses` | Waits until timeout for additional responses from nameservers |
| `--no-abort-on-error` | Does not ignore errors from nameservers |
| `--no-abort-on-timeout` | Does not ignore timeouts from nameservers |
| `--no-aborts` | Does not ignore errors and timeouts from nameservers |
| `-o, --output <FORMAT>` | Sets the output format for result presentation [default: summary] [possible values: json, summary] |
| `--output-options <OPTIONS>` | Sets output options |
| `--show-errors` | Shows error counts |
| `-q, --quiet` | Does not print anything but results |
| `--no-color` | Disables colorful output |
| `--ascii` | Uses only ASCII compatible characters for output |

### Lookup Options

| Option | Description |
| -------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--all` | Enables lookups for all record types |
| `-s`, `--service` | Parses ARG as service spec and sets record type to SRV |
| `-w`, `--whois` | Retrieves Whois information about A, AAAA, and PTR records |
| `-h`, `--help` | Prints help information |
| `-t`, `--record-type <RECORD TYPE>...` | Sets record type to lookup; ignored in case of IP address lookup [default: A,AAAA,CNAME,MX] [possible values: A, AAAA, ANAME, ANY, CNAME, MX, NULL, NS, PTR, SOA, SRV, TXT] |

### Discover Options

| Option | Description |
| ----------------------------------- | ------------------------------------------------------------------------------------------ |
| `-p`, `--show-partial-results` | Shows results after each lookup step |
| `-w`, `--wordlist-from-file <FILE>` | Uses wordlist from file |
| `--rnd-names-number <NUMBER>` | Sets number of random domain names to generate for wildcard resolution check [default: 3] |
| `--rnd-names-len <LEN>` | Sets length of random domain names to generate for wildcard resolution check [default: 32] |
| `-s`, `--subdomains-only` | Shows subdomains only, omitting all other discovered names |

### Check Options

| Option | Description |
| ----------------------------- | ------------------------------------- |
| `--show-partial-results` | Shows results after each check step |
| `--show-intermediate-lookups` | Shows all lookups made by all checks |
| `--no-cnames` | Does not run CNAME lints |
| `--no-soa` | Does not run SOA check |
| `--no-spf` | Does not run SPF check |
17 technology/applications/cli/names.md Normal file
@@ -0,0 +1,17 @@
---
obj: application
repo: https://github.com/fnichol/names
rev: 2025-01-28
---

# names
Random name generator for Rust

## Usage

```
> names
selfish-change
```

Usage: `names [-n, --number] <AMOUNT>`
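
The adjective-noun scheme is easy to approximate in plain shell; the word lists below are stand-ins for illustration, not the crate's actual dictionaries:

```shell
# Pick one word from each (toy) list and join them with a hyphen,
# mimicking names' adjective-noun output such as "selfish-change".
adj=$(printf '%s\n' selfish quiet rapid ancient | shuf -n 1)
noun=$(printf '%s\n' change river stone signal | shuf -n 1)
echo "$adj-$noun"
```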

31 technology/applications/cli/qrtool.md Normal file
@@ -0,0 +1,31 @@
---
obj: application
repo: https://github.com/sorairolake/qrtool
rev: 2025-01-30
---

# qrtool
qrtool is a command-line utility for encoding and decoding QR codes.

## Usage
### Encode
Usage: `qrtool encode [OPTION]… [STRING]`

| Option | Description |
| ------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-o, --output FILE` | Output the result to a file. |
| `-r, --read-from FILE` | Read input data from a file. This option conflicts with `[STRING]`. |
| `-s, --size NUMBER` | The module size in pixels. If this option is not specified, the module size is 8 when the output format is PNG or SVG, and 1 otherwise. |
| `-l, --error-correction-level LEVEL` | Error correction level: `L` (7% of codewords can be restored), `M` (15%, the default), `Q` (25%), or `H` (30%). |
| `--level LEVEL` | Alias for `-l, --error-correction-level`. |
| `-m, --margin NUMBER` | The width of the margin. If this option is not specified, the margin will be 4 for normal QR codes and 2 for Micro QR codes. |
| `-t, --type FORMAT` | The format of the output. The possible values are: `png`, `svg`, `pic`, `ansi256`, `ansi-true-color`, `ascii`, `ascii-invert`, `unicode`, `unicode-invert` |
| `--foreground COLOR` | Foreground color as a CSS color string. Colored output is only available when the output format is PNG, SVG or an ANSI escape sequence format. Note that lossy conversion may be performed depending on the color space supported by the method used to specify the color, the color depth supported by the output format, etc. Default is black. |
| `--background COLOR` | Background color as a CSS color string. The same restrictions and caveats as for `--foreground` apply. Default is white. |

### Decode
Usage: `qrtool decode [OPTION]… [IMAGE]`

| Option | Description |
| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-t, --type FORMAT` | The format of the input. If FORMAT is not specified, the format is determined based on the extension or the magic number. The possible values are: `bmp`, `dds`, `farbfeld`, `gif`, `hdr`, `ico`, `jpeg`, `openexr`, `png`, `pnm`, `qoi`, `svg`, `tga`, `tiff`, `webp`, `xbm` |
23 technology/applications/cli/refold.md Normal file
@@ -0,0 +1,23 @@
---
obj: application
repo: https://github.com/wr7/refold
rev: 2025-01-30
---

# refold
refold is a command-line tool for text-wrapping, similar to Unix `fold`. Unlike `fold`, refold recombines lines before performing line-wrapping, and it automatically detects line prefixes.

## Usage
Usage: `refold [FLAGS...]`

refold reads from stdin and writes to stdout.

### Options

| Option | Description |
| ------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------- |
| `--width, -w <width>` | Sets the width to wrap at (default: 80). |
| `--prefix, -p <prefix>` | Sets the prefix for each line (default: auto-detect). Set to an empty string to disable prefixing entirely. |
| `--boundaries, -b, --unicode-boundaries` | Sets the split mode to "boundaries" mode (default). In boundaries mode, line wrapping may occur between Unicode breakable characters. |
| `--spaces, -s` | Sets the split mode to "space" mode. In space mode, line wrapping may occur between words separated by ASCII spaces. |
| `--characters, -c, --break-words, --break` | Sets the split mode to "character" mode. In character mode, line wrapping may occur between any two characters. |
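
For a rough feel of the refill behavior without installing refold, coreutils `fmt` also rejoins lines before rewrapping (though it lacks refold's prefix detection):

```shell
# Two short input lines are first recombined into one paragraph,
# then wrapped so no output line exceeds 20 columns.
printf 'one two three\nfour five six seven eight nine ten\n' | fmt -w 20
```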

45 technology/applications/cli/rexturl.md Normal file
@@ -0,0 +1,45 @@
---
obj: application
repo: https://github.com/vschwaberow/rexturl
rev: 2025-01-30
---

# rexturl
A versatile command-line tool for parsing and manipulating URLs.

## Usage
Usage: `rexturl [OPTIONS] [URLS...]`

If no URLs are provided, rexturl will read from stdin.

### Options

| Option | Description |
| ------------------- | --------------------------------------------------------- |
| `--urls <URLS>` | Input URLs to process |
| `--scheme` | Extract and display the URL scheme |
| `--username` | Extract and display the username from the URL |
| `--host` | Extract and display the hostname |
| `--port` | Extract and display the port number |
| `--path` | Extract and display the URL path |
| `--query` | Extract and display the query string |
| `--fragment` | Extract and display the URL fragment |
| `--sort` | Sort the output |
| `--unique` | Remove duplicate entries from the output |
| `--json` | Output results in JSON format |
| `--all` | Display all URL components |
| `--custom` | Enable custom output mode |
| `--format <FORMAT>` | Custom output format [default: `{scheme}://{host}{path}`] |
| `--domain` | Extract and display the domain |

### Custom Output Format
When using `--custom` and `--format`, you can use the following placeholders:
- `{scheme}`
- `{username}`
- `{host}`
- `{domain}`
- `{port}`
- `{path}`
- `{query}`
- `{fragment}`
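
These placeholders are standard URL components; as a cross-check (using Python's `urllib.parse`, not rexturl itself), a URL splits into the same fields:

```shell
python3 - <<'EOF'
from urllib.parse import urlsplit

u = urlsplit("https://user@example.com:8443/path?q=1#frag")
# scheme, username, host, port, path, query, fragment
print(u.scheme, u.username, u.hostname, u.port, u.path, u.query, u.fragment)
EOF
```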

54 technology/applications/cli/tagctl.md Normal file
@@ -0,0 +1,54 @@
---
obj: application
repo: https://gitlab.com/KodyVB/tagctl
rev: 2025-01-30
---

# tagctl
Tagctl is a command-line program which can add tags to or remove tags from files.
The tags can either be in the file name or under `user.xdg.tags` in the extended attributes.

## Usage
Usage: `tagctl [OPTIONS] [FILES]...`

| Option | Description |
| ----------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-t, --tag <tag>` | Tag to add/remove on selected files. `%p` uses the parent directory name, `%y` the modified year, `%m` the modified month, `%d` the modified day, and `%w` the modified weekday |
| `-d, --delimiter <delimiter>` | Separator for multiple tags (default: `,`) |
| `-i, --input` | Accepts input from stdin |
| `-x, --xattr` | Adds/removes tags via xattr under `user.xdg.tags` |
| `-r, --remove` | Removes tags instead of adding them |
| `-R, --remove_all` | Removes all tags |
| `-v, --verbose` | Increases verbosity of output |
| `-g, --generate_autocomplete <generate_autocomplete>` | The shell to generate auto-completion for: `bash`, `elvish`, `fish`, `zsh` |

## Examples
**Add tag `example` to files in the current directory using file names:**
```shell
tagctl -t example "$(ls)"
ls | tagctl --input --tag example
```

**Remove tag `example` from files in the current directory using file names:**
```shell
tagctl -r --tag=example "$(ls)"
ls | tagctl --remove -it example
```

**Add tag `example` to files in the current directory using extended attributes:**
```shell
tagctl -xt example "$(ls)"
ls | tagctl --xattr --input --tag example
```

**Remove tag `example` from files in the current directory using extended attributes:**
```shell
tagctl -xr --tag=example "$(ls)"
ls | tagctl --xattr --remove -it example
```

**Add tag `example` to two sets of inputs using file names:**
```shell
find /home/user/Documents | tagctl -it "example" "$(ls)"
```

54 technology/applications/cli/unionfarm.md Normal file
@@ -0,0 +1,54 @@
---
obj: application
repo: https://codeberg.org/chrysn/unionfarm
rev: 2025-01-30
---

# unionfarm
This is a small utility for managing symlink farms. It takes a "farm" directory and any number of "data" directories, and creates (or updates) the union (or overlay) of the data directories in the farm directory by placing symlinks to the data directories.

It is similar to:
- union mounts (overlay/overlayfs) -- but it works without system privileges; it is not live, but in turn it can err out on duplicate files rather than silently picking the highest-ranking one

Usage: `unionfarm <FARM> [DATA]...`

## Example

```
$ tree my-photos
my-photos
├── 2018/
│   └── Rome/
│       └── ...
└── 2019/
    └── Helsinki/
        └── DSCN2305.jpg
```

Assume you have a collection of photos as above, and want to see them overlaid with a friend's photos:

```
$ tree ~friend/photos
/home/friend/photos
├── 2018/
│   └── Amsterdam/
│       └── ...
└── 2019/
    └── Helsinki/
        └── DSC_0815.jpg
```

With unionfarm, you can create a shared view of them:

```
$ unionfarm all-photos my-photos ~friend/photos
$ tree all-photos
all-photos
├── 2018/
│   ├── Amsterdam -> /home/friend/photos/2018/Amsterdam/
│   └── Rome -> ../../my-photos/2018/Rome/
└── 2019/
    └── Helsinki/
        ├── DSC_0815.jpg -> /home/friend/photos/2019/Helsinki/DSC_0815.jpg
        └── DSCN2305.jpg -> ../../../my-photos/2019/Helsinki/DSCN2305.jpg
```
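
The symlinks unionfarm places can be approximated by hand for a single directory level; this sketch (directory names invented for illustration) shows the underlying idea:

```shell
# Build a tiny "farm" by symlinking each data directory's entries into it.
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p data-a/2018 data-b/2019 farm
for d in data-a data-b; do
  for entry in "$d"/*; do
    ln -s "../$entry" "farm/$(basename "$entry")"
  done
done
ls farm  # the farm now holds symlinks to 2018 and 2019
```

Unlike unionfarm, this sketch neither recurses into shared subdirectories (such as `2019/Helsinki` above) nor detects duplicate names across data directories.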

22 technology/applications/cli/xt.md Normal file
@@ -0,0 +1,22 @@
---
obj: application
repo: https://github.com/ahamlinman/xt
rev: 2025-01-30
---

# xt
xt is a cross-format translator for JSON, MessagePack, TOML, and YAML.

## Usage
Usage: `xt [-f format] [-t format] [file ...]`

| Option | Description |
| ----------- | ------------------------------------------------------------ |
| `-f format` | Skip detection and convert every input from the given format |
| `-t format` | Convert to the given format (default: `json`) |

## Formats
- `json`, `j`
- `msgpack`, `m`
- `toml`, `t`
- `yaml`, `y`

19 technology/applications/utilities/retry-cli.md Normal file
@@ -0,0 +1,19 @@
---
obj: application
repo: https://github.com/demoray/retry-cli
rev: 2025-01-28
---

# retry-cli
retry is a command-line tool written in Rust intended to automatically re-run failed commands with a user-configurable delay between tries.

## Usage
Usage: `retry [OPTIONS] <COMMAND>...`

| Option | Description |
| ------------------------------- | -------------------------------------------------------------- |
| `--attempts <ATTEMPTS>` | Number of retries (default: `3`) |
| `--min-duration <MIN_DURATION>` | Minimum delay between tries (default: `10ms`) |
| `--max-duration <MAX_DURATION>` | Maximum delay between tries |
| `--jitter <JITTER>` | Amount of randomization to add to the backoff (default: `0.3`) |
| `--factor <FACTOR>` | Backoff factor (default: `2`) |
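
What retry-cli packages up is a retry loop with exponential backoff; a hand-rolled sketch of the same idea in plain shell (the failing-then-succeeding toy command is invented for the demo):

```shell
attempts=3
delay=0.01   # seconds; doubled after each failure (backoff factor 2)
marker=$(mktemp)
# Toy command: fails on the first call, succeeds on the second.
cmd() { echo x >> "$marker"; [ "$(wc -l < "$marker")" -ge 2 ]; }

i=1
until cmd; do
  if [ "$i" -ge "$attempts" ]; then echo "giving up"; break; fi
  sleep "$delay"
  delay=$(awk "BEGIN { print $delay * 2 }")
  i=$((i + 1))
done
echo "finished after $i attempts"
```

retry-cli adds jitter on top, randomizing each delay so simultaneous retries from many clients do not synchronize.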

457 technology/applications/web/bitmagnet.md Normal file
@@ -0,0 +1,457 @@
---
obj: application
website: https://bitmagnet.io
---

# bitmagnet
A self-hosted BitTorrent indexer, DHT crawler, content classifier and torrent search engine with web UI, GraphQL API and Servarr stack integration.
## Docker Compose

```yml
services:
  bitmagnet:
    image: ghcr.io/bitmagnet-io/bitmagnet:latest
    container_name: bitmagnet
    ports:
      # API and WebUI port:
      - "3333:3333"
      # BitTorrent ports:
      - "3334:3334/tcp"
      - "3334:3334/udp"
    restart: unless-stopped
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_PASSWORD=postgres
      # - TMDB_API_KEY=your_api_key
    command:
      - worker
      - run
      - --keys=http_server
      - --keys=queue_server
      # comment out the next line to run without the DHT crawler
      - --keys=dht_crawler
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16-alpine
    container_name: bitmagnet-postgres
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    # ports:
    #   - "5432:5432" # Expose this port if you'd like to dig around in the database
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bitmagnet
      - PGUSER=postgres
    shm_size: 1g
    healthcheck:
      test:
        - CMD-SHELL
        - pg_isready
      start_period: 20s
      interval: 10s
```

After running `docker compose up -d` you should be able to access the web interface at http://localhost:3333. The DHT crawler should have started, and you should see items appear in the web UI within around a minute.

To run the bitmagnet CLI, use `docker compose run bitmagnet bitmagnet command...`

## Configuration

- `postgres.host`, `postgres.name`, `postgres.user`, `postgres.password` (defaults: `localhost`, `bitmagnet`, `postgres`, empty): Set these values to configure the connection to your Postgres database.
- `tmdb.api_key`: TMDB API key.
- `tmdb.enabled` (default: `true`): Specify false to disable the TMDB API integration.
- `dht_crawler.save_files_threshold` (default: `100`): Some torrents contain many thousands of files, which impacts performance and uses a lot of database disk space. This parameter sets a maximum limit for the number of files saved by the crawler with each torrent.
- `dht_crawler.save_pieces` (default: `false`): If true, the DHT crawler will save the pieces bytes from the torrent metadata. The pieces take up quite a lot of space, and aren’t currently very useful, but they may be used by future features.
- `log.level` (default: `info`): Log level.
- `log.json` (default: `false`): By default logs are output in a pretty format with colors; enable this flag if you’d prefer plain JSON.

To see a full list of available configuration options using the CLI, run:

```sh
bitmagnet config show
```

### Specifying configuration values
Configuration paths are delimited by dots. If you’re specifying configuration in a YAML file, each dot represents a nesting level. For example, to configure `log.json`, `tmdb.api_key` and `http_server.cors.allowed_origins`:

```yml
log:
  json: true
tmdb:
  api_key: my-api-key
http_server:
  cors:
    allowed_origins:
      - https://example1.com
      - https://example2.com
```

This is not a suggested configuration file; it’s just an example of how to specify configuration values.

To configure these same values with environment variables, upper-case the path and replace all dots with underscores, for example:

```sh
LOG_JSON=true \
TMDB_API_KEY=my-api-key \
HTTP_SERVER_CORS_ALLOWED_ORIGINS=https://example1.com,https://example2.com \
bitmagnet config show
```

### Configuration precedence
In order of precedence, configuration values will be read from:
- Environment variables
- `config.yml` in the current working directory
- `config.yml` in the XDG-compliant config location for the current user (for example, on macOS this is `~/Library/Application Support/bitmagnet/config.yml`)
- Default values

Environment variables can be used to configure simple scalar types (strings, numbers, booleans) and slice types (arrays). For more complex configuration types such as maps you’ll have to use YAML configuration. bitmagnet will exit with an error if it’s unable to parse a provided configuration value.

### VPN configuration
It’s recommended that you run bitmagnet behind a VPN. If you’re using Docker, then `gluetun` is a good solution for this, although the networking settings can be tricky.
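
A minimal sketch of the gluetun approach (image tag, provider and credentials are placeholders, not a tested setup): bitmagnet shares gluetun's network namespace via `network_mode: service:gluetun`, so the WebUI port is published on the gluetun container instead.

```yml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add: [NET_ADMIN]
    environment:
      - VPN_SERVICE_PROVIDER=your_provider
      - WIREGUARD_PRIVATE_KEY=your_key
    ports:
      - "3333:3333"

  bitmagnet:
    image: ghcr.io/bitmagnet-io/bitmagnet:latest
    network_mode: service:gluetun
    depends_on:
      - gluetun
```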

### Classifier
The classifier can be configured and customized to do things like:
- automatically delete torrents you don’t want in your index
- add custom tags to torrents you’re interested in
- customize the keywords and file extensions used for determining a torrent’s content type
- specify completely custom logic to classify and perform other actions on torrents

#### Background
After a torrent is crawled or imported, some further processing must be done to gather metadata, have a guess at the torrent’s contents and finally index it in the database, allowing it to be searched and displayed in the UI/API.

bitmagnet’s classifier is powered by a Domain Specific Language. The aim of this is to provide a high level of customisability, along with transparency into the classification process, which will hopefully aid collaboration on improvements to the core classifier logic.

The classifier is declared in YAML format. The application includes a core classifier that can be configured, extended or completely replaced with a custom classifier. This page documents the required format.

#### Source precedence
bitmagnet will attempt to load classifier source code from all the following locations. Any discovered classifier source will be merged with other sources in the following order of precedence:

- the core classifier
- `classifier.yml` in the XDG-compliant config location for the current user (for example, on macOS this is `~/Library/Application Support/bitmagnet/classifier.yml`)
- `classifier.yml` in the current working directory
- Classifier configuration

Note that multiple sources will be merged, not replaced. For example, keywords added to the classifier configuration will be merged with the core keywords.

The merged classifier source can be viewed with the CLI command `bitmagnet classifier show`.

#### Schema
A JSON schema for the classifier is available; some editors and IDEs will be able to validate the structure of your classifier document by specifying the `$schema` attribute:

```yml
$schema: bitmagnet.io/schemas/classifier-0.1.json
```

The classifier schema can also be viewed by running the CLI command `bitmagnet classifier schema`.
|
||||
|
||||
|
||||
The classifier declaration comprises the following components:

- **Workflows**

  A workflow is a list of actions that will be executed on all torrents when they are classified. When no custom configuration is provided, the default workflow will be run. To use a different workflow instead, specify the `classifier.workflow` configuration option with the name of your custom workflow.

- **Actions**

  An action is a piece of workflow to be executed. All actions either return an updated classification result or an error.

  For example, the following action will set the content type of the current torrent to `audiobook`:

  ```yml
  set_content_type: audiobook
  ```

  The following action will return an unmatched error:

  ```yml
  unmatched
  ```

  And the following action will delete the current torrent being classified (returning a delete error):

  ```yml
  delete
  ```

  These actions aren’t much use on their own: we’d want to check that some conditions are satisfied before setting a content type or deleting a torrent, and for this we use the `if_else` action. For example, the following action will set the content type to `audiobook` if the torrent name contains audiobook-related keywords, and will otherwise return an unmatched error:

  ```yml
  if_else:
    condition: "torrent.baseName.matches(keywords.audiobook)"
    if_action:
      set_content_type: audiobook
    else_action: unmatched
  ```

  The following action will delete a torrent if its name matches the list of banned keywords:

  ```yml
  if_else:
    condition: "torrent.baseName.matches(keywords.banned)"
    if_action: delete
  ```

  Actions may return the following types of error:

  - An unmatched error indicates that the current action did not match the current torrent
  - A delete error indicates that the torrent should be deleted
  - An unhandled error may also occur, for example if the TMDB API was unreachable

  Whenever an error is returned, the current classification will be terminated.

  Note that a workflow should never return an unmatched error. We expect to iterate through a series of checks corresponding to each content type: if the current torrent does not match the content type being checked, we proceed to the next check until we find a match; if no match can be found, the content type remains unknown. To facilitate this, we can use the `find_match` action.

  The `find_match` action is a bit like a try/catch block in some programming languages: it will try to match a particular content type and, if an unmatched error is returned, it will catch the error and proceed to the next check. For example, the following action will attempt to classify a torrent as an audiobook, and then as an ebook. If both checks fail, the content type will be unknown:

  ```yml
  find_match:
    # match audiobooks:
    - if_else:
        condition: "torrent.baseName.matches(keywords.audiobook)"
        if_action:
          set_content_type: audiobook
        else_action: unmatched
    # match ebooks:
    - if_else:
        condition: "torrent.files.map(f, f.extension in extensions.ebook ? f.size : - f.size).sum() > 0"
        if_action:
          set_content_type: ebook
        else_action: unmatched
  ```

  For a full list of available actions, please refer to the JSON schema.

#### Conditions

Conditions are used in conjunction with the `if_else` action, in order to execute an action only if a particular condition is satisfied.

The conditions in the examples above use CEL (Common Expression Language) expressions.

##### The CEL environment

CEL is already a well-documented language, so this page won’t go into detail about the CEL syntax. In the context of the bitmagnet classifier, the CEL environment exposes a number of variables:

- `torrent`: The current torrent being classified (protobuf type: `bitmagnet.Torrent`)
- `result`: The current classification result (protobuf type: `bitmagnet.Classification`)
- `keywords`: A map of strings to regular expressions, representing named lists of keywords
- `extensions`: A map of strings to string lists, representing named lists of file extensions
- `contentType`: A map of strings to enum values representing content types (e.g. `contentType.movie`, `contentType.music`)
- `fileType`: A map of strings to enum values representing file types (e.g. `fileType.video`, `fileType.audio`)
- `flags`: A map of strings to the configured values of flags
- `kb`, `mb`, `gb`: Convenience variables equal to the number of bytes in a kilobyte, megabyte and gigabyte respectively

For more details on the protocol buffer types, please refer to the protobuf schema.

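As a sketch of how these variables combine in a condition (the 40 GB threshold and the `large-movie` tag are arbitrary illustrations, not part of the core classifier), an `if_else` action could tag unusually large movie torrents:

```yml
if_else:
  condition: "result.contentType == contentType.movie && torrent.files.map(f, f.size).sum() > 40 * gb"
  if_action:
    add_tag: large-movie
```
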
##### Boolean logic (`or`, `and` & `not`)

In addition to CEL expressions, conditions may be declared using the boolean logic operators `or`, `and` and `not`. For example, the following condition evaluates to true if either the torrent consists mostly of file extensions very commonly used for music (e.g. `flac`), OR if the torrent both has a name that includes music-related keywords and consists mostly of audio files:

```yml
or:
  - "torrent.files.map(f, f.extension in extensions.music ? f.size : - f.size).sum() > 0"
  - and:
      - "torrent.baseName.matches(keywords.music)"
      - "torrent.files.map(f, f.fileType == fileType.audio ? f.size : - f.size).sum() > 0"
```

> Note that we could also have specified the above condition using just one CEL expression, but breaking up complex conditions like this is more readable.

#### Keywords

The classifier includes lists of keywords associated with different types of torrents. These aim to provide a simpler alternative to regular expressions, and the classifier will compile all keyword lists to regular expressions that can be used within CEL expressions. In order for a keyword to match, it must appear as an isolated token in the test string: that is, it must be either at the beginning of the string or preceded by a non-word character, and either at the end of the string or followed by a non-word character.

Reserved characters in the keyword syntax are:

- parentheses `(` and `)` enclose a group
- `|` is an `OR` operator
- `*` is a wildcard operator
- `?` makes the previous character or group optional
- `+` specifies one or more of the previous character
- `#` specifies any number
- a space specifies any non-word, non-number character

For example, to define some music- and audiobook-related keywords:

```yml
keywords:
  music: # define music-related keywords
    - music # all letters are case-insensitive, and must be defined in lowercase unless escaped
    - discography
    - album
    - \V.?\A # escaped letters are case-sensitive; matches "VA", "V.A" and "V.A.", but not "va"
    - various artists # matches "various artists" and "Various.Artists"
  audiobook: # define audiobook-related keywords
    - (audio)?books?
    - (un)?abridged
    - narrated
    - novels?
    - (auto)?biograph(y|ies) # matches "biography", "autobiographies" etc.
```

If you’d rather use plain old regular expressions, the CEL syntax supports that too, for example `torrent.baseName.matches("^myregex$")`.

#### Extensions

The classifier includes lists of file extensions associated with different types of content. For example, to identify torrents of type `comic` by their file extensions, the extensions are first declared:

```yml
extensions:
  comic:
    - cb7
    - cba
    - cbr
    - cbt
    - cbz
```

The extensions can now be used as part of a condition within an `if_else` action:

```yml
if_else:
  condition: "torrent.files.map(f, f.extension in extensions.comic ? f.size : - f.size).sum() > 0"
  if_action:
    set_content_type: comic
  else_action: unmatched
```

#### Flags

Flags can be used to configure workflows. In order to use a flag in a workflow, it must first be defined. For example, the core classifier defines the following flags, which are used in the default workflow:

```yml
flag_definitions:
  tmdb_enabled: bool
  delete_content_types: content_type_list
  delete_xxx: bool
```

These flags can be referenced within CEL expressions, for example to delete adult content if the `delete_xxx` flag is set to true:

```yml
if_else:
  condition: "flags.delete_xxx && result.contentType == contentType.xxx"
  if_action: delete
```

#### Configuration

The classifier can be customized by providing a `classifier.yml` file in a supported location, as described above. If you only want to make some minor modifications, it may be more convenient to specify these using the main application configuration instead, by providing values in either `config.yml` or as environment variables. The application configuration exposes some, but not all, properties of the classifier.

For example, in your `config.yml` you could specify:

```yml
classifier:
  # specify a custom workflow to be used:
  workflow: custom
  # add to the core list of music keywords:
  keywords:
    music:
      - my-custom-music-keyword
  # add a file extension to the list of audiobook-related extensions:
  extensions:
    audiobook:
      - abc
  # auto-delete all comics
  flags:
    delete_content_types:
      - comic
```

Or as environment variables you could specify:

```shell
# disable the TMDB API integration, use a custom workflow,
# and auto-delete all adult content:
TMDB_ENABLED=false CLASSIFIER_WORKFLOW=custom CLASSIFIER_DELETE_XXX=true \
  bitmagnet worker run --all
```

#### Validation

The classifier source is compiled on initial load, and all structural and syntax errors should be caught at compile time. If there are errors in your classifier source, bitmagnet should exit with an error message indicating the location of the problem.

#### Testing on individual torrents

You can test the classifier on one or more individual torrents using the `bitmagnet process` CLI command:

```shell
bitmagnet process --infoHash=aaaaaaaaaaaaaaaaaaaa --infoHash=bbbbbbbbbbbbbbbbbbbb
```

#### Reclassify all torrents

The classifier is updated regularly, and to reclassify already-crawled torrents you’ll need to run the CLI and queue them for reprocessing.

For context: after torrents are crawled or imported, they won’t show up in the UI straight away. They must first be “processed” by the job queue. This involves a few steps:

- The classifier attempts to classify the torrent (determine its content type, and match it to a known piece of content)
- The search index for the torrent is built
- The torrent content record is saved to the database

The reprocess command will re-queue torrents, allowing the latest updates to be applied to their content records.

To reprocess all torrents in your index, simply run `bitmagnet reprocess`. If you’ve indexed a lot of torrents this will take a while, so there are a few options available to control exactly what gets reprocessed:

- `apisDisabled`: Disable API calls during classification. This makes the classifier run a lot faster, but disables identification with external services such as TMDB (metadata already gathered from external APIs is not lost).
- `contentType`: Only reprocess torrents of a certain content type. For example, `bitmagnet reprocess --contentType movie` will only reprocess movies. Multiple content types can be comma-separated, and `null` refers to torrents of unknown content type.
- `orphans`: Only reprocess torrents that have no content record.
- `classifyMode`: Controls how already-matched torrents are handled:
  - `default`: Only attempt to match previously unmatched torrents
  - `rematch`: Ignore any pre-existing match and always classify from scratch (a torrent is “matched” if it’s associated with a specific piece of content from one of the API integrations, currently only TMDB)

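Combining these options, for example, the following would re-run the classifier on torrents of unknown content type while skipping external API calls:

```shell
bitmagnet reprocess --contentType null --apisDisabled
```
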
#### Practical use cases and examples

##### Auto-delete specific content types

The default workflow provides a flag that allows specific content types to be deleted automatically. For example, to delete all comic, software and xxx torrents:

```yml
flags:
  delete_content_types:
    - comic
    - software
    - xxx
```

Auto-deleting adult content has been one of the most requested features. For convenience, this is exposed as the configuration option `classifier.delete_xxx`, and can be specified with the environment variable `CLASSIFIER_DELETE_XXX=true`.

##### Auto-delete torrents containing specific keywords

Any torrent whose name contains keywords in the `banned` list will be automatically deleted. This is primarily used for deleting CSAM content, but the list can be extended to auto-delete torrents matching any other keywords:

```yml
keywords:
  banned:
    - my-hated-keyword
```

##### Disable the TMDB API integration

The `tmdb_enabled` flag can be used to disable the TMDB API integration:

```yml
flags:
  tmdb_enabled: false
```

For convenience, this is also exposed as the configuration option `tmdb.enabled`, and can be specified with the environment variable `TMDB_ENABLED=false`.

The `apis_enabled` flag has the same effect, disabling TMDB along with any future API integrations:

```yml
flags:
  apis_enabled: false
```

API integrations can also be disabled for individual classifier runs, without disabling them globally, by passing the `--apisDisabled` flag to the `reprocess` command.

##### Extend the default workflow with custom logic

Custom workflows can be added in the `workflows` section of the classifier document. It is possible to extend the default workflow by using the `run_workflow` action within your custom workflow, for example:

```yml
workflows:
  custom:
    - <my custom action to be executed before the default workflow>
    - run_workflow: default
    - <my custom action to be executed after the default workflow>
```

A concrete example of this is adding tags to torrents based on custom criteria.

##### Use tags to create custom torrent categories

Is there a category of torrent you’re interested in that isn’t captured by one of the core content types? Torrent tags are intended to capture custom categories and content types.

Let’s imagine you’d like to surface torrents containing interesting documents. The interesting documents have specific file extensions, and their filenames contain specific keywords. Let’s create a custom action to tag torrents containing interesting documents:

```yml
# define file extensions for the documents we're interested in:
extensions:
  interesting_documents:
    - doc
    - docx
    - pdf
# define keywords that must be present in the filenames of the interesting documents:
keywords:
  interesting_documents:
    - interesting
    - fascinating
# extend the default workflow with a custom workflow to tag torrents containing interesting documents:
workflows:
  custom:
    # first run the default workflow:
    - run_workflow: default
    # then add the tag to any torrents containing interesting documents:
    - if_else:
        condition: "torrent.files.filter(f, f.extension in extensions.interesting_documents && f.basePath.matches(keywords.interesting_documents)).size() > 0"
        if_action:
          add_tag: interesting-documents
```

To specify that the custom workflow should be used, remember to set the `classifier.workflow` configuration option, e.g. `CLASSIFIER_WORKFLOW=custom bitmagnet worker run --all`.