Compare commits


1 commit

Author SHA1 Message Date
d9470ab980
update 2024-12-23 08:58:46 +01:00
30 changed files with 64 additions and 2244 deletions

View file

@ -1,6 +1,6 @@
---
obj: meta/collection
rev: 2025-01-30
rev: 2024-12-10
---
# Applications
@ -105,7 +105,6 @@ rev: 2025-01-30
- [SnapDrop](./network/SnapDrop.md)
- [OnionShare](./network/OnionShare.md)
- [qBittorrent](./network/qBittorrent.md)
- [bitmagnet](./web/bitmagnet.md)
## Utilities
- [Bottles](./utilities/Bottles.md)
@ -120,6 +119,7 @@ rev: 2025-01-30
- [Wildcard](utilities/Wildcard.md)
- [Textpieces](utilities/Textpieces.md)
- [ImHex](utilities/ImHex.md)
- [VirtManager](utilities/virt-manager.md)
# Mobile
- [Aegis](./utilities/Aegis.md)
@ -199,7 +199,6 @@ rev: 2025-01-30
- [bat](./cli/bat.md)
- [glow](./cli/glow.md)
- [tailspin](./cli/tailspin.md)
- [csvlens](./cli/csvlens.md)
### Editor
- [nano](./cli/nano.md)
@ -235,21 +234,11 @@ rev: 2025-01-30
- [yazi](./cli/yazi.md)
- [GPG](../cryptography/GPG.md)
- [OpenSSL](../cryptography/OpenSSL.md)
- [age](../cryptography/age.md)
- [tomb](./cli/tomb.md)
- [dysk](./cli/dysk.md)
- [pass](./cli/pass.md)
- [ocrs](./cli/ocrs.md)
- [stew](./cli/stew.md)
- [names](./cli/names.md)
- [qrtool](./cli/qrtool.md)
- [tagctl](./cli/tagctl.md)
- [unionfarm](./cli/unionfarm.md)
- [xt](./cli/xt.md)
- [refold](./cli/refold.md)
- [rexturl](./cli/rexturl.md)
- [mhost](./cli/mhost.md)
- [timr-tui](./cli/timr-tui.md)
## System
- [Core Utils](./cli/system/Core%20Utils.md)
@ -264,8 +253,6 @@ rev: 2025-01-30
- [wine](../windows/Wine.md)
- [sbctl](../linux/sbctl.md)
- [systemd-cryptenroll](../linux/systemd/systemd-cryptenroll.md)
- [bubblewrap](./utilities/bubblewrap.md)
- [retry-cli](./utilities/retry-cli.md)
## Development
- [act](./development/act.md)
@ -280,8 +267,6 @@ rev: 2025-01-30
- [Podman](../tools/Podman.md)
- [serie](./cli/serie.md)
- [usql](./cli/usql.md)
- [kondo](./cli/kondo.md)
- [licensit](./development/licensit.md)
## Media
- [yt-dlp](./media/yt-dlp.md)

View file

@ -1,80 +0,0 @@
---
obj: application
repo: https://github.com/ys-l/csvlens
rev: 2025-01-31
---
# csvlens
`csvlens` is a command-line CSV file viewer. It is like `less`, but made for CSV.
## Usage
Run `csvlens` by providing the CSV filename:
```
csvlens <filename>
```
Pipe CSV data directly to `csvlens`:
```
<your commands producing some csv data> | csvlens
```
### Key bindings
| Key | Action |
| ---------------------------- | ------------------------------------------------------------------ |
| `hjkl` (or `← ↓ ↑ →`)       | Scroll one row or column in the given direction                     |
| `Ctrl + f` (or `Page Down`) | Scroll one window down |
| `Ctrl + b` (or `Page Up`) | Scroll one window up |
| `Ctrl + d` (or `d`) | Scroll half a window down |
| `Ctrl + u` (or `u`) | Scroll half a window up |
| `Ctrl + h` | Scroll one window left |
| `Ctrl + l` | Scroll one window right |
| `Ctrl + ←` | Scroll left to first column |
| `Ctrl + →` | Scroll right to last column |
| `G` (or `End`) | Go to bottom |
| `g` (or `Home`) | Go to top |
| `<n>G` | Go to line `n` |
| `/<regex>` | Find content matching regex and highlight matches |
| `n` (in Find mode) | Jump to next result |
| `N` (in Find mode) | Jump to previous result |
| `&<regex>` | Filter rows using regex (show only matches) |
| `*<regex>` | Filter columns using regex (show only matches) |
| `TAB` | Toggle between row, column or cell selection modes |
| `>` | Increase selected column's width |
| `<` | Decrease selected column's width |
| `Shift + ↓` (or `Shift + j`) | Sort rows or toggle sort direction by the selected column |
| `#` (in Cell mode) | Find and highlight rows like the selected cell |
| `@` (in Cell mode) | Filter rows like the selected cell |
| `y` | Copy the selected row or cell to clipboard |
| `Enter` (in Cell mode) | Print the selected cell to stdout and exit |
| `-S` | Toggle line wrapping |
| `-W` | Toggle line wrapping by words |
| `r` | Reset to default view (clear all filters and custom column widths) |
| `H` (or `?`) | Display help |
| `q` | Exit |
### Optional parameters
* `-d <char>`: Use this delimiter when parsing the CSV
(e.g. `csvlens file.csv -d '\t'`).
Specify `-d auto` to auto-detect the delimiter.
* `-t`, `--tab-separated`: Use tab as the delimiter (when specified, `-d` is ignored).
* `-i`, `--ignore-case`: Ignore case when searching. This flag is ignored if any
uppercase letters are present in the search string.
* `--no-headers`: Do not interpret the first row as headers.
* `--columns <regex>`: Use this regex to select columns to display by default.
* `--filter <regex>`: Use this regex to filter rows to display by default.
* `--find <regex>`: Use this regex to find and highlight matches by default.
* `--echo-column <column_name>`: Print the value of this column at the selected
row to stdout on `Enter` key and then exit.
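A quick sketch combining several of the flags above (the file and column names are hypothetical):
```shell
# view a tab-separated file without a header row, ignoring case when searching
csvlens data.tsv -t --ignore-case --no-headers

# pre-filter rows and print the value of a chosen column on Enter
csvlens logs.csv --filter 'ERROR' --echo-column message
```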

View file

@ -1,71 +1,38 @@
---
obj: application
repo: https://github.com/casey/intermodal
website: https://imdl.io
rev: 2025-01-28
---
# Intermodal
[Repo](https://github.com/casey/intermodal)
Intermodal is a user-friendly and featureful command-line [BitTorrent](../../internet/BitTorrent.md) metainfo utility. The binary is called `imdl` and runs on [Linux](../../linux/Linux.md), [Windows](../../windows/Windows.md), and [macOS](../../macos/macOS.md).
## Usage
### Create torrent file
```shell
imdl torrent create [OPTIONS] <FILES>
imdl torrent create file
```
| Option | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------- |
| `-F, --follow-symlinks` | Follow symlinks in torrent input (default: no) |
| `-f, --force` | Overwrite destination `.torrent` file if it exists |
| `--ignore` | Skip files listed in `.gitignore`, `.ignore`, `.git/info/exclude`, and `git config --get core.excludesFile` |
| `-h, --include-hidden` | Include hidden files that would otherwise be skipped |
| `-j, --include-junk` | Include junk files that would otherwise be skipped |
| `-M, --md5`                      | Include MD5 checksum of each file in the torrent (warning: MD5 is broken)                                    |
| `--no-created-by` | Do not populate `created by` key with imdl version information |
| `--no-creation-date` | Do not populate `creation date` key with current time |
| `-O, --open` | Open `.torrent` file after creation (uses platform-specific opener) |
| `--link` | Print created torrent `magnet:` URL to standard output |
| `-P, --private` | Set private flag, restricting peer discovery |
| `-S, --show` | Display information about the created torrent file |
| `-V, --version` | Print version number |
| `-A, --allow <LINT>` | Allow specific lint (e.g., `small-piece-length`, `private-trackerless`) |
| `-a, --announce <URL>` | Use primary tracker announce URL for the torrent |
| `-t, --announce-tier <URL-LIST>` | Add tiered tracker announce URLs to the torrent metadata; separate announce URLs within a tier with commas.  |
| `-c, --comment <TEXT>` | Set comment text in the generated `.torrent` file |
| `--node <NODE>` | Add DHT bootstrap node to the torrent for peer discovery |
| `-g, --glob <GLOB>` | Include or exclude files matching specific glob patterns |
| `-i, --input <INPUT>` | Read contents from input source (file, dir, or standard input) |
| `-N, --name <TEXT>` | Set name of the encoded magnet link to specific text |
| `-o, --output <TARGET>` | Save `.torrent` file to specified target or print to output |
| `--peer <PEER>` | Add peer specification to the generated magnet link |
| `-p, --piece-length <BYTES>` | Set piece length for encoding torrent metadata |
| `--sort-by <SPEC>` | Determine order of files within the encoded torrent (path, size, or both) |
| `-s, --source <TEXT>` | Set source field in encoded torrent metadata to specific text |
| `--update-url <URL>` | Set URL where revised version of metainfo can be downloaded |
Flags:
```shell
-N, --name <TEXT> Set name of torrent
-i, --input <INPUT> Torrent Files
-c, --comment <TEXT> Torrent Comment
-a, --announce <URL> Torrent Tracker
```
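As a sketch, a typical create invocation using the options above might look like this (the path, tracker URL, and comment are hypothetical):
```shell
imdl torrent create \
  --input ./my-release \
  --announce https://tracker.example.org/announce \
  --comment "My first release" \
  --show \
  --output my-release.torrent
```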
### Show torrent information
```shell
imdl torrent show <torrent>
```
You can output the information as JSON using `--json`.
### Verify torrent
```shell
imdl torrent verify <torrent>
imdl torrent verify --input torr.torrent --content file
```
### Magnet Links
### Generate magnet link
```shell
# Get magnet link from torrent file
imdl torrent link [-s, --select-only <INDICES>...] <torrent>
# Select files to download. Values are indices into the `info.files` list, e.g. `--select-only 1,2,3`.
# Get torrent file from magnet link
imdl torrent from-link [-o, --output <OUT>] <INPUT>
# Announce a torrent
imdl torrent announce <INPUT>
```

View file

@ -1,29 +0,0 @@
---
obj: application
repo: https://github.com/tbillington/kondo
rev: 2025-01-28
---
# Kondo 🧹
Cleans `node_modules`, `target`, `build`, and friends from your projects.
Excellent if
- 💾 You want to back up your code but don't want to include GBs of dependencies
- 🧑‍🎨 You try out lots of projects but hate how much space they occupy
- ⚡️ You like keeping your disks lean and zippy
## Usage
Kondo recursively cleans project directories.
Supported project types: Cargo, Node, Unity, SBT, Haskell Stack, Maven, Unreal Engine, Jupyter Notebook, Python, CMake, Composer, Pub, Elixir, Swift, Gradle, and .NET projects.
Usage: `kondo [OPTIONS] [DIRS]...`
| Option | Description |
| ----------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-I, --ignored-dirs <IGNORED_DIRS>` | Directories to ignore. Will also prevent recursive traversal within |
| `-q, --quiet...` | Quiet mode. Won't output to the terminal. `-qq` prevents all output |
| `-a, --all` | Clean all found projects without confirmation |
| `-L, --follow-symlinks` | Follow symbolic links |
| `-s, --same-filesystem` | Restrict directory traversal to the root filesystem |
| `-o, --older <OLDER>`               | Only consider directories whose files were last modified at least n units of time ago, e.g. `20d`. Units are m: minutes, h: hours, d: days, w: weeks, M: months and y: years |
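For example, a minimal sketch based on the options above (the directory path is hypothetical):
```shell
# interactively clean build artifacts from projects under ~/projects
# that have not been modified in the last 3 months
kondo --older 3M ~/projects
```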

View file

@ -1,122 +0,0 @@
---
obj: application
repo: https://github.com/lukaspustina/mhost
website: https://mhost.pustina.de
rev: 2025-01-30
---
# mhost
A modern take on the classic `host` DNS lookup utility, including an easy-to-use and very fast Rust lookup library.
## Use Cases
### Just look up an IP address
```shell
$ mhost l github.com
```
### Just look up an IP address, using even more than just your local name servers
```shell
$ mhost server-lists public-dns -o servers.txt
$ mhost --limit 6000 --max-concurrent-servers 1000 --timeout 1 -f servers.txt l www.github.com
```
The first command downloads a list of publicly available name servers maintained by the Public DNS community. Usually only a subset of these are reachable, but it is still a large set of active name servers.
The second command uses the name server list from before and queries all of them concurrently. These settings are very aggressive and put a lot of stress on your internet connection; mhost's default settings are much more cautious.
### Just look up an IP address, using UDP, TCP, DoT, and DoH
```shell
$ mhost -s 1.1.1.1 -s tcp:1.1.1.1 -s tls:1.1.1.1:853,tls_auth_name=cloudflare-dns.com -s https:1.1.1.1:443,tls_auth_name=cloudflare-dns.com,name=Cloudflare -p l github.com
```
As mentioned before, mhost supports DNS queries over UDP, TCP, DNS over TLS (DoT), as well as DNS over HTTPS (DoH). In the above example, mhost uses all four protocols to query Cloudflare's name servers.
This command also shows the syntax for name server specifications, which in general is `protocol:<host name | ip address>:port,tls_auth_name=hostname,name=human-readable-name`.
### Discover a domain
Sometimes you want to know which host names and subdomains a domain has. mhost offers a simple command to help you find these. Please note that mhost only uses DNS-specific discovery methods. If you want even deeper discoveries using Google, Shodan, etc., there are other tools available.
```shell
$ mhost -p d github.com -s
```
This command uses the predefined name servers to discover the GitHub domain. The `-s` reduces all discovered names to real subdomains of github.com.
### Explore autonomous systems
You can go one step further and explore the autonomous systems GitHub uses. In order to discover those, you can use the following commands:
```shell
$ mhost -p l --all -w github.com
$ mhost -p l --all 140.82.121.0/24
```
### Check your name server configuration
```shell
$ mhost -p c github.com -p
```
## Usage
mhost has three main commands: `lookup`, `discover`, and `check`. `lookup` looks up arbitrary DNS records of a domain name. `discover` tries various methods to discover host names and subdomains of a domain. `check` uses lints to check if all records of a domain name adhere to the DNS RFC.
### General Options
| Option | Description |
| ------------------------------------------ | -------------------------------------------------------------------------------------------------- |
| `--use-system-resolv-opt`                  | Uses options set in `/etc/resolv.conf`                                                              |
| `--no-system-nameservers`                  | Ignores nameservers from `/etc/resolv.conf`                                                          |
| `-S, --no-system-lookups` | Ignores system nameservers for lookups |
| `--resolv-conf <FILE>` | Uses alternative resolv.conf file |
| `--ndots <NUMBER>` | Sets number of dots to qualify domain name as FQDN [default: 1] |
| `--search-domain <DOMAIN>` | Sets the search domain to append if HOSTNAME has less than `ndots` dots |
| `--system-nameserver <IP ADDR>...` | Adds system nameserver for system lookups; only IP addresses allowed |
| `-s, --nameserver <HOSTNAME / IP ADDR>...` | Adds nameserver for lookups |
| `-p, --predefined` | Adds predefined nameservers for lookups |
| `--predefined-filter <PROTOCOL>` | Filters predefined nameservers by protocol [default: udp] [possible values: udp, tcp, https, tls] |
| `--list-predefined` | Lists all predefined nameservers |
| `-f, --nameservers-from-file <FILE>` | Adds nameservers from file |
| `--limit <NUMBER>` | Sets max. number of nameservers to query [default: 100] |
| `--max-concurrent-servers <NUMBER>` | Sets max. concurrent nameservers [default: 10] |
| `--max-concurrent-requests <NUMBER>` | Sets max. concurrent requests per nameserver [default: 5] |
| `--retries <NUMBER>` | Sets number of retries if first lookup to nameserver fails [default: 0] |
| `--timeout <TIMEOUT>` | Sets timeout in seconds for responses [default: 5] |
| `-m, --resolvers-mode <MODE>` | Sets resolvers lookup mode [default: multi] [possible values: multi, uni] |
| `--wait-multiple-responses` | Waits until timeout for additional responses from nameservers |
| `--no-abort-on-error` | Sets do-not-ignore errors from nameservers |
| `--no-abort-on-timeout` | Sets do-not-ignore timeouts from nameservers |
| `--no-aborts` | Sets do-not-ignore errors and timeouts from nameservers |
| `-o, --output <FORMAT>` | Sets the output format for result presentation [default: summary] [possible values: json, summary] |
| `--output-options <OPTIONS>` | Sets output options |
| `--show-errors` | Shows error counts |
| `-q, --quiet` | Does not print anything but results |
| `--no-color` | Disables colorful output |
| `--ascii` | Uses only ASCII compatible characters for output |
### Lookup Options
| Option | Description |
| -------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--all` | Enables lookups for all record types |
| `-s`, `--service`                      | Parses ARG as service spec and sets record type to SRV                                                                                                                                |
| `-w`, `--whois` | Retrieves Whois information about A, AAAA, and PTR records |
| `-h`, `--help` | Prints help information |
| `-t`, `--record-type <RECORD TYPE>...` | Sets record type to lookup, will be ignored in case of IP address lookup [default: A,AAAA,CNAME,MX] [possible values: A, AAAA, ANAME, ANY, CNAME, MX, NULL, NS, PTR, SOA, SRV, TXT] |
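For example, a minimal sketch restricting a lookup to one record type, using the predefined name servers:
```shell
# look up only MX records for a domain
mhost -p l -t MX github.com
```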
### Discover Options
| Option | Description |
| ----------------------------------- | ------------------------------------------------------------------------------------------ |
| `-p`, `--show-partial-results` | Shows results after each lookup step |
| `-w`, `--wordlist-from-file <FILE>` | Uses wordlist from file |
| `--rnd-names-number <NUMBER>` | Sets number of random domain names to generate for wildcard resolution check [default: 3] |
| `--rnd-names-len <LEN>` | Sets length of random domain names to generate for wildcard resolution check [default: 32] |
| `-s`, `--subdomains-only` | Shows subdomains only omitting all other discovered names |
### Check Options
| Option | Description |
| ----------------------------- | ------------------------------------------- |
| `--show-partial-results` | Shows results after each check step |
| `--show-intermediate-lookups` | Shows all lookups made by all checks        |
| `--no-cnames` | Does not run cname lints |
| `--no-soa` | Does not run SOA check |
| `--no-spf` | Does not run SPF check |

View file

@ -1,17 +0,0 @@
---
obj: application
repo: https://github.com/fnichol/names
rev: 2025-01-28
---
# names
Random name generator for Rust
## Usage
```
> names
selfish-change
```
Usage: `names [-n, --number] <AMOUNT>`
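For example, a minimal sketch based on the usage line above:
```shell
# generate five random names
names 5
```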

View file

@ -1,31 +0,0 @@
---
obj: application
repo: https://github.com/sorairolake/qrtool
rev: 2025-01-30
---
# qrtool
qrtool is a command-line utility for encoding or decoding QR codes.
## Usage
### Encode
Usage: `qrtool encode [OPTION]… [STRING]`
| Option | Description |
| ------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-o, --output FILE` | Output the result to a file. |
| `-r, --read-from FILE` | Read input data from a file. This option conflicts with `[STRING]`. |
| `-s, --size NUMBER` | The module size in pixels. If this option is not specified, the module size is 8 when the output format is PNG or SVG, and 1 otherwise. |
| `-l, --error-correction-level LEVEL` | Error correction level. The possible values are: Level `L`. 7% of codewords can be restored. Level `M`. 15% of codewords can be restored. This is the default value. Level `Q`. 25% of codewords can be restored. Level `H`. 30% of codewords can be restored. |
| `--level LEVEL` | Alias for `-l, --error-correction-level`. |
| `-m, --margin NUMBER`                | The width of the margin. If this option is not specified, the margin will be 4 for normal QR codes and 2 for Micro QR codes.                                                                                                                                                                                                                      |
| `-t, --type FORMAT` | The format of the output. The possible values are: `png`, `svg`, `pic`, `ansi256`, `ansi-true-color`, `ascii`, `ascii-invert`, `unicode`, `unicode-invert` |
| `--foreground COLOR` | Foreground color. COLOR takes a CSS color string. Colored output is only available when the output format is PNG, SVG or any ANSI escape sequences. Note that lossy conversion may be performed depending on the color space supported by the method to specify a color, the color depth supported by the output format, etc. Default is black. |
| `--background COLOR` | Background color. COLOR takes a CSS color string. Colored output is only available when the output format is PNG, SVG or any ANSI escape sequences. Note that lossy conversion may be performed depending on the color space supported by the method to specify a color, the color depth supported by the output format, etc. Default is white. |
### Decode
Usage: `qrtool decode [OPTION]… [IMAGE]`
| Option | Description |
| ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-t, --type FORMAT` | The format of the input. If FORMAT is not specified, the format is determined based on the extension or the magic number. The possible values are: `bmp`, `dds`, `farbfeld`, `gif`, `hdr`, `ico`, `jpeg`, `openexr`, `png`, `pnm`, `qoi`, `svg`, `tga`, `tiff`, `webp`, `xbm` |
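A round-trip sketch using the options above (the file name is hypothetical):
```shell
# encode a string into a PNG, then decode it back to text
qrtool encode -o qr.png 'https://example.com'
qrtool decode qr.png
```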

View file

@ -1,23 +0,0 @@
---
obj: application
repo: https://github.com/wr7/refold
rev: 2025-01-30
---
# refold
refold is a command-line tool for performing text wrapping, similar to Unix `fold`. Unlike `fold`, refold will recombine lines before performing line wrapping, and it will automatically detect line prefixes.
## Usage
Usage: `refold [FLAGS...]`
refold reads from stdin and writes to stdout.
### Options
| Option | Description |
| ------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------- |
| `--width, -w <width>` | Sets the width to wrap at (default 80). |
| `--prefix, -p <prefix>` | Sets the prefix for each line (default: auto detect). Set to an empty string to disable prefixing entirely. |
| `--boundaries, -b, --unicode-boundaries` | Sets the split mode to "boundaries" mode (default). In boundaries mode, line wrapping may occur in-between unicode breakable characters. |
| `--spaces, -s` | Sets the split mode to "space" mode. In space mode, line wrapping may occur in-between words separated by ASCII spaces. |
| `--characters, -c, --break-words, --break` | Sets the split mode to "character" mode. In character mode, line wrapping may occur in-between any two characters. |
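For instance (a minimal sketch; the input file is hypothetical):
```shell
# rewrap a text file to 72 columns, with the line prefix auto-detected
refold -w 72 < notes.txt
```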

View file

@ -1,45 +0,0 @@
---
obj: application
repo: https://github.com/vschwaberow/rexturl
rev: 2025-01-30
---
# rexturl
A versatile command-line tool for parsing and manipulating URLs.
## Usage
Usage: `rexturl [OPTIONS] [URLS...]`
If no URLs are provided, rexturl will read from stdin.
### Options
| Option | Description |
| ------------------- | --------------------------------------------------------- |
| `--urls <URLS>` | Input URLs to process |
| `--scheme` | Extract and display the URL scheme |
| `--username` | Extract and display the username from the URL |
| `--host` | Extract and display the hostname |
| `--port` | Extract and display the port number |
| `--path` | Extract and display the URL path |
| `--query` | Extract and display the query string |
| `--fragment` | Extract and display the URL fragment |
| `--sort` | Sort the output |
| `--unique` | Remove duplicate entries from the output |
| `--json` | Output results in JSON format |
| `--all` | Display all URL components |
| `--custom` | Enable custom output mode |
| `--format <FORMAT>` | Custom output format [default: `{scheme}://{host}{path}`] |
| `--domain`          | Extract and display the domain                             |
### Custom Output Format
When using `--custom` and `--format`, you can use the following placeholders:
- `{scheme}`
- `{username}`
- `{host}`
- `{domain}`
- `{port}`
- `{path}`
- `{query}`
- `{fragment}`
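For example (a sketch; the input URL and file are hypothetical):
```shell
# print host:port/path for a URL using a custom format
echo 'https://user@example.com:8443/a/b?q=1' | rexturl --custom --format '{host}:{port}{path}'

# extract unique, sorted hostnames from a list of URLs
cat urls.txt | rexturl --host --unique --sort
```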

View file

@ -1,54 +0,0 @@
---
obj: application
repo: https://gitlab.com/KodyVB/tagctl
rev: 2025-01-30
---
# tagctl
Tagctl is a command-line program which can add tags to or remove tags from files.
The tags can either be in the name or under `user.xdg.tags` in the extended attributes.
## Usage
Usage: `tagctl [OPTIONS] [FILES]...`
| Option | Description |
| ----------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-t, --tag <tag>` | Tag to add/remove to selected files. `%p` uses the parent directory name, `%y` uses the modified year, `%m` uses modified month, `%d` uses modified day, and `%w` uses modified weekday |
| `-d, --delimiter <delimiter>` | Separator for multiple tags (default: `,`) |
| `-i, --input` | Accepts input from stdin |
| `-x, --xattr` | Adds/removes tags via xattr under `user.xdg.tags` |
| `-r, --remove` | Removes tag instead of adding |
| `-R, --remove_all` | Removes all tags |
| `-v, --verbose` | Increases verbosity of output |
| `-g, --generate_autocomplete <generate_autocomplete>` | The shell to generate auto-completion for (`bash`, `elvish`, `fish`, `zsh`)                                                                                                               |
## Examples
**Add tag `example` to current directory using file names:**
```shell
tagctl -t example "$(ls)"
ls | tagctl --input --tag example
```
**Remove tag `example` from current directory using file names:**
```shell
tagctl -r --tag=example "$(ls)"
ls | tagctl --remove -it example
```
**Add tag `example` to current directory using extended attributes:**
```shell
tagctl -xt example "$(ls)"
ls | tagctl --xattr --input --tag example
```
**Remove tag `example` from current directory using extended attributes:**
```shell
tagctl -xr --tag=example "$(ls)"
ls | tagctl --xattr --remove -it example
```
**Add tag `example` to two sets of inputs using file names:**
```shell
find /home/user/Documents | tagctl -it "example" "$(ls)"
```

View file

@ -1,23 +0,0 @@
---
obj: application
repo: https://github.com/sectore/timr-tui
rev: 2025-01-31
---
# timr-tui
TUI to organize your time: Pomodoro, Countdown, Timer.
## CLI
Usage: `timr-tui [OPTIONS]`
| Option | Description |
| -------- | ----------------------------------------------------------------------------------------------- |
| `-c`     | Countdown time to start from. Formats: 'ss', 'mm:ss', or 'hh:mm:ss'                              |
| `-w`     | Work time to count down from. Formats: 'ss', 'mm:ss', or 'hh:mm:ss'                              |
| `-p`     | Pause time to count down from. Formats: 'ss', 'mm:ss', or 'hh:mm:ss'                             |
| `-d`     | Show deciseconds                                                                                  |
| `-m`     | Mode to start with. [possible values: countdown, timer, pomodoro]                                 |
| `-s`     | Style to display time with. [possible values: full, light, medium, dark, thick, cross, braille]   |
| `--menu` | Open the menu                                                                                      |
| `-r`     | Reset stored values to default values.                                                            |
| `-n`     | Toggle desktop notifications on or off. Experimental. [possible values: on, off]                  |
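A sketch based on the table above (the short flag spellings are assumptions taken from the option column):
```shell
# start in pomodoro mode with 25 minutes of work, 5 minutes of pause, showing deciseconds
timr-tui -m pomodoro -w 25:00 -p 05:00 -d
```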

View file

@ -1,54 +0,0 @@
---
obj: application
repo: https://codeberg.org/chrysn/unionfarm
rev: 2025-01-30
---
# unionfarm
This is a small utility for managing symlink farms. It takes a "farm" directory and any number of "data" directories, and creates (or updates) the union (or overlay) of the data directories in the farm directory by placing symlinks to data directories.
It is similar to
- union mounts (overlay/overlayfs) -- but works without system privileges; it is not live, but it can, in turn, err out on duplicate files rather than picking the highest-ranking one
Usage: `unionfarm <FARM> [DATA]...`
## Example
```
$ tree my-photos
my-photos
├── 2018/
│ └── Rome/
│ └── ...
└── 2019/
└── Helsinki/
└── DSCN2305.jpg
```
Assume you have a collection of photos as above, and want to see them overlaid with a friend's photos:
```
$ tree ~friend/photos
/home/friend/photos
├── 2018/
│ └── Amsterdam/
│ └── ...
└── 2019/
└── Helsinki/
└── DSC_0815.jpg
```
With unionfarm, you can create a shared view on them:
```
$ unionfarm all-photos my-photos ~friend/photos
$ tree all-photos
all-photos
├── 2018/
│ ├── Amsterdam -> /home/friend/photos/2018/Amsterdam/
│ └── Rome -> ../../my-photos/2018/Rome/
└── 2019/
└── Helsinki/
├── DSC_0815.jpg -> /home/friend/photos/2019/Helsinki/DSC_0815.jpg
└── DSCN2305.jpg -> ../../../my-photos/2019/Helsinki/DSCN2305.jpg
```

View file

@ -1,22 +0,0 @@
---
obj: application
repo: https://github.com/ahamlinman/xt
rev: 2025-01-30
---
# xt
xt is a cross-format translator for JSON, MessagePack, TOML, and YAML.
## Usage
Usage: `xt [-f format] [-t format] [file ...]`
| Option | Description |
|---|---|
| `-f format` | Skip detection and convert every input from the given format |
| `-t format` | Convert to the given format (default: `json`) |
## Formats
- `json`, `j`
- `msgpack`, `m`
- `toml`, `t`
- `yaml`, `y`
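For example (a minimal sketch; the file names are hypothetical):
```shell
# convert a TOML file to YAML
xt -t yaml Cargo.toml

# auto-detect the input format from stdin and emit JSON (the default)
cat data.msgpack | xt
```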

View file

@ -1,66 +0,0 @@
---
obj: application
repo: https://github.com/neuromeow/licensit
rev: 2025-01-31
---
# licensit
`licensit` is a command-line tool to create LICENSE files.
### Supported licenses
- GNU Affero General Public License v3.0 (AGPL-3.0)
- Apache License 2.0 (Apache-2.0)
- BSD 2-Clause “Simplified” License (BSD-2-Clause)
- BSD 3-Clause “New” or “Revised” License (BSD-3-Clause)
- Boost Software License 1.0 (BSL-1.0)
- Creative Commons Zero v1.0 Universal (CC0-1.0)
- Eclipse Public License 2.0 (EPL-2.0)
- GNU General Public License v2.0 (GPL-2.0)
- GNU General Public License v3.0 (GPL-3.0)
- GNU Lesser General Public License v2.1 (LGPL-2.1)
- MIT License (MIT)
- Mozilla Public License 2.0 (MPL-2.0)
- The Unlicense (Unlicense)
## Usage
`licensit` simplifies the process of creating and managing license files for your projects.
### Listing Available Licenses
```
licensit list
```
Shows all supported licenses.
### Showing License Content
To view the content of a specific license with the author and year filled in:
```
licensit show [LICENSE] [--user USER] [--year YEAR]
```
- `[LICENSE]`: The ID of the license you want to display (for example, `mit`, `apache-2.0`)
- `--user [USER]`: Specifies the license holder's name. If not provided, `licensit` will use the following sources in order to determine the user name:
- `LICENSE_AUTHOR` environment variable
- `user.name` entry in the `$HOME/.gitconfig` file
- Username associated with the current effective user ID
- `--year [YEAR]`: Sets the year during which the license is effective. Defaults to the current year if not specified
To display just the template of a license (without any specific user or year information):
```
licensit show [LICENSE] --template
```
- `[LICENSE]`: The ID of the license whose template you want to display (for example, `mit`, `apache-2.0`)
- `--template`: Displays the license template with placeholders for the user and year. This option cannot be used with `--user` or `--year`
### Adding a License to Your Project
To add a license file to your current directory:
```
licensit add [LICENSE] [--user USER] [--year YEAR]
```
Creates a `LICENSE` file in the current directory with the specified details.
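For example (a sketch; the name and year are placeholders):
```shell
# add an MIT license for a specific author and year
licensit add mit --user "Jane Doe" --year 2025
```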

View file

@ -1,127 +0,0 @@
---
obj: application
repo: https://github.com/fornwall/rust-script
website: https://rust-script.org
---
# RustScript
With rust-script, Rust files and expressions can be executed just like a shell or Python script. Features include:
- Caching compiled artifacts for speed.
- Reading Cargo manifests embedded in Rust scripts.
- Supporting executable Rust scripts via Unix shebangs and Windows file associations.
- Using expressions as stream filters (i.e. for use in command pipelines).
- Running unit tests and benchmarks from scripts.
## Scripts
The primary use for rust-script is for running Rust source files as scripts. For example:
```
$ echo 'println!("Hello, World!");' > hello.rs
$ rust-script hello.rs
Hello, World!
```
Under the hood, a Cargo project will be generated and built (with the Cargo output hidden unless compilation fails or the `-c/--cargo-output` option is used). The first invocation of the script will be slower as the script is compiled - subsequent invocations of unmodified scripts will be fast as the built executable is cached.
As seen from the above example, using a `fn main() {}` function is not required. If not present, the script file will be wrapped in a `fn main() { ... }` block.
rust-script will look for embedded dependency and manifest information in the script as shown by the below two equivalent `now.rs` variants:
```rust
#!/usr/bin/env rust-script
//! This is a regular crate doc comment, but it also contains a partial
//! Cargo manifest. Note the use of a *fenced* code block, and the
//! `cargo` "language".
//!
//! ```cargo
//! [dependencies]
//! time = "0.1.25"
//! ```
fn main() {
println!("{}", time::now().rfc822z());
}
```
```rust
// cargo-deps: time="0.1.25"
// You can also leave off the version number, in which case, it's assumed
// to be "*". Also, the `cargo-deps` comment *must* be a single-line
// comment, and it *must* be the first thing in the file, after the
// shebang.
// Multiple dependencies should be separated by commas:
// cargo-deps: time="0.1.25", libc="0.2.5"
fn main() {
println!("{}", time::now().rfc822z());
}
```
The output from running one of the above scripts may look something like:
```
$ rust-script now
Wed, 28 Oct 2020 00:38:45 +0100
```
## Useful command-line arguments:
- `--bench`: Compile and run benchmarks. Requires a nightly toolchain.
- `--debug`: Build a debug executable, not an optimised one.
- `--force`: Force the script to be rebuilt. Useful if you want to force a recompile with a different toolchain.
- `--package`: Generate the Cargo package and print the path to it - but don't compile or run it. Effectively “unpacks” the script into a Cargo package.
- `--test`: Compile and run tests.
- `--wrapper`: Add a wrapper around the executable. Can be used to run debugging with e.g. `rust-script --debug --wrapper rust-lldb my-script.rs` or benchmarking with `rust-script --wrapper "hyperfine --runs 100" my-script.rs`
## Executable Scripts
On Unix systems, you can use `#!/usr/bin/env rust-script` as a shebang line in a Rust script. This will allow you to execute script files (which don't need to have the `.rs` file extension) directly.
If you are using Windows, you can associate the `.ers` extension (executable Rust - a renamed `.rs` file) with rust-script. This allows you to execute Rust scripts simply by naming them like any other executable or script.
This can be done using the `rust-script --install-file-association` command. Uninstall the file association with `rust-script --uninstall-file-association`.
If you want to make a script usable across platforms, use both a shebang line and give the file a `.ers` file extension.
## Expressions
Using the `-e/--expr` option a Rust expression can be evaluated directly, with dependencies (if any) added using `-d/--dep`:
```
$ rust-script -e '1+2'
3
$ rust-script --dep time --expr "time::OffsetDateTime::now_utc().format(time::Format::Rfc3339).to_string()"
"2020-10-28T11:42:10+00:00"
$ # Use a specific version of the time crate (instead of default latest):
$ rust-script --dep time=0.1.38 -e "time::now().rfc822z().to_string()"
"2020-10-28T11:42:10+00:00"
```
The code given is embedded into a block expression, evaluated, and printed out using the Debug formatter (i.e. `{:?}`).
## Filters
You can use rust-script to write a quick filter, by specifying a closure to be called for each line read from stdin, like so:
```
$ cat now.ers | rust-script --loop \
"let mut n=0; move |l| {n+=1; println!(\"{:>6}: {}\",n,l.trim_end())}"
1: // cargo-deps: time="0.1.25"
3: fn main() {
4: println!("{}", time::now().rfc822z());
5: }
```
You can achieve a similar effect to the above by using the `--count` flag, which causes the line number to be passed as a second argument to your closure:
```
$ cat now.ers | rust-script --count --loop \
"|l,n| println!(\"{:>6}: {}\", n, l.trim_end())"
1: // cargo-deps: time="0.1.25"
2: fn main() {
3: println!("{}", time::now().rfc822z());
4: }
```
## Environment Variables
The following environment variables are provided to scripts by rust-script:
- `$RUST_SCRIPT_BASE_PATH`: the base path used by rust-script to resolve relative dependency paths. Note that this is not necessarily the same as either the working directory, or the directory in which the script is being compiled.
- `$RUST_SCRIPT_PKG_NAME`: the generated package name of the script.
- `$RUST_SCRIPT_SAFE_NAME`: the file name of the script (sans file extension) being run. For scripts, this is derived from the script's filename. May also be `expr` or `loop` for those invocations.
- `$RUST_SCRIPT_PATH`: absolute path to the script being run, assuming one exists. Set to the empty string for expressions.
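As a quick check, one of these variables can be printed from an expression (a sketch; the value is shown with the Debug formatter, as described above):
```shell
rust-script -e 'std::env::var("RUST_SCRIPT_PKG_NAME").unwrap()'
```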

View file

@ -1,7 +1,5 @@
---
obj: application
repo: https://git.launchpad.net/ufw/
arch-wiki: https://wiki.archlinux.org/title/Uncomplicated_Firewall
---
# ufw
@ -19,134 +17,19 @@ The next line is only needed _once_ the first time you install the package:
ufw enable
```
**See status:**
See status:
```shell
ufw status
```
**Enable/Disable:**
Enable/Disable
```shell
ufw enable
ufw disable
```
**Allow/Deny:**
Allow/Deny ports
```shell
ufw allow <app|port>
ufw deny <app|port>
ufw allow from <CIDR>
ufw deny from <CIDR>
```
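For example (the port and address range are illustrative):
```shell
ufw allow 22/tcp
ufw deny from 203.0.113.0/24
```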
## Forward policy
Users needing to run a VPN such as OpenVPN or WireGuard can adjust the `DEFAULT_FORWARD_POLICY` variable in `/etc/default/ufw` from a value of `DROP` to `ACCEPT` to forward all packets regardless of the settings of the user interface. To forward for a specific interface like `wg0`, users can add the following lines to the filter block:
```sh
# /etc/ufw/before.rules
-A ufw-before-forward -i wg0 -j ACCEPT
-A ufw-before-forward -o wg0 -j ACCEPT
```
You may also need to uncomment
```sh
# /etc/ufw/sysctl.conf
net/ipv4/ip_forward=1
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1
```
## Adding other applications
The PKG comes with some defaults based on the default ports of many common daemons and programs. Inspect the options by looking in the `/etc/ufw/applications.d` directory or by listing them in the program itself:
```sh
ufw app list
```
If users are running any of the applications on a non-standard port, it is recommended to simply create `/etc/ufw/applications.d/custom` containing the needed data, using the defaults as a guide.
> **Warning**: If users modify any of the PKG provided rule sets, these will be overwritten the first time the ufw package is updated. This is why custom app definitions need to reside in a non-PKG file as recommended above!
Example: Deluge with custom TCP ports ranging from 20202 to 20205:
```ini
[Deluge-my]
title=Deluge
description=Deluge BitTorrent client
ports=20202:20205/tcp
```
Should you need to define both TCP and UDP ports for the same application, simply separate them with a pipe as shown; this app opens TCP ports 10000-10002 and UDP port 10003:
```ini
ports=10000:10002/tcp|10003/udp
```
One can also use a comma to define ports if a range is not desired. This example opens TCP ports 10000-10002 (inclusive) and UDP ports 10003 and 10009:
```ini
ports=10000:10002/tcp|10003,10009/udp
```
## Deleting applications
Drawing on the Deluge/Deluge-my example above, the following will remove the standard Deluge rules and replace them with the Deluge-my rules from the above example:
```sh
ufw delete allow Deluge
ufw allow Deluge-my
```
## Blacklisting IP addresses
It might be desirable to add IP addresses to a blacklist, which is easily achieved by editing `/etc/ufw/before.rules` and inserting an iptables `DROP` line at the bottom of the file, right above the "COMMIT" word.
```sh
# /etc/ufw/before.rules
...
## blacklist section
# block just 199.115.117.99
-A ufw-before-input -s 199.115.117.99 -j DROP
# block 184.105.*.*
-A ufw-before-input -s 184.105.0.0/16 -j DROP
# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT
```
## Rate limiting with ufw
ufw has the ability to deny connections from an IP address that has attempted to initiate 6 or more connections in the last 30 seconds. Users should consider using this option for services such as SSH.
Using the above basic configuration, to enable rate limiting we would simply replace the allow parameter with the limit parameter. The new rule will then replace the previous one.
```sh
ufw limit SSH
```
## Disable remote ping
Change `ACCEPT` to `DROP` in the following lines:
```sh
# /etc/ufw/before.rules
# ok icmp codes
...
-A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT
```
If you use IPv6, related rules are in `/etc/ufw/before6.rules`.
## Disable UFW logging
Disabling logging may be useful to stop UFW filling up the kernel (dmesg) and message logs:
```sh
ufw logging off
```
## UFW and Docker
Docker in standard mode writes its own iptables rules and ignores ufw ones, which could lead to security issues. A solution can be found at https://github.com/chaifeng/ufw-docker.
## GUI frontends
If you are using KDE Plasma, you can just go to `Wi-Fi & Networking > Firewall` to access and adjust firewall configurations given `plasma-firewall` is installed.

View file

@ -1,7 +1,7 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Pacman
rev: 2025-01-08
rev: 2024-12-19
---
# Pacman
@ -48,11 +48,6 @@ List explicitly installed packages:
pacman -Qe
```
List the package owning a file/dir:
```shell
pacman -Qo /path/to/file
```
List orphan packages (installed as dependencies and not required anymore):
```shell
pacman -Qdt

View file

@ -1,103 +0,0 @@
---
obj: application
repo: https://github.com/containers/bubblewrap
arch-wiki: https://wiki.archlinux.org/title/Bubblewrap
rev: 2025-01-09
---
# Bubblewrap
Bubblewrap is a lightweight sandbox application used by Flatpak and other container tools. It has a small installation footprint and minimal resource requirements. Notable features include support for cgroup/IPC/mount/network/PID/user/UTS namespaces and seccomp filtering. Note that bubblewrap drops all capabilities within a sandbox and that child tasks cannot gain greater privileges than their parent.
## Configuration
Bubblewrap can be called directly from the command-line and/or within shell scripts as part of a complex wrapper.
A no-op bubblewrap invocation is as follows:
```sh
bwrap --dev-bind / / bash
```
This will spawn a Bash process which should behave exactly as outside a sandbox in most cases. If a sandboxed program misbehaves, you may want to start from the above no-op invocation, and work your way towards a more secure configuration step-by-step.
### Desktop entries
Leverage Bubblewrap within desktop entries:
- Bind as read-write the entire host `/` directory to `/` in the sandbox
- Re-bind as read-only the `/var` and `/etc` directories in the sandbox
- Mount a new devtmpfs filesystem to `/dev` in the sandbox
- Create a tmpfs filesystem over the sandboxed `/run` directory
- Disable network access by creating a new network namespace
```ini
[Desktop Entry]
Name=nano Editor
Exec=bwrap --bind / / --dev /dev --tmpfs /run --unshare-net st -e nano -o . %f
Type=Application
MimeType=text/plain;
```
> **Note**: `--dev /dev` is required to write to `/dev/pty`
## Options
Usage: `bwrap [options] [command]`
| Option | Description |
| ------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--args FD` | Parse nul-separated arguments from the given file descriptor. This option can be used multiple times to parse options from multiple sources. |
| `--argv0 VALUE` | Set `argv[0]` to the value VALUE before running the program |
| `--unshare-user` | Create a new user namespace |
| `--unshare-user-try` | Create a new user namespace if possible else skip it |
| `--unshare-ipc` | Create a new ipc namespace |
| `--unshare-pid` | Create a new pid namespace |
| `--unshare-net` | Create a new network namespace |
| `--unshare-uts` | Create a new uts namespace |
| `--unshare-cgroup` | Create a new cgroup namespace |
| `--unshare-cgroup-try` | Create a new cgroup namespace if possible else skip it |
| `--unshare-all` | Unshare all possible namespaces. Currently equivalent with: `--unshare-user-try --unshare-ipc --unshare-pid --unshare-net --unshare-uts --unshare-cgroup-try` |
| `--share-net` | Retain the network namespace, overriding an earlier `--unshare-all` or `--unshare-net` |
| `--userns FD` | Use an existing user namespace instead of creating a new one. The namespace must fulfil the permission requirements for `setns()`, which generally means that it must be a descendant of the currently active user namespace, owned by the same user. |
| `--disable-userns` | Prevent the process in the sandbox from creating further user namespaces, so that it cannot rearrange the filesystem namespace or do other more complex namespace modification. |
| `--assert-userns-disabled` | Confirm that the process in the sandbox has been prevented from creating further user namespaces, but without taking any particular action to prevent that. For example, this can be combined with --userns to check that the given user namespace has already been set up to prevent the creation of further user namespaces. |
| `--pidns FD` | Use an existing pid namespace instead of creating one. This is often used with `--userns`, because the pid namespace must be owned by the same user namespace that bwrap uses. |
| `--uid UID` | Use a custom user id in the sandbox (requires `--unshare-user`) |
| `--gid GID` | Use a custom group id in the sandbox (requires `--unshare-user`) |
| `--hostname HOSTNAME` | Use a custom hostname in the sandbox (requires `--unshare-uts`) |
| `--chdir DIR` | Change directory to DIR |
| `--setenv VAR VALUE` | Set an environment variable |
| `--unsetenv VAR` | Unset an environment variable |
| `--clearenv` | Unset all environment variables, except for PWD and any that are subsequently set by `--setenv` |
| `--lock-file DEST` | Take a lock on DEST while the sandbox is running. This option can be used multiple times to take locks on multiple files. |
| `--sync-fd FD` | Keep this file descriptor open while the sandbox is running |
| `--perms OCTAL` | This option does nothing on its own, and must be followed by one of the options that it affects. It sets the permissions for the next operation to OCTAL. Subsequent operations are not affected: for example, `--perms 0700 --tmpfs /a --tmpfs /b` will mount `/a` with permissions `0700`, then return to the default permissions for `/b`. Note that `--perms` and `--size` can be combined: `--perms 0700 --size 10485760 --tmpfs /s` will apply permissions as well as a maximum size to the created tmpfs. |
| `--size BYTES` | This option does nothing on its own, and must be followed by `--tmpfs`. It sets the size in bytes for the next tmpfs. For example, `--size 10485760 --tmpfs /tmp` will create a tmpfs at `/tmp` of size 10MiB. Subsequent operations are not affected. |
| `--bind SRC DEST` | Bind mount the host path SRC on DEST |
| `--bind-try SRC DEST` | Equal to `--bind` but ignores non-existent SRC |
| `--dev-bind SRC DEST` | Bind mount the host path SRC on DEST, allowing device access |
| `--dev-bind-try SRC DEST` | Equal to `--dev-bind` but ignores non-existent SRC |
| `--ro-bind SRC DEST` | Bind mount the host path SRC readonly on DEST |
| `--ro-bind-try SRC DEST` | Equal to `--ro-bind` but ignores non-existent SRC |
| `--remount-ro DEST` | Remount the path DEST as readonly. It works only on the specified mount point, without changing any other mount point under the specified path |
| `--overlay-src SRC` | This option does nothing on its own, and must be followed by one of the other overlay options. It specifies a host path from which files should be read if they aren't present in a higher layer. |
| `--overlay RWSRC WORKDIR DEST`, `--tmp-overlay DEST`, `--ro-overlay DEST` | Use overlayfs to mount the host paths specified by `RWSRC` and all immediately preceding `--overlay-src` on `DEST`. `DEST` will contain the union of all the files in all the layers. With `--overlay` all writes will go to `RWSRC`. Reads will come preferentially from `RWSRC`, and then from any `--overlay-src` paths. `WORKDIR` must be an empty directory on the same filesystem as `RWSRC`, and is used internally by the kernel. With `--tmp-overlay` all writes will go to the tmpfs that hosts the sandbox root, in a location not accessible from either the host or the child process. Writes will therefore not be persisted across multiple runs. With `--ro-overlay` the filesystem will be mounted read-only. This option requires at least two `--overlay-src` to precede it. |
| `--proc DEST` | Mount procfs on DEST |
| `--dev DEST` | Mount new devtmpfs on DEST |
| `--tmpfs DEST` | Mount new tmpfs on DEST. If the previous option was `--perms`, it sets the mode of the tmpfs. Otherwise, the tmpfs has mode `0755`. If the previous option was `--size`, it sets the size in bytes of the tmpfs. Otherwise, the tmpfs has the default size. |
| `--mqueue DEST` | Mount new mqueue on DEST |
| `--dir DEST` | Create a directory at DEST. If the directory already exists, its permissions are unmodified, ignoring `--perms` (use `--chmod` if the permissions of an existing directory need to be changed). If the directory is newly created and the previous option was `--perms`, it sets the mode of the directory. Otherwise, newly-created directories have mode `0755`. |
| `--file FD DEST` | Copy from the file descriptor FD to DEST. If the previous option was `--perms`, it sets the mode of the new file. Otherwise, the file has mode `0666` (note that this is not the same as `--bind-data`). |
| `--bind-data FD DEST` | Copy from the file descriptor FD to a file which is bind-mounted on DEST. If the previous option was `--perms`, it sets the mode of the new file. Otherwise, the file has mode `0600` (note that this is not the same as `--file`). |
| `--ro-bind-data FD DEST` | Copy from the file descriptor FD to a file which is bind-mounted read-only on DEST. If the previous option was `--perms`, it sets the mode of the new file. Otherwise, the file has mode `0600` (note that this is not the same as `--file`). |
| `--symlink SRC DEST` | Create a symlink at DEST with target SRC. |
| `--chmod OCTAL PATH` | Set the permissions of PATH, which must already exist, to OCTAL. |
| `--seccomp FD` | Load and use seccomp rules from FD. The rules need to be in the form of a compiled cBPF program, as generated by seccomp_export_bpf. If this option is given more than once, only the last one is used. Use `--add-seccomp-fd` if multiple seccomp programs are needed. |
| `--add-seccomp-fd FD` | Load and use seccomp rules from FD. The rules need to be in the form of a compiled cBPF program, as generated by seccomp_export_bpf. This option can be repeated, in which case all the seccomp programs will be loaded in the order given (note that the kernel will evaluate them in reverse order, so the last program on the bwrap command-line is evaluated first). All of them, except possibly the last, must allow use of the PR_SET_SECCOMP prctl. This option cannot be combined with `--seccomp`. |
| `--exec-label LABEL` | Exec Label from the sandbox. On an SELinux system you can specify the SELinux context for the sandbox process(s). |
| `--file-label LABEL` | File label for temporary sandbox content. On an SELinux system you can specify the SELinux context for the sandbox content. |
| `--block-fd FD` | Block the sandbox on reading from FD until some data is available. |
| `--userns-block-fd FD` | Do not initialize the user namespace but wait on FD until it is ready. This allow external processes (like newuidmap/newgidmap) to setup the user namespace before it is used by the sandbox process. |
| `--info-fd FD` | Write information in JSON format about the sandbox to FD. |
| `--json-status-fd FD` | Multiple JSON documents are written to FD, one per line. |
| `--new-session` | Create a new terminal session for the sandbox (calls `setsid()`). This disconnects the sandbox from the controlling terminal which means the sandbox can't for instance inject input into the terminal. Note: In a general sandbox, if you don't use `--new-session`, it is recommended to use seccomp to disallow the `TIOCSTI` ioctl, otherwise the application can feed keyboard input to the terminal which can e.g. lead to out-of-sandbox command execution. |
| `--die-with-parent` | Ensures child process (COMMAND) dies when bwrap's parent dies. Kills (SIGKILL) all bwrap sandbox processes in sequence from parent to child including COMMAND process when bwrap or bwrap's parent dies. |
| `--as-pid-1` | Do not create a process with PID=1 in the sandbox to reap child processes. |
| `--cap-add CAP` | Add the specified capability CAP, e.g. `CAP_DAC_READ_SEARCH`, when running as privileged user. It accepts the special value `ALL` to add all the permitted caps. |
| `--cap-drop CAP` | Drop the specified capability when running as privileged user. It accepts the special value `ALL` to drop all the caps. By default no caps are left in the sandboxed process. The `--cap-add` and `--cap-drop` options are processed in the order they are specified on the command line, so pay attention to the order in which they are given. |
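Putting a few of these options together, a more restrictive sandbox might look like this sketch (illustrative only, not a hardened profile):
```shell
# read-only root, fresh /dev, /proc and /tmp, all namespaces unshared,
# and the sandboxed shell dies together with bwrap
bwrap --ro-bind / / \
      --dev /dev \
      --proc /proc \
      --tmpfs /tmp \
      --unshare-all \
      --die-with-parent \
      bash
```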

View file

@ -1,19 +0,0 @@
---
obj: application
repo: https://github.com/demoray/retry-cli
rev: 2025-01-28
---
# retry-cli
retry is a command-line tool written in Rust intended to automatically re-run failed commands with a user-configurable delay between tries.
## Usage
Usage: `retry [OPTIONS] <COMMAND>...`
| Option | Description |
| ------------------------------- | -------------------------------------------------------------- |
| `--attempts <ATTEMPTS>` | Amount of retries (default: `3`) |
| `--min-duration <MIN_DURATION>` | minimum duration (default: `10ms`) |
| `--max-duration <MAX_DURATION>` | maximum duration |
| `--jitter <JITTER>` | amount of randomization to add to the backoff (default: `0.3`) |
| `--factor <FACTOR>` | backoff factor (default: `2`) |
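For example (a sketch; the wrapped command and the `1s` duration format are assumptions):
```shell
# retry a flaky command up to 5 times, starting with a 1 second delay
retry --attempts 5 --min-duration 1s -- curl -fsSL https://example.com/health
```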

View file

@ -0,0 +1,8 @@
---
obj: application
repo: https://github.com/virt-manager/virt-manager
website: https://virt-manager.org
---
# Virt Manager
#wip

View file

@ -1,457 +0,0 @@
---
obj: application
website: https://bitmagnet.io
---
# bitmagnet
A self-hosted BitTorrent indexer, DHT crawler, content classifier and torrent search engine with web UI, GraphQL API and Servarr stack integration.
## Docker Compose
```yml
services:
  bitmagnet:
    image: ghcr.io/bitmagnet-io/bitmagnet:latest
    container_name: bitmagnet
    ports:
      # API and WebUI port:
      - "3333:3333"
      # BitTorrent ports:
      - "3334:3334/tcp"
      - "3334:3334/udp"
    restart: unless-stopped
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_PASSWORD=postgres
      # - TMDB_API_KEY=your_api_key
    command:
      - worker
      - run
      - --keys=http_server
      - --keys=queue_server
      # disable the next line to run without DHT crawler
      - --keys=dht_crawler
    depends_on:
      postgres:
        condition: service_healthy
  postgres:
    image: postgres:16-alpine
    container_name: bitmagnet-postgres
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
    # ports:
    #   - "5432:5432" # Expose this port if you'd like to dig around in the database
    restart: unless-stopped
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=bitmagnet
      - PGUSER=postgres
    shm_size: 1g
    healthcheck:
      test:
        - CMD-SHELL
        - pg_isready
      start_period: 20s
      interval: 10s
```
After running `docker compose up -d` you should be able to access the web interface at http://localhost:3333. The DHT crawler should have started and you should see items appear in the web UI within around a minute.
To run the bitmagnet CLI, use `docker compose run bitmagnet bitmagnet command...`
## Configuration
- `postgres.host`, `postgres.name`, `postgres.user`, `postgres.password` (default: `localhost`, `bitmagnet`, `postgres`, `empty`): Set these values to configure connection to your Postgres database.
- `tmdb.api_key`: TMDB API Key.
- `tmdb.enabled` (default: `true`): Specify false to disable the TMDB API integration.
- `dht_crawler.save_files_threshold` (default: `100`): Some torrents contain many thousands of files, which impacts performance and uses a lot of database disk space. This parameter sets a maximum limit for the number of files saved by the crawler with each torrent.
- `dht_crawler.save_pieces` (default: `false`): If true, the DHT crawler will save the pieces bytes from the torrent metadata. The pieces take up quite a lot of space, and aren't currently very useful, but they may be used by future features.
- `log.level` (default: `info`): Logging
- `log.json` (default: `false`): By default logs are output in a pretty format with colors; enable this flag if you'd prefer plain JSON.
To see a full list of available configuration options using the CLI, run:
```sh
bitmagnet config show
```
### Specifying configuration values
Configuration paths are delimited by dots. If you're specifying configuration in a YAML file then each dot represents a nesting level, for example to configure `log.json`, `tmdb.api_key` and `http_server.cors.allowed_origins`:
```yml
log:
json: true
tmdb:
api_key: my-api-key
http_server:
cors:
allowed_origins:
- https://example1.com
- https://example2.com
```
This is not a suggested configuration file, it's just an example of how to specify configuration values.
To configure these same values with environment variables, upper-case the path and replace all dots with underscores, for example:
```sh
LOG_JSON=true \
TMDB_API_KEY=my-api-key \
HTTP_SERVER_CORS_ALLOWED_ORIGINS=https://example1.com,https://example2.com \
bitmagnet config show
```
### Configuration precedence
In order of precedence, configuration values will be read from:
- Environment variables
- `config.yml` in the current working directory
- `config.yml` in the XDG-compliant config location for the current user (for example on macOS this is `~/Library/Application Support/bitmagnet/config.yml`)
- Default values
Environment variables can be used to configure simple scalar types (strings, numbers, booleans) and slice types (arrays). For more complex configuration types such as maps you'll have to use YAML configuration. bitmagnet will exit with an error if it's unable to parse a provided configuration value.
### VPN configuration
It's recommended that you run bitmagnet behind a VPN. If you're using Docker then `gluetun` is a good solution for this, although the networking settings can be tricky.
### Classifier
The classifier can be configured and customized to do things like:
- automatically delete torrents you don't want in your index
- add custom tags to torrents you're interested in
- customize the keywords and file extensions used for determining a torrent's content type
- specify completely custom logic to classify and perform other actions on torrents
#### Background
After a torrent is crawled or imported, some further processing must be done to gather metadata, have a guess at the torrent's contents and finally index it in the database, allowing it to be searched and displayed in the UI/API.
bitmagnet's classifier is powered by a Domain Specific Language. The aim of this is to provide a high level of customisability, along with transparency into the classification process, which will hopefully aid collaboration on improvements to the core classifier logic.
The classifier is declared in YAML format. The application includes a core classifier that can be configured, extended or completely replaced with a custom classifier. This page documents the required format.
#### Source precedence
bitmagnet will attempt to load classifier source code from all the following locations. Any discovered classifier source will be merged with other sources in the following order of precedence:
- the core classifier
- `classifier.yml` in the XDG-compliant config location for the current user (for example on macOS this is `~/Library/Application Support/bitmagnet/classifier.yml`)
- `classifier.yml` in the current working directory
- Classifier configuration
Note that multiple sources will be merged, not replaced. For example, keywords added to the classifier configuration will be merged with the core keywords.
The merged classifier source can be viewed with the CLI command `bitmagnet classifier show`.
#### Schema
A JSON schema for the classifier is available; some editors and IDEs will be able to validate the structure of your classifier document by specifying the `$schema` attribute:
```yml
$schema: bitmagnet.io/schemas/classifier-0.1.json
```
The classifier schema can also be viewed by running the CLI command `bitmagnet classifier schema`.
The classifier declaration comprises the following components:
- **Workflows**
A workflow is a list of actions that will be executed on all torrents when they are classified. When no custom configuration is provided, the default workflow will be run. To use a different workflow instead, specify the `classifier.workflow` configuration option with the name of your custom workflow.
- **Actions**
An action is a piece of workflow to be executed. All actions either return an updated classification result or an error.
For example, the following action will set the content type of the current torrent to audiobook:
```yml
set_content_type: audiobook
```
The following action will return an unmatched error:
```yml
unmatched
```
And the following action will delete the current torrent being classified (returning a delete error):
```yml
delete
```
These actions aren't much use on their own - we'd want to check some conditions are satisfied before setting a content type or deleting a torrent, and for this we'd use the `if_else` action. For example, the following action will set the content type to audiobook if the torrent name contains audiobook-related keywords, and will otherwise return an unmatched error:
```yml
if_else:
condition: "torrent.baseName.matches(keywords.audiobook)"
if_action:
set_content_type: audiobook
else_action: unmatched
```
The following action will delete a torrent if its name matches the list of banned keywords:
```yml
if_else:
condition: "torrent.baseName.matches(keywords.banned)"
if_action: delete
```
Actions may return the following types of error:
- An unmatched error indicates that the current action did not match for the current torrent
- A delete error indicates that the torrent should be deleted
- An unhandled error may occur, for example if the TMDB API was unreachable
Whenever an error is returned, the current classification will be terminated.
Note that a workflow should never return an unmatched error. We expect to iterate through a series of checks corresponding to each content type. If the current torrent does not match the content type being checked, we'll proceed to the next check until we find a match; if no match can be found, the content type will be unknown. To facilitate this, we can use the `find_match` action.
The `find_match` action is a bit like a try/catch block in some programming languages; it will try to match a particular content type, and if an unmatched error is returned, it will catch the unmatched error and proceed to the next check. For example, the following action will attempt to classify a torrent as an audiobook, and then as an ebook. If both checks fail, the content type will be unknown:
```yml
find_match:
# match audiobooks:
- if_else:
condition: "torrent.baseName.matches(keywords.audiobook)"
if_action:
set_content_type: audiobook
else_action: unmatched
# match ebooks:
- if_else:
condition: "torrent.files.map(f, f.extension in extensions.ebook ? f.size : - f.size).sum() > 0"
if_action:
set_content_type: ebook
else_action: unmatched
```
For a full list of available actions, please refer to the JSON schema.
#### Conditions
Conditions are used in conjunction with the `if_else` action, in order to execute an action if a particular condition is satisfied.
The conditions in the examples above use CEL (Common Expression Language) expressions.
##### The CEL environment
CEL is already a well-documented language, so this page won't go into detail about the CEL syntax. In the context of the bitmagnet classifier, the CEL environment exposes a number of variables:
- `torrent`: The current torrent being classified (protobuf type: `bitmagnet.Torrent`)
- `result`: The current classification result (protobuf type: `bitmagnet.Classification`)
- `keywords`: A map of strings to regular expressions, representing named lists of keywords
- `extensions`: A map of strings to string lists, representing named lists of extensions
- `contentType`: A map of strings to enum values representing content types (e.g. `contentType.movie`, `contentType.music`)
- `fileType`: A map of strings to enum values representing file types (e.g. `fileType.video`, `fileType.audio`)
- `flags`: A map of strings to the configured values of flags
- `kb`, `mb`, `gb`: Variables defined for convenience, equal to the number of bytes in a kilobyte, megabyte and gigabyte respectively
For more details on the protocol buffer types, please refer to the protobuf schema.
##### Boolean logic (`or`, `and` & `not`)
In addition to CEL expressions, conditions may be declared using the boolean logic operators `or`, `and` and `not`. For example, the following condition evaluates to true if either the torrent consists mostly of file extensions very commonly used for music (e.g. `flac`), OR if the torrent both has a name that includes music-related keywords and consists mostly of audio files:
```yml
or:
  - "torrent.files.map(f, f.extension in extensions.music ? f.size : - f.size).sum() > 0"
  - and:
      - "torrent.baseName.matches(keywords.music)"
      - "torrent.files.map(f, f.fileType == fileType.audio ? f.size : - f.size).sum() > 0"
```
> Note that we could also have specified the above condition using just one CEL expression, but breaking up complex conditions like this is more readable.
#### Keywords
The classifier includes lists of keywords associated with different types of torrents. These aim to provide a simpler alternative to regular expressions, and the classifier will compile all keyword lists to regular expressions that can be used within CEL expressions. In order for a keyword to match, it must appear as an isolated token in the test string - that is, it must be either at the beginning or preceded by a non-word character, and either at the end or followed by a non-word character.
Reserved characters in the syntax are:
- parentheses `(` and `)` enclose a group
- `|` is an `OR` operator
- `*` is a wildcard operator
- `?` makes the previous character or group optional
- `+` specifies one or more of the previous character
- `#` specifies any number
- ` ` (a space) specifies any non-word or non-number character
For example, to define some music- and audiobook-related keywords:
```yml
keywords:
music: # define music-related keywords
- music # all letters are case-insensitive, and must be defined in lowercase unless escaped
- discography
- album
- \V.?\A # escaped letters are case-sensitive; matches "VA", "V.A" and "V.A.", but not "va"
- various artists # matches "various artists" and "Various.Artists"
audiobook: # define audiobook-related keywords
- (audio)?books?
- (un)?abridged
- narrated
- novels?
- (auto)?biograph(y|ies) # matches "biography", "autobiographies" etc.
```
If you'd rather use plain old regular expressions, the CEL syntax supports that too, for example `torrent.baseName.matches("^myregex$")`.
#### Extensions
The classifier includes lists of file extensions associated with different types of content. For example, to identify torrents of type comic by their file extensions, the extensions are first declared:
```yml
extensions:
comic:
- cb7
- cba
- cbr
- cbt
- cbz
```
The extensions can now be used as part of a condition within an `if_else` action:
```yml
if_else:
condition: "torrent.files.map(f, f.extension in extensions.comic ? f.size : - f.size).sum() > 0"
if_action:
set_content_type: comic
else_action: unmatched
```
#### Flags
Flags can be used to configure workflows. In order to use a flag in a workflow, it must first be defined. For example, the core classifier defines the following flags that are used in the default workflow:
```yml
flag_definitions:
tmdb_enabled: bool
delete_content_types: content_type_list
delete_xxx: bool
```
These flags can be referenced within CEL expressions, for example to delete adult content if the `delete_xxx` flag is set to true:
```yml
if_else:
condition: "flags.delete_xxx && result.contentType == contentType.xxx"
if_action: delete
```
#### Configuration
The classifier can be customized by providing a `classifier.yml` file in a supported location as described above. If you only want to make some minor modifications, it may be convenient to specify these using the main application configuration instead, by providing values in either `config.yml` or as environment variables. The application configuration exposes some but not all properties of the classifier.
For example, in your `config.yml` you could specify:
```yml
classifier:
# specify a custom workflow to be used:
workflow: custom
# add to the core list of music keywords:
keywords:
music:
- my-custom-music-keyword
# add a file extension to the list of audiobook-related extensions:
extensions:
audiobook:
- abc
# auto-delete all comics
flags:
delete_content_types:
- comics
```
Or as environment variables you could specify:
```shell
# Disable the TMDB API integration, specify a custom workflow to be used,
# and auto-delete all adult content:
TMDB_ENABLED=false \
CLASSIFIER_WORKFLOW=custom \
CLASSIFIER_DELETE_XXX=true \
bitmagnet worker run --all
```
#### Validation
The classifier source is compiled on initial load, and all structural and syntax errors should be caught at compile time. If there are errors in your classifier source, bitmagnet should exit with an error message indicating the location of the problem.
#### Testing on individual torrents
You can test the classifier on an individual torrent or torrents using the bitmagnet process CLI command:
```shell
bitmagnet process --infoHash=aaaaaaaaaaaaaaaaaaaa --infoHash=bbbbbbbbbbbbbbbbbbbb
```
#### Reclassify all torrents
The classifier is being updated regularly, and to reclassify already-crawled torrents you'll need to run the CLI and queue them for reprocessing.
For context: after torrents are crawled or imported, they won't show up in the UI straight away. They must first be “processed” by the job queue. This involves a few steps:
- The classifier attempts to classify the torrent (determine its content type, and match it to a known piece of content)
- The search index for the torrent is built
- The torrent content record is saved to the database
The reprocess command will re-queue torrents to allow the latest updates to be applied to their content records.
To reprocess all torrents in your index, simply run `bitmagnet reprocess`. If you've indexed a lot of torrents, this will take a while, so there are a few options available to control exactly what gets reprocessed:
- `apisDisabled`: Disable API calls during classification. This makes the classifier run a lot faster, but disables identification with external services such as TMDB (metadata already gathered from external APIs is not lost).
- `contentType`: Only reprocess torrents of a certain content type. For example, `bitmagnet reprocess --contentType movie` will only reprocess movies. Multiple content types can be comma separated, and `null` refers to torrents of unknown content type.
- `orphans`: Only reprocess torrents that have no content record.
- `classifyMode`: This controls how already matched torrents are handled.
- `default`: Only attempt to match previously unmatched torrents
- `rematch`: Ignore any pre-existing match and always classify from scratch (A torrent is “matched” if it's associated with a specific piece of content from one of the API integrations, currently only TMDB)
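For example, a run that reclassifies only unmatched movies and torrents of unknown content type, skipping external API calls, might be invoked like this (a sketch combining the options above):

```sh
bitmagnet reprocess --contentType movie,null --apisDisabled
```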
#### Practical use cases and examples
##### Auto-delete specific content types
The default workflow provides a flag that allows for automatically deleting specific content types. For example, to delete all comic, software and xxx torrents:
```yml
flags:
delete_content_types:
- comic
- software
- xxx
```
Auto-deleting adult content has been one of the most requested features. For convenience, this is exposed as the configuration option `classifier.delete_xxx`, and can be specified with the environment variable `CLASSIFIER_DELETE_XXX=true`.
##### Auto-delete torrents containing specific keywords
Any torrents whose names match keywords in the banned list will be automatically deleted. This is primarily used for deleting CSAM content, but the list can be extended to auto-delete torrents matching any other keywords:
```yml
keywords:
banned:
- my-hated-keyword
```
##### Disable the TMDB API integration
The `tmdb_enabled` flag can be used to disable the TMDB API integration:
```yml
flags:
tmdb_enabled: false
```
For convenience, this is also exposed as the configuration option `tmdb.enabled`, and can be specified with the environment variable `TMDB_ENABLED=false`.
The `apis_enabled` flag has the same effect, disabling TMDB and any future API integrations:
```yml
flags:
apis_enabled: false
```
API integrations can also be disabled for individual classifier runs, without disabling them globally, by passing the `--apisDisabled` flag to the reprocess command.
##### Extend the default workflow with custom logic
Custom workflows can be added in the workflows section of the classifier document. It is possible to extend the default workflow by using the `run_workflow` action within your custom workflow, for example:
```yml
workflows:
custom:
- <my custom action to be executed before the default workflow>
- run_workflow: default
- <my custom action to be executed after the default workflow>
```
A concrete example of this is adding tags to torrents based on custom criteria.
##### Use tags to create custom torrent categories
Is there a category of torrent you're interested in that isn't captured by one of the core content types? Torrent tags are intended to capture custom categories and content types.
Let's imagine you'd like to surface torrents containing interesting documents. The interesting documents have specific file extensions, and their filenames contain specific keywords. Let's create a custom action to tag torrents containing interesting documents:
```yml
# define file extensions for the documents we're interested in:
extensions:
interesting_documents:
- doc
- docx
- pdf
# define keywords that must be present in the filenames of the interesting documents:
keywords:
interesting_documents:
- interesting
- fascinating
# extend the default workflow with a custom workflow to tag torrents containing interesting documents:
workflows:
custom:
# first run the default workflow:
- run_workflow: default
# then add the tag to any torrents containing interesting documents:
- if_else:
condition: "torrent.files.filter(f, f.extension in extensions.interesting_documents && f.basePath.matches(keywords.interesting_documents)).size() > 0"
if_action:
add_tag: interesting-documents
```
To specify that the custom workflow should be used, remember to specify the `classifier.workflow` configuration option, e.g. `CLASSIFIER_WORKFLOW=custom bitmagnet worker run --all`.

View file

@ -1,123 +0,0 @@
---
obj: application
repo: https://github.com/FiloSottile/age
source: https://age-encryption.org/v1
rev: 2025-01-09
---
# age
age is a simple, modern and secure file encryption tool, format, and Go library.
It features small explicit keys, no config options, and UNIX-style composability.
```sh
$ age-keygen -o key.txt
Public key: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
$ PUBLIC_KEY=$(age-keygen -y key.txt)
$ tar cvz ~/data | age -r $PUBLIC_KEY > data.tar.gz.age
$ age --decrypt -i key.txt data.tar.gz.age > data.tar.gz
```
## Usage
For the full documentation, read [the age(1) man page](https://filippo.io/age/age.1).
```
Usage:
age [--encrypt] (-r RECIPIENT | -R PATH)... [--armor] [-o OUTPUT] [INPUT]
age [--encrypt] --passphrase [--armor] [-o OUTPUT] [INPUT]
age --decrypt [-i PATH]... [-o OUTPUT] [INPUT]
Options:
-e, --encrypt Encrypt the input to the output. Default if omitted.
-d, --decrypt Decrypt the input to the output.
-o, --output OUTPUT Write the result to the file at path OUTPUT.
-a, --armor Encrypt to a PEM encoded format.
-p, --passphrase Encrypt with a passphrase.
-r, --recipient RECIPIENT Encrypt to the specified RECIPIENT. Can be repeated.
-R, --recipients-file PATH Encrypt to recipients listed at PATH. Can be repeated.
-i, --identity PATH Use the identity file at PATH. Can be repeated.
INPUT defaults to standard input, and OUTPUT defaults to standard output.
If OUTPUT exists, it will be overwritten.
RECIPIENT can be an age public key generated by age-keygen ("age1...")
or an SSH public key ("ssh-ed25519 AAAA...", "ssh-rsa AAAA...").
Recipient files contain one or more recipients, one per line. Empty lines
and lines starting with "#" are ignored as comments. "-" may be used to
read recipients from standard input.
Identity files contain one or more secret keys ("AGE-SECRET-KEY-1..."),
one per line, or an SSH key. Empty lines and lines starting with "#" are
ignored as comments. Passphrase encrypted age files can be used as
identity files. Multiple key files can be provided, and any unused ones
will be ignored. "-" may be used to read identities from standard input.
When --encrypt is specified explicitly, -i can also be used to encrypt to an
identity file symmetrically, instead or in addition to normal recipients.
```
### Multiple recipients
Files can be encrypted to multiple recipients by repeating `-r/--recipient`. Every recipient will be able to decrypt the file.
```
$ age -o example.jpg.age -r age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p \
-r age1lggyhqrw2nlhcxprm67z43rta597azn8gknawjehu9d9dl0jq3yqqvfafg example.jpg
```
#### Recipient files
Multiple recipients can also be listed one per line in one or more files passed with the `-R/--recipients-file` flag.
```
$ cat recipients.txt
# Alice
age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
# Bob
age1lggyhqrw2nlhcxprm67z43rta597azn8gknawjehu9d9dl0jq3yqqvfafg
$ age -R recipients.txt example.jpg > example.jpg.age
```
If the argument to `-R` (or `-i`) is `-`, the file is read from standard input.
### Passphrases
Files can be encrypted with a passphrase by using `-p/--passphrase`. By default age will automatically generate a secure passphrase. Passphrase protected files are automatically detected at decrypt time.
```
$ age -p secrets.txt > secrets.txt.age
Enter passphrase (leave empty to autogenerate a secure one):
Using the autogenerated passphrase "release-response-step-brand-wrap-ankle-pair-unusual-sword-train".
$ age -d secrets.txt.age > secrets.txt
Enter passphrase:
```
### Passphrase-protected key files
If an identity file passed to `-i` is a passphrase encrypted age file, it will be automatically decrypted.
```
$ age-keygen | age -p > key.age
Public key: age1yhm4gctwfmrpz87tdslm550wrx6m79y9f2hdzt0lndjnehwj0ukqrjpyx5
Enter passphrase (leave empty to autogenerate a secure one):
Using the autogenerated passphrase "hip-roast-boring-snake-mention-east-wasp-honey-input-actress".
$ age -r age1yhm4gctwfmrpz87tdslm550wrx6m79y9f2hdzt0lndjnehwj0ukqrjpyx5 secrets.txt > secrets.txt.age
$ age -d -i key.age secrets.txt.age > secrets.txt
Enter passphrase for identity file "key.age":
```
Passphrase-protected identity files are not necessary for most use cases, where access to the encrypted identity file implies access to the whole system. However, they can be useful if the identity file is stored remotely.
### SSH keys
As a convenience feature, age also supports encrypting to `ssh-rsa` and `ssh-ed25519` SSH public keys, and decrypting with the respective private key file. (`ssh-agent` is not supported.)
```
$ age -R ~/.ssh/id_ed25519.pub example.jpg > example.jpg.age
$ age -d -i ~/.ssh/id_ed25519 example.jpg.age > example.jpg
```
Note that SSH key support employs more complex cryptography, and embeds a public key tag in the encrypted file, making it possible to track files that are encrypted to a specific public key.
#### Encrypting to a GitHub user
Combining SSH key support and `-R`, you can easily encrypt a file to the SSH keys listed on a GitHub profile.
```
$ curl https://github.com/benjojo.keys | age -R - example.jpg > example.jpg.age
```

View file

@ -1,126 +0,0 @@
---
obj: concept
repo: https://github.com/ulid/spec
aliases: ["Universally Unique Lexicographically Sortable Identifier"]
---
# ULID (Universally Unique Lexicographically Sortable Identifier)
UUID can be suboptimal for many use-cases because:
- It isn't the most character efficient way of encoding 128 bits of randomness
- UUID v1/v2 is impractical in many environments, as it requires access to a unique, stable MAC address
- UUID v3/v5 requires a unique seed and produces randomly distributed IDs, which can cause fragmentation in many data structures
- UUID v4 provides no other information than randomness which can cause fragmentation in many data structures
Instead, herein is proposed ULID:
```javascript
ulid() // 01ARZ3NDEKTSV4RRFFQ69G5FAV
```
- 128-bit compatibility with UUID
- 1.21e+24 unique ULIDs per millisecond
- Lexicographically sortable!
- Canonically encoded as a 26 character string, as opposed to the 36 character UUID
- Uses Crockford's base32 for better efficiency and readability (5 bits per character)
- Case insensitive
- No special characters (URL safe)
- Monotonic sort order (correctly detects and handles the same millisecond)
## Specification
Below is the current specification of ULID as implemented in [ulid/javascript](https://github.com/ulid/javascript).
*Note: the binary format has not been implemented in JavaScript as of yet.*
```
01AN4Z07BY 79KA1307SR9X4MV3
|----------| |----------------|
Timestamp Randomness
48bits 80bits
```
### Components
**Timestamp**
- 48 bit integer
- UNIX-time in milliseconds
- Won't run out of space 'til the year 10889 AD.
**Randomness**
- 80 bits
- Cryptographically secure source of randomness, if possible
### Sorting
The left-most character must be sorted first, and the right-most character sorted last (lexical order). The default ASCII character set must be used. Within the same millisecond, sort order is not guaranteed.
### Canonical String Representation
```
ttttttttttrrrrrrrrrrrrrrrr
where
t is Timestamp (10 characters)
r is Randomness (16 characters)
```
#### Encoding
Crockford's Base32 is used as shown. This alphabet excludes the letters I, L, O, and U to avoid confusion and abuse.
```
0123456789ABCDEFGHJKMNPQRSTVWXYZ
```
### Monotonicity
When generating a ULID within the same millisecond, we can provide some guarantees regarding sort order. Namely, if the same millisecond is detected, the `random` component is incremented by 1 bit in the least significant bit position (with carrying). For example:
```javascript
import { monotonicFactory } from 'ulid'
const ulid = monotonicFactory()
// Assume that these calls occur within the same millisecond
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVRZ
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVS0
```
In the extremely unlikely event that you manage to generate more than $2^{80}$ ULIDs within the same millisecond, or cause the random component to overflow with less, the generation will fail.
```javascript
import { monotonicFactory } from 'ulid'
const ulid = monotonicFactory()
// Assume that these calls occur within the same millisecond
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVRY
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVRZ
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVS0
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVS1
...
ulid() // 01BX5ZZKBKZZZZZZZZZZZZZZZX
ulid() // 01BX5ZZKBKZZZZZZZZZZZZZZZY
ulid() // 01BX5ZZKBKZZZZZZZZZZZZZZZZ
ulid() // throw new Error()!
```
#### Overflow Errors when Parsing Base32 Strings
Technically, a 26-character Base32 encoded string can contain 130 bits of information, whereas a ULID must only contain 128 bits. Therefore, the largest valid ULID encoded in Base32 is `7ZZZZZZZZZZZZZZZZZZZZZZZZZ`, which corresponds to an epoch time of `281474976710655` or $2^{48}-1$.
Any attempt to decode or encode a ULID larger than this should be rejected by all implementations, to prevent overflow bugs.
### Binary Layout and Byte Order
The components are encoded as 16 octets. Each component is encoded with the Most Significant Byte first (network byte order).
```
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 32_bit_uint_time_high |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 16_bit_uint_time_low | 16_bit_uint_random |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 32_bit_uint_random |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 32_bit_uint_random |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

View file

@ -401,11 +401,11 @@ You've seen most of the remaining operators in other examples:
### Comments
Dart supports single-line comments, multi-line comments, and documentation comments.
#### Single-line comments
####Single-line comments
A single-line comment begins with `//`. Everything between `//` and the end of line is ignored by the Dart compiler.
```dart
void main() {
// refactor into an AbstractLlamaGreetingFactory?
// TODO: refactor into an AbstractLlamaGreetingFactory?
print('Welcome to my Llama farm!');
}
```

View file

@ -941,147 +941,35 @@ The exact assembly code syntax is target-specific and opaque to the compiler exc
Currently, all supported targets follow the assembly code syntax used by LLVM's internal assembler which usually corresponds to that of the GNU assembler (GAS). On x86, the .intel_syntax noprefix mode of GAS is used by default. On ARM, the .syntax unified mode is used. These targets impose an additional restriction on the assembly code: any assembler state (e.g. the current section which can be changed with `.section`) must be restored to its original value at the end of the asm string. Assembly code that does not conform to the GAS syntax will result in assembler-specific behavior. Further constraints on the directives used by inline assembly are indicated by Directives Support.
## [Crates](https://lib.rs)
### Filesystem
- [itertools](https://lib.rs/crates/itertools): Extra iterator adaptors, iterator methods, free functions, and macros
- [num_enum](https://lib.rs/crates/num_enum): Procedural macros to make inter-operation between primitives and enums easier
- [tempfile](https://lib.rs/crates/tempfile): Temporary files and directories
- [temp-dir](https://lib.rs/crates/temp-dir): Simple temporary directory with cleanup
- [walkdir](https://crates.io/crates/walkdir): recursively scan directories
- [jwalk](https://lib.rs/crates/jwalk): Filesystem walk performed in parallel with streamed and sorted results
- [glob](https://lib.rs/crates/glob): Support for matching file paths against Unix shell style patterns
- [notify](https://lib.rs/crates/notify): filesystem watcher
- [camino](https://lib.rs/crates/camino): UTF-8 paths
- [sugar_path](https://lib.rs/crates/sugar_path): Sugar functions for manipulating paths
- [path-absolutize](https://lib.rs/crates/path-absolutize): A library for extending Path and PathBuf in order to get an absolute path and remove the containing dots
- [fs_extra](https://lib.rs/crates/fs_extra): Expanding std::fs and std::io. Recursively copy folders with information about process and much more.
- [vfs](https://lib.rs/crates/vfs): A virtual filesystem for Rust
- [fuser](https://lib.rs/crates/fuser): Filesystem in Userspace (FUSE) for Rust
- [directories](https://lib.rs/crates/directories): A tiny mid-level library that provides platform-specific standard locations of directories for config, cache and other data on Linux, Windows and macOS
- [xattr](https://lib.rs/crates/xattr): unix extended filesystem attributes
- [open](https://lib.rs/crates/open): Open a path or URL using the program configured on the system
- [infer](https://lib.rs/crates/infer): Small crate to infer file type based on magic number signatures
### Error Handling
- [anyhow](https://lib.rs/crates/anyhow): Flexible concrete Error type built on `std::error::Error`
- [color-eyre](https://lib.rs/crates/color-eyre): Styled error messages
- [thiserror](https://lib.rs/crates/thiserror): macros for creating error types
- [user-error](https://lib.rs/crates/user-error): Pretty printed errors for your CLI application.
- [eyre](https://lib.rs/crates/eyre): Flexible concrete Error Reporting type built on `std::error::Error` with customizable Reports
- [color-eyre](https://lib.rs/crates/color-eyre): An error report handler for panics and `eyre::Reports` for colorful, consistent, and well formatted error reports for all kinds of errors
### Data Structures
- [hashbrown](https://lib.rs/crates/hashbrown): A Rust port of Google's SwissTable hash map
- [bitvec](https://lib.rs/crates/bitvec): Addresses memory by bits, for packed collections and bitfields
- [bitflags](https://lib.rs/crates/bitflags): A macro to generate structures which behave like bitflags
- [smallvec](https://lib.rs/crates/smallvec): 'Small vector' optimization: store up to a small number of items on the stack
- [ndarray](https://lib.rs/crates/ndarray): An n-dimensional array for general elements and for numerics. Lightweight array views and slicing; views support chunking and splitting.
- [zerovec](https://lib.rs/crates/zerovec): Zero-copy vector backed by a byte array
- [priority-queue](https://lib.rs/crates/priority-queue): A Priority Queue implemented as a heap with a function to efficiently change the priority of an item
- [histogram](https://lib.rs/crates/histogram): A collection of histogram data structures
- [fraction](https://lib.rs/crates/fraction): Lossless fractions and decimals; drop-in float replacement
- [ringbuffer](https://lib.rs/crates/ringbuffer): A fixed-size circular buffer
- [grid](https://lib.rs/crates/grid): Dynamic generic 2D data structure
- [datas](https://lib.rs/crates/datas): A library for data structures and algorithms and data analysis
- [trees](https://lib.rs/crates/trees): General purpose tree data structures
- [either](https://lib.rs/crates/either): The enum Either with variants Left and Right is a general purpose sum type with two cases
- [either_of](https://lib.rs/crates/either_of): Utilities for working with enumerated types that contain one of 2..n other types
- [petgraph](https://lib.rs/crates/petgraph): Graph data structure library. Provides graph types and graph algorithms.
- [hypergraph](https://lib.rs/crates/hypergraph): Hypergraph is a data structure library to create a directed hypergraph in which a hyperedge can join any number of vertices
- [gix](https://crates.io/crates/gix): Interact with git repositories just like git would
- [git2](https://lib.rs/crates/git2): Bindings to libgit2 for interoperating with git repositories.
### Parser
- [nom](https://lib.rs/crates/nom): A byte-oriented, zero-copy, parser combinators library
- [pest](https://lib.rs/crates/pest): pest is a general purpose parser written in Rust
- [keepass](https://lib.rs/crates/keepass): KeePass .kdbx database file parser
- [html5ever](https://lib.rs/crates/html5ever): High-performance browser-grade HTML5 parser
- [comrak](https://lib.rs/crates/comrak): A 100% CommonMark-compatible GitHub Flavored Markdown parser and formatter
- [uriparse](https://lib.rs/crates/uriparse): A URI parser including relative references
- [markdown](https://lib.rs/crates/markdown): CommonMark compliant markdown parser in Rust with ASTs and extensions
- [evalexpr](https://lib.rs/crates/evalexpr): A powerful arithmetic and boolean expression evaluator
- [uuid](https://lib.rs/crates/uuid): A library to generate and parse UUIDs
- [semver](https://lib.rs/crates/semver): Parser and evaluator for Cargo's flavor of Semantic Versioning
- [url](https://lib.rs/crates/url): URL library for Rust, based on the WHATWG URL Standard
- [httparse](https://lib.rs/crates/httparse): A tiny, safe, speedy, zero-copy HTTP/1.x parser
- [syntect](https://lib.rs/crates/syntect): library for high quality syntax highlighting and code intelligence using Sublime Text's grammars
### Serialization
- [serde](https://lib.rs/crates/serde): A generic serialization/deserialization framework
- [serde_with](https://lib.rs/crates/serde_with): Custom de/serialization functions for Rust's serde
- [bincode](https://lib.rs/crates/bincode): A binary serialization / deserialization strategy for transforming structs into bytes and vice versa!
- [serde_json](https://lib.rs/crates/serde_json): A [JSON](../../../files/JSON.md) serialization file format
- [serde_jsonc](https://lib.rs/crates/serde_jsonc): A JSON serialization file format
- [serde_yaml](https://lib.rs/crates/serde_yaml): [YAML](../../../files/YAML.md) data format for Serde
- [bson](https://lib.rs/crates/bson): Encoding and decoding support for [BSON](../../../files/BSON.md) in Rust
- [toml](https://lib.rs/crates/toml): A native Rust encoder and decoder of [TOML](../../../files/TOML.md)-formatted files and streams.
- [gray_matter](https://lib.rs/crates/gray_matter): Smart front matter parser. An implementation of gray-matter in rust. Parses YAML, JSON, TOML and support for custom parsers.
- [schemars](https://lib.rs/crates/schemars): Generate JSON Schemas from Rust code
- [jsonschema](https://lib.rs/crates/jsonschema): JSON schema validation library
- [json-patch](https://lib.rs/crates/json-patch): RFC 6902, JavaScript Object Notation (JSON) Patch
- [rss](https://lib.rs/crates/rss): Library for serializing the RSS web content syndication format
- [postcard](https://lib.rs/crates/postcard): A no_std + serde compatible message library for Rust
### Encoding
- [bincode](https://lib.rs/crates/bincode): A binary serialization / deserialization strategy for transforming structs into bytes and vice versa!
- [serde](https://lib.rs/crates/serde): A generic serialization/deserialization framework
- [serde_json](https://lib.rs/crates/serde_json): A [JSON](../../../files/JSON.md) serialization file format
- [serde_yaml](https://lib.rs/crates/serde_yaml): [YAML](../../../files/YAML.md) data format for Serde
- [bson](https://lib.rs/crates/bson): Encoding and decoding support for [BSON](../../../files/BSON.md) in Rust
- [hex](https://lib.rs/crates/hex): Encoding and decoding data into/from hexadecimal representation
- [base62](https://lib.rs/crates/base62): A Base62 encoding/decoding library
- [toml](https://lib.rs/crates/toml): A native Rust encoder and decoder of [TOML](../../../files/TOML.md)-formatted files and streams.
- [base64](https://lib.rs/crates/base64): encodes and decodes [base64](../../../files/Base64.md) as bytes or utf8
- [base64-url](https://lib.rs/crates/base64-url): Base64 encode, decode, escape and unescape for URL applications
- [encoding_rs](https://lib.rs/crates/encoding_rs): A Gecko-oriented implementation of the Encoding Standard
- [data-encoding](https://lib.rs/crates/data-encoding): Efficient and customizable data-encoding functions like base64, base32, and hex
- [shell-quote](https://lib.rs/crates/shell-quote): A Rust library for shell-quoting strings, e.g. for interpolating into a Bash script.
- [urlencoding](https://lib.rs/crates/urlencoding): A Rust library for doing URL percentage encoding
- [bytesize](https://lib.rs/crates/bytesize): Semantic wrapper for byte count representations
- [hex-literal](https://lib.rs/crates/hex-literal): Macro for converting hexadecimal string to a byte array at compile time
- [byte-unit](https://lib.rs/crates/byte-unit): A library for interacting with units of bytes
- [bytes](https://lib.rs/crates/bytes): Types and traits for working with bytes
### Algorithms
- [rand](https://lib.rs/crates/rand): Random number generators and other randomness functionality
- [bonsai-bt](https://lib.rs/crates/bonsai-bt): Behaviour trees
- [pathfinding](https://lib.rs/crates/pathfinding): Pathfinding, flow, and graph algorithms
- [treediff](https://lib.rs/crates/treediff): Find the difference between arbitrary data structures
- [raft](https://lib.rs/crates/raft): The rust language implementation of Raft algorithm
### Crypto
- [rustls](https://lib.rs/crates/rustls): Rustls is a modern TLS library written in Rust
- [rustls-pemfile](https://lib.rs/crates/rustls-pemfile): Basic .pem file parser for keys and certificates
- [pem](https://lib.rs/crates/pem): Parse and encode PEM-encoded data
- [x509-parser](https://lib.rs/crates/x509-parser): Parser for the X.509 v3 format (RFC 5280 certificates)
- [openssl](https://lib.rs/crates/openssl): OpenSSL bindings
- [hkdf](https://lib.rs/crates/hkdf): HMAC-based Extract-and-Expand Key Derivation Function (HKDF)
- [ed25519-compact](https://lib.rs/crates/ed25519-compact): A small, self-contained, wasm-friendly Ed25519 implementation
- [snow](https://lib.rs/crates/snow): A pure-rust implementation of the Noise Protocol Framework
- [keyring](https://lib.rs/crates/keyring): Cross-platform library for managing passwords/credentials
- [scrypt](https://lib.rs/crates/scrypt): Scrypt password-based key derivation function
- [totp-rs](https://lib.rs/crates/totp-rs): RFC-compliant TOTP implementation with ease of use as a goal and additional QoL features
- [mnemonic](https://lib.rs/crates/mnemonic): Encode any data into a sequence of English words
- [jwt](https://lib.rs/crates/jwt): JSON Web Token library
- [secrets](https://lib.rs/crates/secrets): Protected-access memory for cryptographic secrets
- [redact](https://lib.rs/crates/redact): A simple library for keeping secrets out of logs
- [noise](https://lib.rs/crates/noise): Procedural noise generation library
- [ulid](https://lib.rs/crates/ulid): a Universally Unique Lexicographically Sortable Identifier implementation
#### Hashes
- [digest](https://lib.rs/crates/digest): Traits for cryptographic hash functions and message authentication codes
- [seahash](https://lib.rs/crates/seahash): A blazingly fast, portable hash function with proven statistical guarantees
- [highway](https://lib.rs/crates/highway): Native Rust port of Google's HighwayHash, which makes use of SIMD instructions for a fast and strong hash function
- [md5](https://lib.rs/crates/md5): The package provides the MD5 hash function
- [crc32c](https://lib.rs/crates/crc32c): Safe implementation for hardware accelerated CRC32C instructions with software fallback
- [blake3](https://lib.rs/crates/blake3): the BLAKE3 hash function
- [siphasher](https://lib.rs/crates/siphasher): SipHash-2-4, SipHash-1-3 and 128-bit variants in pure Rust
- [bcrypt](https://lib.rs/crates/bcrypt): Easily hash and verify passwords using bcrypt
- [sha1](https://lib.rs/crates/sha1): SHA-1 hash function
- [sha2](https://lib.rs/crates/sha2): Pure Rust implementation of the SHA-2 hash function family including SHA-224, SHA-256, SHA-384, and SHA-512
- [sha3](https://lib.rs/crates/sha3): Pure Rust implementation of SHA-3, a family of Keccak-based hash functions including the SHAKE family of eXtendable-Output Functions (XOFs), as well as the accelerated variant TurboSHAKE
### Logging
- [log](https://lib.rs/crates/log): A lightweight logging facade for Rust
- [env_logger](https://lib.rs/crates/env_logger): A logging implementation for `log` which is configured via an environment variable
- [prometheus](https://lib.rs/crates/prometheus): Prometheus instrumentation library for Rust applications
- [opentelemetry](https://lib.rs/crates/opentelemetry): OpenTelemetry API for Rust
- [sentry-core](https://lib.rs/crates/sentry-core): Core sentry library used for instrumentation and integration development
- [logging_timer](https://lib.rs/crates/logging_timer): Simple timers that log the elapsed time when dropped
- [dioxus-logger](https://lib.rs/crates/dioxus-logger): A logging utility to provide a standard interface whether you're targeting web desktop, fullstack, and more in Dioxus
- [tracing](https://lib.rs/crates/tracing): advanced logger
- [tracing-appender](https://lib.rs/crates/tracing-appender): Provides utilities for file appenders and making non-blocking writers
- [tracing-loki](https://lib.rs/crates/tracing-loki): A tracing layer for shipping logs to Grafana Loki
### Mail
- [lettre](https://lib.rs/crates/lettre): [Email](../../../internet/eMail.md) client
@ -1094,93 +982,24 @@ Currently, all supported targets follow the assembly code syntax used by LLVM's
### Templates
- [maud](https://lib.rs/crates/maud): Compile-time [HTML](../../../internet/HTML.md) templates
- [tera](https://lib.rs/crates/tera): Template engine based on [Jinja](../../../tools/Jinja.md) templates
- [subst](https://lib.rs/crates/subst): shell-like variable substitution
- [minijinja](https://lib.rs/crates/minijinja): a powerful template engine for Rust with minimal dependencies
- [handlebars](https://lib.rs/crates/handlebars): Handlebars templating implemented in Rust
### Media
#### Images
- [image](https://lib.rs/crates/image): Imaging library. Provides basic image processing and encoders/decoders for common image formats.
- [rgb](https://lib.rs/crates/rgb): Pixel types for Rust
- [qrcode](https://lib.rs/crates/qrcode): QR code encoder in Rust
- [gif](https://lib.rs/crates/gif): GIF de- and encoder
- [opencv](https://lib.rs/crates/opencv): Rust bindings for OpenCV
- [imgref](https://lib.rs/crates/imgref): A basic 2-dimensional slice for safe and convenient handling of pixel buffers with width, height & stride
- [palette](https://lib.rs/crates/palette): Convert and manage colors with a focus on correctness, flexibility and ease of use
- [imageproc](https://lib.rs/crates/imageproc): Image processing operations
- [resvg](https://lib.rs/crates/resvg): An SVG rendering library
- [png](https://lib.rs/crates/png): PNG decoding and encoding library in pure Rust
- [webp](https://lib.rs/crates/webp): WebP conversion library
- [image_hasher](https://lib.rs/crates/image_hasher): A simple library that provides perceptual hashing and difference calculation for images
- [dify](https://lib.rs/crates/dify): A fast pixel-by-pixel image comparison tool in Rust
- [qoi](https://lib.rs/crates/qoi): VERY fast encoder/decoder for QOI (Quite Okay Image) format
- [auto-palette](https://lib.rs/crates/auto-palette): 🎨 A Rust library that extracts prominent color palettes from images automatically
- [blockhash](https://lib.rs/crates/blockhash): A perceptual hashing algorithm for detecting similar images
#### Video
- [ffmpeg-next](https://lib.rs/crates/ffmpeg-next): Safe FFmpeg wrapper
- [video-rs](https://lib.rs/crates/video-rs): High-level video toolkit based on ffmpeg
- [ffprobe](https://lib.rs/crates/ffprobe): Typed wrapper for the ffprobe CLI
#### Audio
- [symphonia](https://lib.rs/crates/symphonia): Pure Rust media container and audio decoding library
- [hound](https://lib.rs/crates/hound): A wav encoding and decoding library
- [id3](https://lib.rs/crates/id3): A library for reading and writing ID3 metadata
- [metaflac](https://lib.rs/crates/metaflac): A library for reading and writing FLAC metadata
- [bliss-audio](https://lib.rs/crates/bliss-audio): A song analysis library for making playlists
### 3D
- [glam](https://lib.rs/crates/glam): A simple and fast 3D math library for games and graphics
- [tobj](https://lib.rs/crates/tobj): A lightweight OBJ loader in the spirit of tinyobjloader
- [obj-rs](https://lib.rs/crates/obj-rs): Wavefront obj parser for Rust. It handles both 'obj' and 'mtl' formats.
### CLI
- [argh](https://lib.rs/crates/argh): Derive-based argument parser optimized for code size
- [clap](https://lib.rs/crates/clap): A simple to use, efficient, and full-featured Command Line Argument Parser
- [yansi](https://lib.rs/crates/yansi): A dead simple ANSI terminal color painting library
- [owo-colors](https://lib.rs/crates/owo-colors): Zero-allocation terminal colors that'll make people go owo
- [named-colour](https://lib.rs/crates/named-colour): named-colour provides Hex Codes for popular colour names
- [colored](https://lib.rs/crates/colored): The most simple way to add colors in your terminal
- [crossterm](https://lib.rs/crates/crossterm): A crossplatform terminal library for manipulating terminals
- [trauma](https://lib.rs/crates/trauma): Simplify and prettify HTTP downloads
- [comfy-table](https://lib.rs/crates/comfy-table): An easy to use library for building beautiful tables with automatic content wrapping
- [tabled](https://lib.rs/crates/tabled): An easy to use library for pretty print tables of Rust structs and enums
- [tabular](https://lib.rs/crates/tabular): Plain text tables, aligned automatically
- [rustyline](https://lib.rs/crates/rustyline): Rustyline, a readline implementation based on Antirez's Linenoise
- [rpassword](https://lib.rs/crates/rpassword): Read passwords in console applications
- [inquire](https://lib.rs/crates/inquire): inquire is a library for building interactive prompts on terminals
- [indicatif](https://lib.rs/crates/indicatif): A progress bar and cli reporting library for Rust
- [spinners](https://lib.rs/crates/spinners): Elegant terminal spinners for Rust
- [is-terminal](https://lib.rs/crates/is-terminal): Test whether a given stream is a terminal
- [bishop](https://lib.rs/crates/bishop): Library for visualizing keys and hashes using OpenSSH's Drunken Bishop algorithm
- [termimad](https://lib.rs/crates/termimad): Markdown Renderer for the Terminal
- [rust-script](https://lib.rs/crates/rust-script): Command-line tool to run Rust "scripts" which can make use of crates
- [sysinfo](https://lib.rs/crates/sysinfo): Library to get system information such as processes, CPUs, disks, components and networks
- [which](https://lib.rs/crates/which): A Rust equivalent of Unix command "which". Locate installed executable in cross platforms.
- [ctrlc](https://lib.rs/crates/ctrlc): Easy Ctrl-C handler for Rust projects
- [subprocess](https://lib.rs/crates/subprocess): Execution of child processes and pipelines, inspired by Python's subprocess module, with Rust-specific extensions
- [cmd_lib](https://lib.rs/crates/cmd_lib): Common rust commandline macros and utils, to write shell script like tasks easily
### Compression
- [flate2](https://lib.rs/crates/flate2): DEFLATE compression and decompression exposed as Read/BufRead/Write streams. Supports miniz_oxide and multiple zlib implementations. Supports zlib, gzip, and raw deflate streams.
- [tar](https://lib.rs/crates/tar): A Rust implementation of a [TAR](../../../applications/cli/compression/tar.md) file reader and writer.
- [zstd](https://lib.rs/crates/zstd): Binding for the [zstd compression](../../../files/Zstd%20Compression.md) library
- [unrar](https://lib.rs/crates/unrar): list and extract RAR archives
- [zip](https://lib.rs/crates/zip): Library to support the reading and writing of zip files
- [brotli](https://lib.rs/crates/brotli): A brotli compressor and decompressor
- [huffman-compress2](https://lib.rs/crates/huffman-compress2): Huffman compression given a probability distribution over arbitrary symbols
- [arithmetic-coding](https://lib.rs/crates/arithmetic-coding): fast and flexible arithmetic coding library
### Cache
- [lru](https://lib.rs/crates/lru): A LRU cache implementation
- [moka](https://lib.rs/crates/moka): A fast and concurrent cache library inspired by Java Caffeine
- [ustr](https://lib.rs/crates/ustr): Fast, FFI-friendly string interning
- [cacache](https://lib.rs/crates/cacache): Content-addressable, key-value, high-performance, on-disk cache
- [cached](https://crates.io/crates/cached): Caching Crate
- [memoize](https://lib.rs/crates/memoize): Attribute macro for auto-memoizing functions with somewhat-simple signatures
- [internment](https://lib.rs/crates/internment): Easy interning of data
- [http-cache-semantics](https://lib.rs/crates/http-cache-semantics): RFC 7234. Parses HTTP headers to correctly compute cacheability of responses, even in complex cases
- [assets_manager](https://lib.rs/crates/assets_manager): Conveniently load, cache, and reload external resources
### Databases
- [rusqlite](https://lib.rs/crates/rusqlite): Ergonomic wrapper for [SQLite](../SQLite.md)
@ -1189,291 +1008,34 @@ Currently, all supported targets follow the assembly code syntax used by LLVM's
- [rocksdb](https://lib.rs/crates/rocksdb): embedded database
- [uuid](https://lib.rs/crates/uuid): UUID Generation
- [polars](https://lib.rs/crates/polars): Dataframes computation
- [surrealdb](https://crates.io/crates/surrealdb): A scalable, distributed, collaborative, document-graph database, for the realtime web
- [sql-builder](https://lib.rs/crates/sql-builder): Simple SQL code generator
- [pgvector](https://lib.rs/crates/pgvector): pgvector support for Rust
- [sea-orm](https://lib.rs/crates/sea-orm): 🐚 An async & dynamic ORM for Rust
- [sled](https://lib.rs/crates/sled): Lightweight high-performance pure-rust transactional embedded database
### Date and Time
- [chrono](https://lib.rs/crates/chrono): Date and time library for Rust
- [chrono-tz](https://lib.rs/crates/chrono-tz): TimeZone implementations for chrono from the IANA database
- [humantime](https://lib.rs/crates/humantime): A parser and formatter for `std::time::{Duration, SystemTime}`
- [duration-str](https://lib.rs/crates/duration-str): duration string parser
- [cron](https://lib.rs/crates/cron): A cron expression parser and schedule explorer
- [dateparser](https://lib.rs/crates/dateparser): Parse dates in string formats that are commonly used
- [icalendar](https://lib.rs/crates/icalendar): Strongly typed iCalendar builder and parser
### Network
- [tower](https://lib.rs/crates/tower): Tower is a library of modular and reusable components for building robust clients and servers
- [tungstenite](https://lib.rs/crates/tungstenite): Lightweight stream-based WebSocket implementation
- [tokio-websockets](http://ocean.hydrar.de/s/lib.rs/crates/tokio-websockets): High performance, strict, tokio-util based WebSockets implementation
- [message-io](https://lib.rs/crates/message-io): Fast and easy-to-use event-driven network library
- [ipnet](https://lib.rs/crates/ipnet): Provides types and useful methods for working with IPv4 and IPv6 network addresses
- [object_store](https://lib.rs/crates/object_store): A generic object store interface for uniformly interacting with AWS S3, Google Cloud Storage, Azure Blob Storage and local files
- [matchit](https://lib.rs/crates/matchit): A high performance, zero-copy URL router
- [tun](https://lib.rs/crates/tun): TUN device creation and handling
- [quiche](https://lib.rs/crates/quiche): 🥧 Savoury implementation of the QUIC transport protocol and HTTP/3
- [arti-client](https://lib.rs/crates/arti-client): Library for connecting to the Tor network as an anonymous client
- [etherparse](https://lib.rs/crates/etherparse): A library for parsing & writing a bunch of packet based protocols (EthernetII, IPv4, IPv6, UDP, TCP ...)
- [ldap3](https://lib.rs/crates/ldap3): Pure-Rust LDAP Client
- [hyperlocal](https://lib.rs/crates/hyperlocal): Hyper bindings for Unix domain sockets
- [openssh-sftp-client](https://lib.rs/crates/openssh-sftp-client): Highlevel API used to communicate with openssh sftp server
- [swarm-discovery](https://lib.rs/crates/swarm-discovery): Discovery service for IP-based swarms
- [libmdns](https://lib.rs/crates/libmdns): mDNS Responder library for building discoverable LAN services in Rust
- [networkmanager](https://lib.rs/crates/networkmanager): Bindings for the Linux NetworkManager
- [renet](https://lib.rs/crates/renet): Server/Client network library for multiplayer games with authentication and connection management
- [dhcproto](https://lib.rs/crates/dhcproto): A DHCP parser and encoder for DHCPv4/DHCPv6. dhcproto aims to be a functionally complete DHCP implementation.
- [irc](https://lib.rs/crates/irc): the irc crate - usable, async IRC for Rust
- [ssh2](https://lib.rs/crates/ssh2): Bindings to libssh2 for interacting with SSH servers and executing remote commands, forwarding local ports, etc
- [openssh](https://lib.rs/crates/openssh): SSH through OpenSSH
- [amqprs](https://lib.rs/crates/amqprs): AMQP 0-9-1 client implementation for RabbitMQ
- [wyoming](https://lib.rs/crates/wyoming): Abstractions over the Wyoming protocol
### HTTP
- [hyper](https://lib.rs/crates/hyper): A fast and correct [HTTP](../../../internet/HTTP.md) library
- [reqwest](https://lib.rs/crates/reqwest): higher level [HTTP](../../../internet/HTTP.md) client library
- [ureq](https://lib.rs/crates/ureq): Simple, safe HTTP client
- [curl](https://lib.rs/crates/curl): Rust bindings to libcurl for making HTTP requests
- [actix-web](https://lib.rs/crates/actix-web): Actix Web is a powerful, pragmatic, and extremely fast web framework for Rust
- [rocket](https://lib.rs/crates/rocket): web server framework for Rust
- [thirtyfour](https://lib.rs/crates/thirtyfour): Thirtyfour is a Selenium / WebDriver library for Rust, for automated website UI testing
- [http-types](https://lib.rs/crates/http-types): Common types for HTTP operations
- [headers](https://lib.rs/crates/headers): typed HTTP headers
- [cookie](https://lib.rs/crates/cookie): HTTP cookie parsing and cookie jar management. Supports signed and private (encrypted, authenticated) jars.
- [http](https://lib.rs/crates/http): A set of types for representing HTTP requests and responses
- [h2](https://lib.rs/crates/h2): An HTTP/2 client and server
- [h3](https://lib.rs/crates/h3): An async HTTP/3 implementation
- [mime](https://lib.rs/crates/mime): Strongly Typed Mimes
- [scraper](https://lib.rs/crates/scraper): HTML parsing and querying with CSS selectors
- [selectors](https://lib.rs/crates/selectors): CSS Selectors matching for Rust
- [spider](https://lib.rs/crates/spider): A web crawler and scraper, building blocks for data curation workloads
- [htmlize](https://lib.rs/crates/htmlize): Encode and decode HTML entities in UTF-8 according to the standard
- [ammonia](https://lib.rs/crates/ammonia): HTML Sanitization
- [rookie](https://lib.rs/crates/rookie): Load cookies from your web browsers
- [tonic](https://lib.rs/crates/tonic): A gRPC over HTTP/2 implementation focused on high performance, interoperability, and flexibility
- [web-sys](https://lib.rs/crates/web-sys): Bindings for all Web APIs, a procedurally generated crate from WebIDL
- [jsonwebtoken](https://lib.rs/crates/jsonwebtoken): Create and decode JWTs in a strongly typed way
- [http-range-header](https://lib.rs/crates/http-range-header): No-dep range header parser
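
To make this concrete, a minimal `reqwest` sketch (an assumption here: the crate's optional `blocking` feature is enabled, so the example does not need an async runtime):

```rust
// Minimal sketch, assuming reqwest with the "blocking" feature enabled.
fn main() -> Result<(), reqwest::Error> {
    // Fetch a page and print how many bytes the body contains.
    let body = reqwest::blocking::get("https://example.org")?.text()?;
    println!("fetched {} bytes", body.len());
    Ok(())
}
```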
#### Axum
- [axum](https://lib.rs/crates/axum): Web framework that focuses on ergonomics and modularity (minimal sketch after this list)
- [axum-valid](https://crates.io/crates/axum-valid): Provides validation extractors for your Axum application, allowing you to validate data using validator, garde, validify or all of them.
- [axum-prometheus](https://crates.io/crates/axum-prometheus): A tower middleware to collect and export HTTP metrics for Axum
- [axum-htmx](https://crates.io/crates/axum-htmx): A set of htmx extractors, responders, and request guards for axum.
- [axum_session](https://crates.io/crates/axum_session): 📝 Session management layer for axum that supports HTTP and Rest.
- [axum_csrf](https://crates.io/crates/axum_csrf): Library to Provide a CSRF (Cross-Site Request Forgery) protection layer.
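
A minimal `axum` sketch, assuming axum 0.7 (where serving goes through `axum::serve` and a tokio `TcpListener`) and tokio with its full feature set:

```rust
use axum::{routing::get, Router};

#[tokio::main]
async fn main() {
    // A single route returning plain text.
    let app = Router::new().route("/", get(|| async { "Hello, world!" }));

    // axum 0.7 serves through tokio's TcpListener.
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```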
### Text
- [regex](https://lib.rs/crates/regex): An implementation of [regular expressions](../../../tools/Regex.md) for Rust. This implementation uses finite automata and guarantees linear time matching on all inputs. (sketch after this list)
- [fancy-regex](https://lib.rs/crates/fancy-regex): An implementation of regexes, supporting a relatively rich set of features, including backreferences and look-around
- [pretty_regex](https://lib.rs/crates/pretty_regex): 🧶 Elegant and readable way of writing regular expressions
- [comfy-table](https://lib.rs/crates/comfy-table): An easy to use library for building beautiful tables with automatic content wrapping
- [similar](https://lib.rs/crates/similar): A diff library for Rust
- [dissimilar](https://lib.rs/crates/dissimilar): Diff library with semantic cleanup, based on Google's diff-match-patch
- [strsim](https://lib.rs/crates/strsim): Implementations of string similarity metrics. Includes Hamming, Levenshtein, OSA, Damerau-Levenshtein, Jaro, Jaro-Winkler, and Sørensen-Dice.
- [enquote](https://lib.rs/crates/enquote): Quotes and unquotes strings
- [emojis](https://lib.rs/crates/emojis): ✨ Lookup emoji in *O(1)* time, access metadata and GitHub shortcodes, iterate over all emoji, and more!
- [text-splitter](https://lib.rs/crates/text-splitter): Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and tokens, and is callable from Rust and Python.
- [wildcard](https://lib.rs/crates/wildcard): Wildcard matching
- [wildmatch](https://lib.rs/crates/wildmatch): Simple string matching with single- and multi-character wildcard operator
- [textwrap](https://lib.rs/crates/textwrap): Library for word wrapping, indenting, and dedenting strings. Has optional support for Unicode and emojis as well as machine hyphenation.
- [pad](https://lib.rs/crates/pad): Library for padding strings at runtime
- [const-str](https://lib.rs/crates/const-str): compile-time string operations
- [const_format](https://lib.rs/crates/const_format): Compile-time string formatting
- [convert_case](https://lib.rs/crates/convert_case): Convert strings into any case
- [heck](https://lib.rs/crates/heck): heck is a case conversion library
- [html2md](https://lib.rs/crates/html2md): Library to convert simple html documents into markdown
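
A minimal `regex` sketch illustrating the compile-once, match-many pattern:

```rust
use regex::Regex;

fn main() {
    // Compilation is the expensive part; keep the Regex around and reuse it.
    let re = Regex::new(r"(\d{4})-(\d{2})-(\d{2})").unwrap();

    let caps = re.captures("Released on 2024-12-23").unwrap();
    assert_eq!(&caps[1], "2024");
    assert!(re.is_match("2025-01-30"));
}
```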
### AI
- [safetensors](https://lib.rs/crates/safetensors): Provides functions to read and write safetensors which aim to be safer than their PyTorch counterpart.
- [burn](https://lib.rs/crates/burn): Flexible and Comprehensive Deep Learning Framework in Rust
- [ollama-rs](https://lib.rs/crates/ollama-rs): A Rust library for interacting with the Ollama API
- [linfa](https://lib.rs/crates/linfa): A Machine Learning framework for Rust
- [neurons](https://lib.rs/crates/neurons): Neural networks from scratch, in Rust
### Concurrency
- [parking_lot](https://lib.rs/crates/parking_lot): More compact and efficient implementations of the standard synchronization primitives
- [crossbeam](https://lib.rs/crates/crossbeam): Tools for concurrent programming
- [rayon](https://lib.rs/crates/rayon): Simple work-stealing parallelism for Rust (sketch after this list)
- [dashmap](https://lib.rs/crates/dashmap): Fast concurrent HashMap
- [spin](https://lib.rs/crates/spin): Spin-based synchronization primitives
- [flume](https://lib.rs/crates/flume): A blazingly fast multi-producer channel
- [state](https://lib.rs/crates/state): A library for safe and effortless global and thread-local state management
- [atomic](https://lib.rs/crates/atomic): Generic `Atomic<T>` wrapper type
- [yaque](https://lib.rs/crates/yaque): Yaque is yet another disk-backed persistent queue for Rust
- [kanal](https://lib.rs/crates/kanal): The fast sync and async channel that Rust deserves
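
A minimal `rayon` sketch: switching `iter()` to `par_iter()` spreads the work across a work-stealing thread pool.

```rust
use rayon::prelude::*;

fn main() {
    let numbers: Vec<u64> = (1..=1_000_000).collect();

    // Same shape as the sequential iterator chain, but executed in parallel.
    let sum_of_squares: u64 = numbers.par_iter().map(|n| n * n).sum();
    println!("{sum_of_squares}");
}
```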
### Memory Management
- [jemallocator](https://lib.rs/crates/jemallocator): jemalloc allocator
- [memmap2](https://lib.rs/crates/memmap2): Cross-platform API for memory-mapped file I/O
- [sharded-slab](https://lib.rs/crates/sharded-slab): lock free concurrent slab allocation
- [heapless](https://lib.rs/crates/heapless): static friendly data structures without heap allocation
- [bumpalo](https://lib.rs/crates/bumpalo): bump allocation arena
- [singlyton](https://lib.rs/crates/singlyton): [Singleton](../patterns/creational/Singleton%20Pattern.md) for Rust
- [pipe](https://lib.rs/crates/pipe): Synchronous Read/Write memory pipe
- [memory_storage](https://lib.rs/crates/memory_storage): Vec-like data structure with constant index
- [effective-limits](https://lib.rs/crates/effective-limits): Estimate effective resource limits for a process
- [iter-chunks](https://lib.rs/crates/iter-chunks): Extend Iterator with chunks
- [shared_vector](https://lib.rs/crates/shared_vector): Reference counted vector data structure
- [census](https://lib.rs/crates/census): Keeps an inventory of living objects
- [static_cell](https://lib.rs/crates/static_cell): Statically allocated, initialized at runtime cell
- [arcstr](https://lib.rs/crates/arcstr): A better reference-counted string type, with zero-cost (allocation-free) support for string literals, and reference counted substrings
- [bytebuffer](https://lib.rs/crates/bytebuffer): A bytebuffer for networking and binary protocols
### Science
- [syunit](https://lib.rs/crates/syunit): SI Units
- [uom](https://lib.rs/crates/uom): Units of measurement
- [measurements](https://lib.rs/crates/measurements): Handle metric, imperial, and other measurements with ease! Types: Length, Temperature, Weight, Volume, Pressure
- [t4t](https://lib.rs/crates/t4t): game theory toolbox
### Hardware / Embedded
- [virt](https://lib.rs/crates/virt): Rust bindings to the libvirt C library
- [qapi](https://lib.rs/crates/qapi): QEMU QMP and Guest Agent API
- [bootloader](https://lib.rs/crates/bootloader): An experimental x86_64 bootloader that works on both BIOS and UEFI systems
- [embedded-graphics](https://lib.rs/crates/embedded-graphics): Embedded graphics library for small hardware displays
- [riscv](https://lib.rs/crates/riscv): Low level access to RISC-V processors
- [aarch64-cpu](https://lib.rs/crates/aarch64-cpu): Low level access to processors using the AArch64 execution state
- [uefi](https://lib.rs/crates/uefi): safe UEFI wrapper
- [elf](https://lib.rs/crates/elf): A pure-rust library for parsing ELF files
- [smoltcp](https://lib.rs/crates/smoltcp): A TCP/IP stack designed for bare-metal, real-time systems without a heap
- [fatfs](https://lib.rs/crates/fatfs): FAT filesystem library
### Metrics
- [criterion2](https://lib.rs/crates/criterion2): Statistics-driven micro-benchmarking library
- [inferno](https://lib.rs/crates/inferno): Rust port of the FlameGraph performance profiling tool suite
- [divan](https://lib.rs/crates/divan): Statistically-comfy benchmarking library
### Testing
- [test-log](https://lib.rs/crates/test-log): A replacement of the `#[test]` attribute that initializes logging and/or tracing infrastructure before running tests
- [googletest](https://lib.rs/crates/googletest): A rich assertion and matcher library inspired by GoogleTest for C++
- [predicates](https://lib.rs/crates/predicates): An implementation of boolean-valued predicate functions
- [validator](https://lib.rs/crates/validator): Common validation functions (email, url, length, …) and trait - to be used with validator_derive
- [garde](https://lib.rs/crates/garde): Validation library
- [fake](https://lib.rs/crates/fake): An easy to use library and command line for generating fake data like name, number, address, lorem, dates, etc
- [static_assertions](https://lib.rs/crates/static_assertions): Compile-time assertions to ensure that invariants are met
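
A minimal `static_assertions` sketch: the checks below run at compile time, so a violated invariant fails the build rather than a test run.

```rust
use static_assertions::{assert_eq_size, const_assert};

// Both checks happen at compile time; main() has nothing left to verify.
const_assert!(std::mem::size_of::<u64>() == 8);
assert_eq_size!(u32, [u8; 4]);

fn main() {}
```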
### i18n
- [iso_currency](https://lib.rs/crates/iso_currency): ISO 4217 currency codes
- [iso_country](https://lib.rs/crates/iso_country): ISO3166-1 countries
- [sys-locale](https://lib.rs/crates/sys-locale): Small and lightweight library to obtain the active system locale
### Async
- [tokio](https://lib.rs/crates/tokio): An event-driven, non-blocking I/O platform for writing asynchronous I/O backed applications (sketch after this list)
- [futures](https://lib.rs/crates/futures): An implementation of futures and streams featuring zero allocations, composability, and iterator-like interfaces
- [mio](https://lib.rs/crates/mio): Lightweight non-blocking I/O
- [deadpool](https://lib.rs/crates/deadpool): Dead simple async pool
- [blocking](https://lib.rs/crates/blocking): A thread pool for isolating blocking I/O in async programs
- [pollster](https://lib.rs/crates/pollster): Synchronously block the thread until a future completes
- [smol](https://lib.rs/crates/smol): A small and fast async runtime
- [async-stream](https://lib.rs/crates/async-stream): Asynchronous streams using async & await notation
- [async-trait](https://lib.rs/crates/async-trait): Type erasure for async trait methods
- [once_cell](https://lib.rs/crates/once_cell): Lazy values
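
A minimal `tokio` sketch (assuming tokio with its full feature set): two tasks are spawned onto the runtime and awaited concurrently.

```rust
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    // Both tasks run concurrently on the tokio runtime.
    let a = tokio::spawn(async {
        sleep(Duration::from_millis(50)).await;
        1
    });
    let b = tokio::spawn(async {
        sleep(Duration::from_millis(50)).await;
        2
    });

    let (a, b) = (a.await.unwrap(), b.await.unwrap());
    assert_eq!(a + b, 3);
}
```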
### Macros
- [proc-macro2](https://lib.rs/crates/proc-macro2): A substitute implementation of the compiler's proc_macro API to decouple token-based libraries from the procedural macro use case
- [syn](https://lib.rs/crates/syn): Parse Rust syntax into AST
- [quote](https://lib.rs/crates/quote): Turn Rust syntax into TokenStream
- [paste](https://lib.rs/crates/paste): Concat Rust idents
### Build Tools
- [flamegraph](https://lib.rs/crates/flamegraph): A simple cargo subcommand for generating flamegraphs, using inferno under the hood
- [cargo-hack](https://lib.rs/crates/cargo-hack): Cargo subcommand to provide various options useful for testing and continuous integration
- [cargo-outdated](https://lib.rs/crates/cargo-outdated): Cargo subcommand for displaying when dependencies are out of date
- [cargo-binstall](https://lib.rs/crates/cargo-binstall): Binary installation for rust projects
- [cargo-cache](https://lib.rs/crates/cargo-cache): Manage cargo cache, show sizes and remove directories selectively
- [cargo-watch](https://lib.rs/crates/cargo-watch): Watches over your Cargo project's source
- [cargo-expand](https://lib.rs/crates/cargo-expand): Wrapper around `rustc -Zunpretty=expanded`. Shows the result of macro expansion and `#[derive]` expansion.
- [cargo-audit](https://lib.rs/crates/cargo-audit): Audit Cargo.lock for crates with security vulnerabilities
- [cargo-aur](https://lib.rs/crates/cargo-aur): Prepare Rust projects to be released on the Arch Linux User Repository
- [cargo-bom](https://lib.rs/crates/cargo-bom): Bill of Materials for Rust Crates
- [cc](https://lib.rs/crates/cc): A build-time dependency for Cargo build scripts to assist in invoking the native C compiler to compile native C code into a static archive to be linked into Rust code
- [cmake](https://lib.rs/crates/cmake): A build dependency for running cmake to build a native library
- [cross](https://lib.rs/crates/cross): Zero setup cross compilation and cross testing
- [wasm-bindgen](https://lib.rs/crates/wasm-bindgen): Easy support for interacting between JS and Rust
### Math
- [num](https://lib.rs/crates/num): A collection of numeric types and traits for Rust, including bigint, complex, rational, range iterators, generic integers, and more!
- [num-format](https://lib.rs/crates/num-format): A Rust crate for producing string-representations of numbers, formatted according to international standards
- [num-rational](https://lib.rs/crates/num-rational): Rational numbers implementation for Rust
- [num-complex](https://lib.rs/crates/num-complex): Complex numbers implementation for Rust
- [statrs](https://lib.rs/crates/statrs): Statistical computing library for Rust
- [bigdecimal](https://lib.rs/crates/bigdecimal): Arbitrary precision decimal numbers
- [nalgebra](https://lib.rs/crates/nalgebra): General-purpose linear algebra library with transformations and statically-sized or dynamically-sized matrices
- [euclid](https://lib.rs/crates/euclid): Geometry primitives
- [ultraviolet](https://lib.rs/crates/ultraviolet): A crate to do linear algebra, fast
- [peroxide](https://lib.rs/crates/peroxide): Comprehensive Rust scientific computation library containing linear algebra, numerical analysis, statistics, and machine learning tools with familiar syntax
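
A minimal `nalgebra` sketch: a 90° rotation matrix applied to a 2D vector.

```rust
use nalgebra::{Matrix2, Vector2};

fn main() {
    // Row-major constructor: a 90 degree counter-clockwise rotation.
    let rotation = Matrix2::new(0.0, -1.0,
                                1.0,  0.0);
    let v = Vector2::new(1.0, 0.0);

    assert_eq!(rotation * v, Vector2::new(0.0, 1.0));
}
```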
### Desktop
- [notify-rust](https://lib.rs/crates/notify-rust): Show desktop notifications (Linux, BSD, macOS). Pure Rust dbus client and server.
- [arboard](https://lib.rs/crates/arboard): Image and text handling for the OS clipboard
### Configuration
- [config](https://lib.rs/crates/config): Layered configuration system for Rust applications
- [envy](https://lib.rs/crates/envy): deserialize env vars into typesafe structs
### Language Extensions
#### Enums
- [strum](https://lib.rs/crates/strum): Helpful macros for working with enums and strings
- [enum_dispatch](https://lib.rs/crates/enum_dispatch): Near drop-in replacement for dynamic-dispatched method calls with up to 10x the speed
- [num_enum](https://lib.rs/crates/num_enum): Procedural macros to make inter-operation between primitives and enums easier
- [enum-display](https://lib.rs/crates/enum-display): A macro to derive Display for enums
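
A minimal `strum` sketch (assuming the crate's `derive` feature), deriving string conversions for an enum:

```rust
// Assumes strum with the "derive" feature enabled.
use std::str::FromStr;
use strum::{Display, EnumString};

#[derive(Debug, PartialEq, EnumString, Display)]
enum Color {
    Red,
    Green,
}

fn main() {
    assert_eq!(Color::from_str("Red").unwrap(), Color::Red);
    assert_eq!(Color::Green.to_string(), "Green");
}
```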
#### Memory
- [smol_str](https://lib.rs/crates/smol_str): small-string optimized string type with O(1) clone
- [beef](https://lib.rs/crates/beef): More compact Cow
- [dyn-clone](https://lib.rs/crates/dyn-clone): Clone trait that is dyn-compatible
- [memoffset](https://lib.rs/crates/memoffset): offset_of functionality for Rust structs
- [az](https://lib.rs/crates/az): Casts and checked casts
- [zerocopy](https://lib.rs/crates/zerocopy): Zerocopy makes zero-cost memory manipulation effortless. We write "unsafe" so you don't have to.
- [once_cell](https://lib.rs/crates/once_cell): Single assignment cells and lazy values
- [lazy_static](https://lib.rs/crates/lazy_static): A macro for declaring lazily evaluated statics in Rust
- [globals](https://lib.rs/crates/globals): Painless global variables in Rust
- [lazy_format](https://lib.rs/crates/lazy_format): A utility crate for lazily formatting values for later
- [fragile](https://lib.rs/crates/fragile): Provides wrapper types for sending non-send values to other threads
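
A minimal `once_cell` sketch: a global map that is initialized lazily on first access.

```rust
use once_cell::sync::Lazy;
use std::collections::HashMap;

// Built on first access, then shared (immutably) for the rest of the program.
static COUNTRIES: Lazy<HashMap<&'static str, &'static str>> =
    Lazy::new(|| HashMap::from([("DE", "Germany"), ("FR", "France")]));

fn main() {
    assert_eq!(COUNTRIES["DE"], "Germany");
}
```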
#### Syntax
- [tap](https://lib.rs/crates/tap): Generic extensions for tapping values in Rust
- [option_trait](https://lib.rs/crates/option_trait): Helper traits for more generalized options
- [cascade](https://lib.rs/crates/cascade): Dart-like cascade macro for Rust
- [enclose](https://lib.rs/crates/enclose): A convenient macro, for cloning values into a closure
- [extend](https://lib.rs/crates/extend): Create extensions for types you don't own with extension traits but without the boilerplate
- [hex_lit](https://lib.rs/crates/hex_lit): Hex macro literals without use of hex macros
- [replace_with](https://lib.rs/crates/replace_with): Temporarily take ownership of a value at a mutable location, and replace it with a new value based on the old one
- [scopeguard](https://lib.rs/crates/scopeguard): A RAII scope guard that will run a given closure when it goes out of scope, even if the code between panics (assuming unwinding panic).
- [backon](https://lib.rs/crates/backon): Make retry like a built-in feature provided by Rust
- [tryhard](https://lib.rs/crates/tryhard): Easily retry futures
- [retry](https://lib.rs/crates/retry): Utilities for retrying operations that can fail
- [statum](https://lib.rs/crates/statum): Compile-time state machine magic for Rust: Zero-boilerplate typestate patterns with automatic transition validation
- [formatx](https://lib.rs/crates/formatx): A macro for formatting non literal strings at runtime
- [erased](https://lib.rs/crates/erased): Erase the type of a reference or box, retaining the lifetime
- [include_dir](https://lib.rs/crates/include_dir): Embed the contents of a directory in your binary
- [stacker](https://lib.rs/crates/stacker): A stack growth library useful when implementing deeply recursive algorithms that may accidentally blow the stack
- [recursive](https://lib.rs/crates/recursive): Easy recursion without stack overflows
#### Type Extensions
- [itertools](https://lib.rs/crates/itertools): Extra iterator adaptors, iterator methods, free functions, and macros (sketch after this list)
- [itermore](https://lib.rs/crates/itermore): 🤸‍♀️ More iterator adaptors
- [derive_more](https://lib.rs/crates/derive_more): Adds #[derive(x)] macros for more traits
- [derive_builder](https://lib.rs/crates/derive_builder): Rust macro to automatically implement the builder pattern for arbitrary structs
- [ordered-float](https://lib.rs/crates/ordered-float): Wrappers for total ordering on floats
- [stdext](https://lib.rs/crates/stdext): Extensions for the Rust standard library structures
- [bounded-integer](https://lib.rs/crates/bounded-integer): Bounded integers
- [tuples](https://lib.rs/crates/tuples): Provides many useful tools related to tuples
- [fallible-iterator](https://lib.rs/crates/fallible-iterator): Fallible iterator traits
- [sequential](https://lib.rs/crates/sequential): A configurable sequential number generator
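
A minimal `itertools` sketch with two typical adaptors, `join` and `tuple_windows`:

```rust
use itertools::Itertools;

fn main() {
    // join: concatenate Display-able items with a separator.
    let csv = ["a", "b", "c"].iter().join(",");
    assert_eq!(csv, "a,b,c");

    // tuple_windows: overlapping windows yielded as tuples.
    let pairs: Vec<(i32, i32)> = (1..=4).tuple_windows().collect();
    assert_eq!(pairs, vec![(1, 2), (2, 3), (3, 4)]);
}
```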
#### Compilation
- [cfg-if](https://lib.rs/crates/cfg-if): A macro to ergonomically define an item depending on a large number of #[cfg] parameters. Structured like an if-else chain, the first matching branch is the item that gets emitted.
- [cfg_aliases](https://lib.rs/crates/cfg_aliases): A tiny utility to help save you a lot of effort with long winded #[cfg()] checks
- [nameof](https://lib.rs/crates/nameof): Provides a Rust macro to determine the string name of a binding, type, const, or function
- [tynm](https://lib.rs/crates/tynm): Returns type names in shorter form
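
A minimal `cfg-if` sketch: the first matching branch is the one that gets compiled.

```rust
use cfg_if::cfg_if;

cfg_if! {
    if #[cfg(unix)] {
        fn platform() -> &'static str { "unix" }
    } else if #[cfg(windows)] {
        fn platform() -> &'static str { "windows" }
    } else {
        fn platform() -> &'static str { "other" }
    }
}

fn main() {
    println!("running on {}", platform());
}
```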
#### Const
- [constcat](https://lib.rs/crates/constcat): concat! with support for const variables and expressions
- [konst](https://lib.rs/crates/konst): Const equivalents of std functions, compile-time comparison, and parsing
### Geo
- [geo](https://lib.rs/crates/geo): Geospatial primitives and algorithms
- [geojson](https://lib.rs/crates/geojson): Read and write GeoJSON vector geographic data
- [geozero](https://lib.rs/crates/geozero): Zero-Copy reading and writing of geospatial data in WKT/WKB, GeoJSON, MVT, GDAL, and other formats
- [versatiles](https://lib.rs/crates/versatiles): A toolbox for converting, checking and serving map tiles in various formats
- [ipcap](https://lib.rs/crates/ipcap): 🌍 A CLI & library for decoding IP addresses into state, postal code, country, coordinates, etc without internet access

technology/libvirt.md Normal file
View file

@ -0,0 +1,11 @@
---
obj: application
repo: https://gitlab.com/libvirt
website: https://libvirt.org
arch-wiki: https://wiki.archlinux.org/title/Libvirt
---
# libvirt
#wip
libvirt is an open-source API and toolkit for managing virtualization platforms. [virt-manager](./applications/utilities/virt-manager.md) provides a GUI frontend for it.

View file

@ -17,4 +17,4 @@ Proxmox Virtual Environment (Proxmox VE) is an open-source virtualization platfo
### 4. **Storage Options:**
- Proxmox VE offers various storage options, including local storage, networked storage (Ceph, NFS, iSCSI), and [ZFS](filesystems/ZFS.md) (Zettabyte File System) support. This allows users to choose the storage solution that best fits their requirements.
### 5. **Backup and Restore:**
- The built-in backup and restore features simplify data protection in Proxmox VE. Users can create scheduled backups of VMs and containers, allowing for quick recovery in case of data loss or system failures.

View file

@ -1,74 +0,0 @@
---
obj: concept
arch-wiki: https://wiki.archlinux.org/title/XDG_user_directories
rev: 2025-01-08
---
# XDG Directories
The XDG User Directories are a standardized way to define and access common user directories in Unix-like operating systems, primarily defined by the XDG Base Directory Specification from the FreeDesktop.org project.
These directories provide users and applications with predefined paths for storing specific types of files, such as documents, downloads, music, and more. By using these directories, applications can integrate better with the operating system's file structure and provide a consistent experience for users.
## Creating default directories
Creating a full suite of localized default user directories within the `$HOME` directory can be done automatically by running:
```sh
xdg-user-dirs-update
```
> **Tip**: To force the creation of English-named directories, `LC_ALL=C.UTF-8 xdg-user-dirs-update --force` can be used.
When executed, it will also automatically:
- Create a local `~/.config/user-dirs.dirs` configuration file: used by applications to find and use home directories specific to an account.
- Create a local `~/.config/user-dirs.locale` configuration file: used to set the language according to the locale in use.
The user service `xdg-user-dirs-update.service` will also be installed and enabled by default, in order to keep your directories up to date by running this command at the beginning of each login session.
## Creating custom directories
Both the local `~/.config/user-dirs.dirs` and global `/etc/xdg/user-dirs.defaults` configuration files use the following environment-variable format to point to user directories: `XDG_DIRNAME_DIR="$HOME/directory_name"`. An example configuration file might look like this (these are all the template directories):
```sh
# ~/.config/user-dirs.dirs
XDG_DESKTOP_DIR="$HOME/Desktop"
XDG_DOCUMENTS_DIR="$HOME/Documents"
XDG_DOWNLOAD_DIR="$HOME/Downloads"
XDG_MUSIC_DIR="$HOME/Music"
XDG_PICTURES_DIR="$HOME/Pictures"
XDG_PUBLICSHARE_DIR="$HOME/Public"
XDG_TEMPLATES_DIR="$HOME/Templates"
XDG_VIDEOS_DIR="$HOME/Videos"
```
As xdg-user-dirs sources the local configuration file to point to the appropriate user directories, it is possible to specify custom folders. For example, if `XDG_DOWNLOAD_DIR` is set to `$HOME/Internet` in `~/.config/user-dirs.dirs`, any application that uses this variable will use that directory.
> **Note**: Like with many configuration files, local settings override global settings. It will also be necessary to create any new custom directories.
Alternatively, it is also possible to specify custom folders using the command line. For example, the following command will produce the same results as the above configuration file edit:
```sh
xdg-user-dirs-update --set DOWNLOAD ~/Internet
```
## Querying configured directories
Once set, any user directory can be viewed with xdg-user-dirs. For example, the following command will show the location of the Templates directory, which of course corresponds to the `XDG_TEMPLATES_DIR` variable in the local configuration file:
```sh
xdg-user-dir TEMPLATES
```
## Specification
Please read the full specification. This section will attempt to break down the essence of what it tries to achieve.
Only `XDG_RUNTIME_DIR` is set by default through `pam_systemd`. It is up to the user to explicitly define the other variables according to the specification.
### User directories
- `XDG_CONFIG_HOME`: Where user-specific configurations should be written (analogous to `/etc`). Should default to `$HOME/.config`.
- `XDG_CACHE_HOME`: Where user-specific non-essential (cached) data should be written (analogous to `/var/cache`). Should default to `$HOME/.cache`.
- `XDG_DATA_HOME`: Where user-specific data files should be written (analogous to `/usr/share`). Should default to `$HOME/.local/share`.
- `XDG_STATE_HOME`: Where user-specific state files should be written (analogous to `/var/lib`). Should default to `$HOME/.local/state`.
- `XDG_RUNTIME_DIR`: Used for non-essential, user-specific runtime files such as sockets, named pipes, etc. Not required to have a default value; implementations should issue a warning if it is not set. Must be owned by the user with an access mode of `0700`, live on a local, fully featured filesystem, and exist only for the duration of the user's login. It may be subject to periodic cleanup: files should be modified at least every 6 hours or have their sticky bit set if persistence is desired. Large files should not be stored here, as it may be mounted as a tmpfs. `pam_systemd` sets this to `/run/user/$UID`.
### System directories
- `XDG_DATA_DIRS`: List of directories separated by `:` (analogous to `PATH`). Should default to `/usr/local/share:/usr/share`.
- `XDG_CONFIG_DIRS`: List of directories separated by `:` (analogous to `PATH`). Should default to `/etc/xdg`.
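
As an illustration only (not part of the specification text above), a minimal Rust sketch of how an application might resolve `XDG_CONFIG_HOME` with the mandated `$HOME/.config` fallback, treating relative paths as unset:

```rust
use std::env;
use std::path::PathBuf;

// Resolve XDG_CONFIG_HOME, falling back to $HOME/.config as the spec defines.
fn config_home() -> PathBuf {
    env::var_os("XDG_CONFIG_HOME")
        .map(PathBuf::from)
        .filter(|p| p.is_absolute()) // relative paths are considered invalid
        .unwrap_or_else(|| {
            let home = env::var_os("HOME").expect("HOME is not set");
            PathBuf::from(home).join(".config")
        })
}

fn main() {
    println!("{}", config_home().display());
}
```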

View file

@ -5,6 +5,8 @@ repo: https://github.com/qemu/qemu
rev: 2024-05-02
---
#refactor -> https://wiki.archlinux.org/title/QEMU
# QEMU
QEMU is an open-source emulator and virtualizer that enables running operating systems and various software applications on different hardware architectures. It supports [emulation](../emulation/Emulation.md) of various CPU architectures, including x86, ARM, PowerPC, and SPARC, among others. It allows running [virtual machines](../tools/Virtual%20Machine.md).

View file

@ -3,6 +3,7 @@ aliases: ["VM"]
obj: concept
wiki: https://en.wikipedia.org/wiki/Virtual_machine
---
# Virtual Machine
In computing, a virtual machine (VM) is the virtualization or [emulation](../emulation/Emulation.md) of a computer system. Virtual machines are based on computer architectures and provide the functionality of a physical computer.
@ -10,4 +11,5 @@ Virtual Machine can be used to run operating systems in an isolated environment.
## Virtual Machine Software
- [qemu](../linux/qemu.md)
- [Proxmox](../linux/Proxmox.md)
- [libvirt](../libvirt.md)