Compare commits

4 commits

| Author | SHA1 | Date |
| --- | --- | --- |
| | 51859b6171 | |
| | 8289890ccd | |
| | f715b43402 | |
| | 9d67459479 | |

57 changed files with 413 additions and 6616 deletions
.gitea/workflows/validate_schema.yml (new file, 20 lines)
@ -0,0 +1,20 @@
name: Validate Schema

on:
  push:
    branches:
      - main

jobs:
  validate:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Validation
        uses: docker://git.hydrar.de/mdtools/mdtools:latest
        with:
          entrypoint: /bin/bash
          args: scripts/validate_schema.sh
@ -1,9 +0,0 @@
when:
  - event: push
    branch: main

steps:
  - name: "Validate Schema"
    image: git.hydrar.de/mdtools/mdtools:latest
    commands:
      - /bin/bash scripts/validate_schema.sh
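Both CI configurations run `scripts/validate_schema.sh`, which is not shown in this diff. A minimal sketch of what such a script could look like (hypothetical, not the repository's actual script) is:

```shell
#!/bin/bash
# Hypothetical sketch: fail the build if any Markdown file lacks a YAML
# frontmatter block (the docs below all start with `---`).
set -euo pipefail
status=0
while IFS= read -r f; do
  if ! head -n1 "$f" | grep -qx -- '---'; then
    echo "missing frontmatter: $f" >&2
    status=1
  fi
done < <(find . -name '*.md' -not -path './.git/*')
exit $status
```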
@ -1,6 +1,6 @@
---
obj: meta/collection
rev: 2025-01-30
rev: 2024-07-14
---

# Applications

@ -38,7 +38,6 @@ rev: 2025-01-30

## Desktop
- [KDE Plasma](./desktops/KDE%20Plasma.md)
- [SDDM](./desktops/SDDM.md)
- [dwm](./desktops/dwm.md)
- [picom](./desktops/picom.md)
- [Hyprland](./desktops/hyprland.md)

@ -105,7 +104,6 @@ rev: 2025-01-30
- [SnapDrop](./network/SnapDrop.md)
- [OnionShare](./network/OnionShare.md)
- [qBittorrent](./network/qBittorrent.md)
- [bitmagnet](./web/bitmagnet.md)

## Utilities
- [Bottles](./utilities/Bottles.md)

@ -120,6 +118,8 @@ rev: 2025-01-30
- [Wildcard](utilities/Wildcard.md)
- [Textpieces](utilities/Textpieces.md)
- [ImHex](utilities/ImHex.md)
- [Node Exporter](utilities/node-exporter.md)
- [cAdvisor](utilities/cAdvisor.md)

# Mobile
- [Aegis](./utilities/Aegis.md)

@ -141,7 +141,6 @@ rev: 2025-01-30
- [AdGuard](./web/AdGuard.md)
- [Gitea](./web/Gitea.md)
- [Forgejo](./web/Forgejo.md)
- [Woodpecker CI](./web/WoodpeckerCI.md)
- [SearXNG](./web/Searxng.md)
- [Grocy](./web/Grocy.md)
- [Guacamole](./web/Guacamole.md)

@ -168,6 +167,9 @@ rev: 2025-01-30
- [Caddy](./web/Caddy.md)
- [zigbee2MQTT](./web/zigbee2mqtt.md)
- [dawarich](./web/dawarich.md)
- [Grafana](./web/Grafana.md)
- [Prometheus](./web/Prometheus.md)
- [Loki](./web/loki.md)

# CLI
## Terminal

@ -199,7 +201,6 @@ rev: 2025-01-30
- [bat](./cli/bat.md)
- [glow](./cli/glow.md)
- [tailspin](./cli/tailspin.md)
- [csvlens](./cli/csvlens.md)

### Editor
- [nano](./cli/nano.md)

@ -235,21 +236,11 @@ rev: 2025-01-30
- [yazi](./cli/yazi.md)
- [GPG](../cryptography/GPG.md)
- [OpenSSL](../cryptography/OpenSSL.md)
- [age](../cryptography/age.md)
- [tomb](./cli/tomb.md)
- [dysk](./cli/dysk.md)
- [pass](./cli/pass.md)
- [ocrs](./cli/ocrs.md)
- [stew](./cli/stew.md)
- [names](./cli/names.md)
- [qrtool](./cli/qrtool.md)
- [tagctl](./cli/tagctl.md)
- [unionfarm](./cli/unionfarm.md)
- [xt](./cli/xt.md)
- [refold](./cli/refold.md)
- [rexturl](./cli/rexturl.md)
- [mhost](./cli/mhost.md)
- [timr-tui](./cli/timr-tui.md)

## System
- [Core Utils](./cli/system/Core%20Utils.md)

@ -262,10 +253,6 @@ rev: 2025-01-30
- [mergerfs](../linux/filesystems/MergerFS.md)
- [sshfs](../linux/filesystems/SSHFS.md)
- [wine](../windows/Wine.md)
- [sbctl](../linux/sbctl.md)
- [systemd-cryptenroll](../linux/systemd/systemd-cryptenroll.md)
- [bubblewrap](./utilities/bubblewrap.md)
- [retry-cli](./utilities/retry-cli.md)

## Development
- [act](./development/act.md)

@ -279,9 +266,6 @@ rev: 2025-01-30
- [Docker](../tools/Docker.md)
- [Podman](../tools/Podman.md)
- [serie](./cli/serie.md)
- [usql](./cli/usql.md)
- [kondo](./cli/kondo.md)
- [licensit](./development/licensit.md)

## Media
- [yt-dlp](./media/yt-dlp.md)
@ -1,80 +0,0 @@
---
obj: application
repo: https://github.com/ys-l/csvlens
rev: 2025-01-31
---

# csvlens
`csvlens` is a command-line CSV file viewer. It is like `less`, but made for CSV.

## Usage
Run `csvlens` by providing the CSV filename:

```
csvlens <filename>
```

Pipe CSV data directly to `csvlens`:

```
<your commands producing some csv data> | csvlens
```

### Key bindings

| Key | Action |
| ---------------------------- | ------------------------------------------------------------------ |
| `hjkl` (or `← ↓ ↑ →`) | Scroll one row or column in the given direction |
| `Ctrl + f` (or `Page Down`) | Scroll one window down |
| `Ctrl + b` (or `Page Up`) | Scroll one window up |
| `Ctrl + d` (or `d`) | Scroll half a window down |
| `Ctrl + u` (or `u`) | Scroll half a window up |
| `Ctrl + h` | Scroll one window left |
| `Ctrl + l` | Scroll one window right |
| `Ctrl + ←` | Scroll left to first column |
| `Ctrl + →` | Scroll right to last column |
| `G` (or `End`) | Go to bottom |
| `g` (or `Home`) | Go to top |
| `<n>G` | Go to line `n` |
| `/<regex>` | Find content matching regex and highlight matches |
| `n` (in Find mode) | Jump to next result |
| `N` (in Find mode) | Jump to previous result |
| `&<regex>` | Filter rows using regex (show only matches) |
| `*<regex>` | Filter columns using regex (show only matches) |
| `TAB` | Toggle between row, column or cell selection modes |
| `>` | Increase selected column's width |
| `<` | Decrease selected column's width |
| `Shift + ↓` (or `Shift + j`) | Sort rows or toggle sort direction by the selected column |
| `#` (in Cell mode) | Find and highlight rows like the selected cell |
| `@` (in Cell mode) | Filter rows like the selected cell |
| `y` | Copy the selected row or cell to clipboard |
| `Enter` (in Cell mode) | Print the selected cell to stdout and exit |
| `-S` | Toggle line wrapping |
| `-W` | Toggle line wrapping by words |
| `r` | Reset to default view (clear all filters and custom column widths) |
| `H` (or `?`) | Display help |
| `q` | Exit |

### Optional parameters

* `-d <char>`: Use this delimiter when parsing the CSV (e.g. `csvlens file.csv -d '\t'`). Specify `-d auto` to auto-detect the delimiter.

* `-t`, `--tab-separated`: Use tab as the delimiter (when specified, `-d` is ignored).

* `-i`, `--ignore-case`: Ignore case when searching. This flag is ignored if any uppercase letters are present in the search string.

* `--no-headers`: Do not interpret the first row as headers.

* `--columns <regex>`: Use this regex to select columns to display by default.

* `--filter <regex>`: Use this regex to filter rows to display by default.

* `--find <regex>`: Use this regex to find and highlight matches by default.

* `--echo-column <column_name>`: Print the value of this column at the selected row to stdout on `Enter` key and then exit.
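To try the viewer, a small sample file is enough (the `csvlens` calls are commented because the tool runs an interactive TUI; the file path is an example):

```shell
# Create a small CSV to explore with csvlens.
printf 'name,size\nalpha,10\nbeta,200\n' > /tmp/sample.csv
# csvlens /tmp/sample.csv            # open the file
# csvlens /tmp/sample.csv -d auto    # auto-detect the delimiter
```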
@ -1,71 +1,38 @@
---
obj: application
repo: https://github.com/casey/intermodal
website: https://imdl.io
rev: 2025-01-28
---

# Intermodal
[Repo](https://github.com/casey/intermodal)
Intermodal is a user-friendly and featureful command-line [BitTorrent](../../internet/BitTorrent.md) metainfo utility. The binary is called `imdl` and runs on [Linux](../../linux/Linux.md), [Windows](../../windows/Windows.md), and [macOS](../../macos/macOS.md).

## Usage
### Create torrent file
```shell
imdl torrent create [OPTIONS] <FILES>
imdl torrent create file
```

| Option | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------- |
| `-F, --follow-symlinks` | Follow symlinks in torrent input (default: no) |
| `-f, --force` | Overwrite destination `.torrent` file if it exists |
| `--ignore` | Skip files listed in `.gitignore`, `.ignore`, `.git/info/exclude`, and `git config --get core.excludesFile` |
| `-h, --include-hidden` | Include hidden files that would otherwise be skipped |
| `-j, --include-junk` | Include junk files that would otherwise be skipped |
| `-M, --md5` | Include MD5 checksum of each file in the torrent (warning: MD5 is broken) |
| `--no-created-by` | Do not populate `created by` key with imdl version information |
| `--no-creation-date` | Do not populate `creation date` key with current time |
| `-O, --open` | Open `.torrent` file after creation (uses platform-specific opener) |
| `--link` | Print created torrent `magnet:` URL to standard output |
| `-P, --private` | Set private flag, restricting peer discovery |
| `-S, --show` | Display information about the created torrent file |
| `-V, --version` | Print version number |
| `-A, --allow <LINT>` | Allow specific lint (e.g., `small-piece-length`, `private-trackerless`) |
| `-a, --announce <URL>` | Use primary tracker announce URL for the torrent |
| `-t, --announce-tier <URL-LIST>` | Add tiered tracker announce URLs to the torrent metadata; separate their announce URLs with commas |
| `-c, --comment <TEXT>` | Set comment text in the generated `.torrent` file |
| `--node <NODE>` | Add DHT bootstrap node to the torrent for peer discovery |
| `-g, --glob <GLOB>` | Include or exclude files matching specific glob patterns |
| `-i, --input <INPUT>` | Read contents from input source (file, dir, or standard input) |
| `-N, --name <TEXT>` | Set name of the encoded magnet link to specific text |
| `-o, --output <TARGET>` | Save `.torrent` file to specified target or print to output |
| `--peer <PEER>` | Add peer specification to the generated magnet link |
| `-p, --piece-length <BYTES>` | Set piece length for encoding torrent metadata |
| `--sort-by <SPEC>` | Determine order of files within the encoded torrent (path, size, or both) |
| `-s, --source <TEXT>` | Set source field in encoded torrent metadata to specific text |
| `--update-url <URL>` | Set URL where revised version of metainfo can be downloaded |

Flags:
```shell
-N, --name <TEXT>     Set name of torrent
-i, --input <INPUT>   Torrent Files
-c, --comment <TEXT>  Torrent Comment
-a, --announce <URL>  Torrent Tracker
```

### Show torrent information
```shell
imdl torrent show <torrent>
```

You can output the information as JSON using `--json`.

### Verify torrent
```shell
imdl torrent verify <torrent>
imdl torrent verify --input torr.torrent --content file
```

### Magnet Links
```shell
# Get magnet link from torrent file
imdl torrent link [-s, --select-only <INDICES>...] <torrent>
# Select files to download. Values are indices into the `info.files` list, e.g. `--select-only 1,2,3`.

# Get torrent file from magnet link
imdl torrent from-link [-o, --output <OUT>] <INPUT>

# Announce a torrent
imdl torrent announce <INPUT>
```
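As a quick start, the create/show flow can be sketched like this (the `imdl` calls are commented since the tool may not be installed; the tracker URL and output path are placeholders):

```shell
# Prepare some content to package into a torrent.
mkdir -p /tmp/demo
printf 'hello\n' > /tmp/demo/file.txt
# imdl torrent create --input /tmp/demo --announce https://tracker.example/announce --show
# imdl torrent show --json /tmp/demo.torrent
```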
@ -1,29 +0,0 @@
---
obj: application
repo: https://github.com/tbillington/kondo
rev: 2025-01-28
---

# Kondo 🧹
Cleans `node_modules`, `target`, `build`, and friends from your projects.

Excellent if:
- 💾 You want to back up your code but don't want to include GBs of dependencies
- 🧑‍🎨 You try out lots of projects but hate how much space they occupy
- ⚡️ You like keeping your disks lean and zippy

## Usage
Kondo recursively cleans project directories.

Supported project types: Cargo, Node, Unity, SBT, Haskell Stack, Maven, Unreal Engine, Jupyter Notebook, Python, CMake, Composer, Pub, Elixir, Swift, Gradle, and .NET projects.

Usage: `kondo [OPTIONS] [DIRS]...`

| Option | Description |
| ----------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-I, --ignored-dirs <IGNORED_DIRS>` | Directories to ignore. Will also prevent recursive traversal within |
| `-q, --quiet...` | Quiet mode. Won't output to the terminal. `-qq` prevents all output |
| `-a, --all` | Clean all found projects without confirmation |
| `-L, --follow-symlinks` | Follow symbolic links |
| `-s, --same-filesystem` | Restrict directory traversal to the root filesystem |
| `-o, --older <OLDER>` | Only directories with a file last modified n units of time ago will be looked at. Ex: 20d. Units are m: minutes, h: hours, d: days, w: weeks, M: months and y: years |
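A safe way to experiment is a throwaway project directory (the `kondo` calls are commented since the tool may not be installed; paths are examples):

```shell
# Fake a Cargo project with a build directory that kondo would recognize.
mkdir -p /tmp/proj/target
touch /tmp/proj/Cargo.toml
# kondo /tmp/proj              # interactive: asks before deleting target/
# kondo -a -o 3M /tmp/proj     # no confirmation, only projects idle for >3 months
```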
@ -1,122 +0,0 @@
---
obj: application
repo: https://github.com/lukaspustina/mhost
website: https://mhost.pustina.de
rev: 2025-01-30
---

# mhost
A modern take on the classic host DNS lookup utility, including an easy-to-use and very fast Rust lookup library.

## Use Cases
### Just lookup an IP address
```shell
$ mhost l github.com
```

### Just lookup an IP address, using even more than just your local name servers
```shell
$ mhost server-lists public-dns -o servers.txt
$ mhost --limit 6000 --max-concurrent-servers 1000 --timeout 1 -f servers.txt l www.github.com
```

The first command downloads a list of publicly available name servers that are maintained by the Public DNS community. Usually only a subset of these are reachable, but it is still a large set of active name servers.

The second command uses the name server list from before and queries all of them concurrently. These settings are very aggressive and highly stress your internet connection; mhost's default settings are much more cautious.

### Just lookup an IP address, using UDP, TCP, DoT, and DoH
```shell
$ mhost -s 1.1.1.1 -s tcp:1.1.1.1 -s tls:1.1.1.1:853,tls_auth_name=cloudflare-dns.com -s https:1.1.1.1:443,tls_auth_name=cloudflare-dns.com,name=Cloudflare -p l github.com
```

As already mentioned, mhost supports DNS queries over UDP, TCP, DNS over TLS (DoT), as well as DNS over HTTPS (DoH). In the above example, mhost uses all four protocols to query Cloudflare's name servers.

This command also shows the syntax for name server specification, which in general is `protocol:<host name | ip address>:port,tls_auth_name=hostname,name=human-readable-name`.
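Following that specification, a nameservers file for `-f` lists one server spec per line (a hypothetical example list):

```shell
# One nameserver spec per line, using the protocol:host:port,options syntax above.
cat > /tmp/servers.txt <<'EOF'
1.1.1.1
tcp:9.9.9.9
tls:1.1.1.1:853,tls_auth_name=cloudflare-dns.com
EOF
# mhost -f /tmp/servers.txt l github.com
```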
### Discover a domain
Sometimes you want to know which host names and subdomains a domain has. mhost offers a simple command to help you find these. Please mind that mhost only uses DNS-specific discovery methods. If you want even deeper discoveries using Google, Shodan, etc., there are other tools available.

```shell
$ mhost -p d -s github.com
```

This command uses the predefined name servers to discover the GitHub domain. The `-s` option reduces all discovered names to real subdomains of github.com.

### Explore autonomous systems
You can go one step further and explore the autonomous systems GitHub uses. In order to discover those, you can use the following commands:

```shell
$ mhost -p l --all -w github.com
$ mhost -p l --all 140.82.121.0/24
```

### Check your name server configuration
```shell
$ mhost -p c github.com
```

## Usage
mhost has three main commands: `lookup`, `discover`, and `check`. `lookup` looks up arbitrary DNS records of a domain name. `discover` tries various methods to discover host names and subdomains of a domain. `check` uses lints to check whether all records of a domain name adhere to the DNS RFCs.

### General Options

| Option | Description |
| ------------------------------------------ | -------------------------------------------------------------------------------------------------- |
| `--use-system-resolv-opt` | Uses options set in `/etc/resolv.conf` |
| `--no-system-nameservers` | Ignores nameservers from `/etc/resolv.conf` |
| `-S, --no-system-lookups` | Ignores system nameservers for lookups |
| `--resolv-conf <FILE>` | Uses alternative resolv.conf file |
| `--ndots <NUMBER>` | Sets number of dots to qualify domain name as FQDN [default: 1] |
| `--search-domain <DOMAIN>` | Sets the search domain to append if HOSTNAME has less than `ndots` dots |
| `--system-nameserver <IP ADDR>...` | Adds system nameserver for system lookups; only IP addresses allowed |
| `-s, --nameserver <HOSTNAME / IP ADDR>...` | Adds nameserver for lookups |
| `-p, --predefined` | Adds predefined nameservers for lookups |
| `--predefined-filter <PROTOCOL>` | Filters predefined nameservers by protocol [default: udp] [possible values: udp, tcp, https, tls] |
| `--list-predefined` | Lists all predefined nameservers |
| `-f, --nameservers-from-file <FILE>` | Adds nameservers from file |
| `--limit <NUMBER>` | Sets max. number of nameservers to query [default: 100] |
| `--max-concurrent-servers <NUMBER>` | Sets max. concurrent nameservers [default: 10] |
| `--max-concurrent-requests <NUMBER>` | Sets max. concurrent requests per nameserver [default: 5] |
| `--retries <NUMBER>` | Sets number of retries if first lookup to nameserver fails [default: 0] |
| `--timeout <TIMEOUT>` | Sets timeout in seconds for responses [default: 5] |
| `-m, --resolvers-mode <MODE>` | Sets resolvers lookup mode [default: multi] [possible values: multi, uni] |
| `--wait-multiple-responses` | Waits until timeout for additional responses from nameservers |
| `--no-abort-on-error` | Does not ignore errors from nameservers |
| `--no-abort-on-timeout` | Does not ignore timeouts from nameservers |
| `--no-aborts` | Does not ignore errors and timeouts from nameservers |
| `-o, --output <FORMAT>` | Sets the output format for result presentation [default: summary] [possible values: json, summary] |
| `--output-options <OPTIONS>` | Sets output options |
| `--show-errors` | Shows error counts |
| `-q, --quiet` | Does not print anything but results |
| `--no-color` | Disables colorful output |
| `--ascii` | Uses only ASCII compatible characters for output |

### Lookup Options

| Option | Description |
| -------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--all` | Enables lookups for all record types |
| `-s`, `--service` | Parses ARG as service spec and sets record type to SRV |
| `-w`, `--whois` | Retrieves Whois information about A, AAAA, and PTR records |
| `-h`, `--help` | Prints help information |
| `-t`, `--record-type <RECORD TYPE>...` | Sets record type to lookup, will be ignored in case of IP address lookup [default: A,AAAA,CNAME,MX] [possible values: A, AAAA, ANAME, ANY, CNAME, MX, NULL, NS, PTR, SOA, SRV, TXT] |

### Discover Options

| Option | Description |
| ----------------------------------- | ------------------------------------------------------------------------------------------ |
| `-p`, `--show-partial-results` | Shows results after each lookup step |
| `-w`, `--wordlist-from-file <FILE>` | Uses wordlist from file |
| `--rnd-names-number <NUMBER>` | Sets number of random domain names to generate for wildcard resolution check [default: 3] |
| `--rnd-names-len <LEN>` | Sets length of random domain names to generate for wildcard resolution check [default: 32] |
| `-s`, `--subdomains-only` | Shows subdomains only, omitting all other discovered names |

### Check Options

| Option | Description |
| ----------------------------- | ------------------------------------------ |
| `--show-partial-results` | Shows results after each check step |
| `--show-intermediate-lookups` | Shows all lookups made by all checks |
| `--no-cnames` | Does not run CNAME lints |
| `--no-soa` | Does not run SOA check |
| `--no-spf` | Does not run SPF check |
@ -1,17 +0,0 @@
---
obj: application
repo: https://github.com/fnichol/names
rev: 2025-01-28
---

# names
Random name generator for Rust

## Usage

```
> names
selfish-change
```

Usage: `names [-n, --number] <AMOUNT>`
@ -1,31 +0,0 @@
---
obj: application
repo: https://github.com/sorairolake/qrtool
rev: 2025-01-30
---

# qrtool
qrtool is a command-line utility for encoding and decoding QR codes.

## Usage
### Encode
Usage: `qrtool encode [OPTION]… [STRING]`

| Option | Description |
| ------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-o, --output FILE` | Output the result to a file. |
| `-r, --read-from FILE` | Read input data from a file. This option conflicts with `[STRING]`. |
| `-s, --size NUMBER` | The module size in pixels. If this option is not specified, the module size is 8 when the output format is PNG or SVG, and 1 otherwise. |
| `-l, --error-correction-level LEVEL` | Error correction level. Possible values: `L` (7% of codewords can be restored), `M` (15%, the default), `Q` (25%), `H` (30%). |
| `--level LEVEL` | Alias for `-l, --error-correction-level`. |
| `-m, --margin NUMBER` | The width of the margin. If this option is not specified, the margin will be 4 for a normal QR code and 2 for a Micro QR code. |
| `-t, --type FORMAT` | The format of the output. The possible values are: `png`, `svg`, `pic`, `ansi256`, `ansi-true-color`, `ascii`, `ascii-invert`, `unicode`, `unicode-invert` |
| `--foreground COLOR` | Foreground color. COLOR takes a CSS color string. Colored output is only available when the output format is PNG, SVG or any ANSI escape sequences. Note that lossy conversion may be performed depending on the color space supported by the method to specify a color, the color depth supported by the output format, etc. Default is black. |
| `--background COLOR` | Background color. COLOR takes a CSS color string. Same constraints as `--foreground`. Default is white. |

### Decode
Usage: `qrtool decode [OPTION]… [IMAGE]`

| Option | Description |
| ------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-t, --type FORMAT` | The format of the input. If FORMAT is not specified, the format is determined based on the extension or the magic number. The possible values are: `bmp`, `dds`, `farbfeld`, `gif`, `hdr`, `ico`, `jpeg`, `openexr`, `png`, `pnm`, `qoi`, `svg`, `tga`, `tiff`, `webp`, `xbm` |
@ -1,23 +0,0 @@
---
obj: application
repo: https://github.com/wr7/refold
rev: 2025-01-30
---

# refold
refold is a command-line tool for performing text-wrapping, similar to Unix `fold`. Unlike `fold`, refold will recombine lines before performing line-wrapping, and it will automatically detect line prefixes.

## Usage
Usage: `refold [FLAGS...]`

refold reads from stdin and writes to stdout.

### Options

| Option | Description |
| ------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------- |
| `--width, -w <width>` | Sets the width to wrap at (default 80). |
| `--prefix, -p <prefix>` | Sets the prefix for each line (default: auto detect). Set to an empty string to disable prefixing entirely. |
| `--boundaries, -b, --unicode-boundaries` | Sets the split mode to "boundaries" mode (default). In boundaries mode, line wrapping may occur in-between unicode breakable characters. |
| `--spaces, -s` | Sets the split mode to "space" mode. In space mode, line wrapping may occur in-between words separated by ASCII spaces. |
| `--characters, -c, --break-words, --break` | Sets the split mode to "character" mode. In character mode, line wrapping may occur in-between any two characters. |
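The difference from plain `fold` is easy to see (the `refold` line is commented since the tool may not be installed):

```shell
# Plain fold wraps each input line independently and never rejoins short lines.
printf 'one two three four five\nsix seven\n' | fold -s -w 10
# refold first recombines the paragraph, then rewraps it at the target width:
# printf 'one two three four five\nsix seven\n' | refold -w 10
```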
@ -1,45 +0,0 @@
---
obj: application
repo: https://github.com/vschwaberow/rexturl
rev: 2025-01-30
---

# rexturl
A versatile command-line tool for parsing and manipulating URLs.

## Usage
Usage: `rexturl [OPTIONS] [URLS...]`

If no URLs are provided, rexturl will read from stdin.

### Options

| Option | Description |
| ------------------- | --------------------------------------------------------- |
| `--urls <URLS>` | Input URLs to process |
| `--scheme` | Extract and display the URL scheme |
| `--username` | Extract and display the username from the URL |
| `--host` | Extract and display the hostname |
| `--port` | Extract and display the port number |
| `--path` | Extract and display the URL path |
| `--query` | Extract and display the query string |
| `--fragment` | Extract and display the URL fragment |
| `--sort` | Sort the output |
| `--unique` | Remove duplicate entries from the output |
| `--json` | Output results in JSON format |
| `--all` | Display all URL components |
| `--custom` | Enable custom output mode |
| `--format <FORMAT>` | Custom output format [default: `{scheme}://{host}{path}`] |
| `--domain` | Extract and display the domain |

### Custom Output Format
When using `--custom` and `--format`, you can use the following placeholders:
- `{scheme}`
- `{username}`
- `{host}`
- `{domain}`
- `{port}`
- `{path}`
- `{query}`
- `{fragment}`
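Since rexturl reads stdin when no URLs are passed, it composes with other tools (the `rexturl` lines are commented since the tool may not be installed):

```shell
# Collect some URLs, then extract components from them.
printf 'https://github.com/casey/intermodal\nhttps://git.hydrar.de/mdtools/mdtools\n' > /tmp/urls.txt
# rexturl --host --unique < /tmp/urls.txt
# rexturl --custom --format '{host}{path}' < /tmp/urls.txt
```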
@ -1,7 +1,6 @@
---
obj: application
website: https://rsync.samba.org
arch-wiki: https://wiki.archlinux.org/title/Rsync
website: https://rsync.samba.org/
repo: https://github.com/WayneD/rsync
---

@ -45,3 +44,4 @@ Either `source` or `destination` can be a local folder or a remote path (`user@h
| --log-file=FILE | log what we're doing to the specified FILE |
| --partial       | keep partially transferred files |
| -P              | same as --partial --progress |
@ -1,54 +0,0 @@
---
obj: application
repo: https://gitlab.com/KodyVB/tagctl
rev: 2025-01-30
---

# tagctl
tagctl is a command-line program which can add tags to or remove tags from files.
The tags can live either in the file name or under `user.xdg.tags` in the extended attributes.

## Usage
Usage: `tagctl [OPTIONS] [FILES]...`

| Option | Description |
| ----------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-t, --tag <tag>` | Tag to add to or remove from the selected files. `%p` uses the parent directory name, `%y` the modified year, `%m` the modified month, `%d` the modified day, and `%w` the modified weekday |
| `-d, --delimiter <delimiter>` | Separator for multiple tags (default: `,`) |
| `-i, --input` | Accepts input from stdin |
| `-x, --xattr` | Adds/removes tags via xattr under `user.xdg.tags` |
| `-r, --remove` | Removes tag instead of adding |
| `-R, --remove_all` | Removes all tags |
| `-v, --verbose` | Increases verbosity of output |
| `-g, --generate_autocomplete <generate_autocomplete>` | The shell to generate auto-completion for (`bash`, `elvish`, `fish`, `zsh`) |

## Examples
**Add tag `example` to files in the current directory using file names:**
```shell
tagctl -t example "$(ls)"
ls | tagctl --input --tag example
```

**Remove tag `example` from files in the current directory using file names:**
```shell
tagctl -r --tag=example "$(ls)"
ls | tagctl --remove -it example
```

**Add tag `example` to files in the current directory using extended attributes:**
```shell
tagctl -xt example "$(ls)"
ls | tagctl --xattr --input --tag example
```

**Remove tag `example` from files in the current directory using extended attributes:**
```shell
tagctl -xr --tag=example "$(ls)"
ls | tagctl --xattr --remove -it example
```

**Add tag `example` to two sets of inputs using file names:**
```shell
find /home/user/Documents | tagctl -it "example" "$(ls)"
```
@ -1,23 +0,0 @@
---
obj: application
repo: https://github.com/sectore/timr-tui
rev: 2025-01-31
---

# timr-tui
TUI to organize your time: Pomodoro, Countdown, Timer.

## CLI
Usage: `timr-tui [OPTIONS]`

| Option   | Description                                                                                     |
| -------- | ----------------------------------------------------------------------------------------------- |
| `-c`     | Countdown time to start from. Formats: 'ss', 'mm:ss', or 'hh:mm:ss'                             |
| `-w`     | Work time to count down from. Formats: 'ss', 'mm:ss', or 'hh:mm:ss'                             |
| `-p`     | Pause time to count down from. Formats: 'ss', 'mm:ss', or 'hh:mm:ss'                            |
| `-d`     | Show deciseconds                                                                                |
| `-m`     | Mode to start with. [possible values: countdown, timer, pomodoro]                               |
| `-s`     | Style to display time with. [possible values: full, light, medium, dark, thick, cross, braille] |
| `--menu` | Open the menu                                                                                   |
| `-r`     | Reset stored values to default values                                                           |
| `-n`     | Toggle desktop notifications on or off. Experimental. [possible values: on, off]                |
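For illustration, a session might be started directly in a given mode with the flags above (the time values here are made up):

```shell
# Start in pomodoro mode: 25 minutes work, 5 minutes pause
timr-tui -m pomodoro -w 25:00 -p 5:00

# A 90-second countdown, with deciseconds shown
timr-tui -m countdown -c 1:30 -d
```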
@ -3,18 +3,16 @@ obj: application
repo: https://github.com/tmux/tmux
arch-wiki: https://wiki.archlinux.org/title/tmux
wiki: https://en.wikipedia.org/wiki/Tmux
rev: 2024-12-16
---

# tmux
tmux is a terminal multiplexer: it enables a number of terminals to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached.

# Usage
**New tmux session:**
```shell
tmux
tmux new -s name
tmux new -s mysession -n mywindow
```

**List existing sessions:**
```shell
tmux ls
```
@ -25,7 +23,6 @@ tmux ls
**Attach to a named session:**
```shell
tmux attach -t name
tmux a -t name
```

**Kill a session:**
```shell
tmux kill-session -t name
```
@ -34,30 +31,14 @@ tmux kill-session -t name

# Keybinds
- Show the time: `Ctrl-b + t`

## Sessions
- Rename current session: `Ctrl-b + $`
- Detach from a running session: `Ctrl-b + d`
- Sessions and windows overview: `Ctrl-b + w`
- Move to previous session: `Ctrl-b + (`
- Move to next session: `Ctrl-b + )`
- Switch sessions: `Ctrl-b + s`

## Windows
- Create a new window: `Ctrl-b + c`
- Rename current window: `Ctrl-b + ,`
- Go to previous window: `Ctrl-b + p`
- Go to next window: `Ctrl-b + n`
- Go to window: `Ctrl-b + [0-9]`

## Panes
- Vertical Split: `Ctrl-b + %`
- Horizontal Split: `Ctrl-b + "`
- Select Pane: `Ctrl-b + q + [num]`
- Change Pane Size: `Ctrl-b + Ctrl + [Down/Up/Left/Right]`
- Move current pane left: `Ctrl-b + {`
- Move current pane right: `Ctrl-b + }`
- Close current pane: `Ctrl-b + x`
- Switch to the next pane: `Ctrl-b + o`
- Convert pane into a window: `Ctrl-b + !`
@ -1,54 +0,0 @@
---
obj: application
repo: https://codeberg.org/chrysn/unionfarm
rev: 2025-01-30
---

# unionfarm
This is a small utility for managing symlink farms. It takes a "farm" directory and any number of "data" directories, and creates (or updates) the union (or overlay) of the data directories in the farm directory by placing symlinks to the data directories.

It is similar to:
- union mounts (overlay/overlayfs) -- but works without system privileges; it is not live, but in exchange can err out on duplicate files rather than picking the highest-ranking one

Usage: `unionfarm <FARM> [DATA]...`

## Example

```
$ tree my-photos
my-photos
├── 2018/
│   └── Rome/
│       └── ...
└── 2019/
    └── Helsinki/
        └── DSCN2305.jpg
```

Assume you have a collection of photos as above, and want to see them overlaid with a friend's photos:

```
$ tree ~friend/photos
/home/friend/photos
├── 2018/
│   └── Amsterdam/
│       └── ...
└── 2019/
    └── Helsinki/
        └── DSC_0815.jpg
```

With unionfarm, you can create a shared view on them:

```
$ unionfarm all-photos my-photos ~friend/photos
$ tree all-photos
all-photos
├── 2018/
│   ├── Amsterdam -> /home/friend/photos/2018/Amsterdam/
│   └── Rome -> ../../my-photos/2018/Rome/
└── 2019/
    └── Helsinki/
        ├── DSC_0815.jpg -> /home/friend/photos/2019/Helsinki/DSC_0815.jpg
        └── DSCN2305.jpg -> ../../../my-photos/2019/Helsinki/DSCN2305.jpg
```
@ -1,229 +0,0 @@
---
obj: application
repo: https://github.com/xo/usql
rev: 2024-12-10
---

# usql
usql is a universal command-line interface for PostgreSQL, MySQL, Oracle Database, SQLite3, Microsoft SQL Server, and many other databases, including NoSQL and non-relational databases!

usql provides a simple way to work with SQL and NoSQL databases via a command line inspired by PostgreSQL's psql. usql supports most of the core psql features, such as variables, backticks, and backslash commands, and has additional features that psql does not, such as multiple database support, copying between databases, syntax highlighting, context-based completion, and terminal graphics.

## Usage

```sh
usql [options]... [DSN]
```

DSN can be any database connection string like `sqlite:///path/to/my/file` or `postgres://user:pass@host:port/db`.
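As a quick sketch (the file path here is made up), the `-c` option runs a single command against a DSN and exits:

```shell
# Run one query against a local SQLite file and exit
usql sqlite:///tmp/demo.db -c 'select 1 + 1;'

# Same query, but emit CSV instead of an aligned table
usql --csv sqlite:///tmp/demo.db -c 'select 1 + 1;'
```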
### Options

| Option | Description |
| ----------------------------------------- | -------------------------------------------------------------------------------------- |
| `-c, --command COMMAND` | run only single command (SQL or internal) and exit |
| `-f, --file FILE` | execute commands from file and exit |
| `-w, --no-password` | never prompt for password |
| `-X, --no-init` | do not execute initialization scripts (aliases: `--no-rc`, `--no-psqlrc`, `--no-usqlrc`) |
| `-o, --out FILE` | output file |
| `-W, --password` | force password prompt (should happen automatically) |
| `-1, --single-transaction` | execute as a single transaction (if non-interactive) |
| `-v, --set NAME=VALUE` | set variable NAME to VALUE (see `\set` command; aliases: `--var`, `--variable`) |
| `-N, --cset NAME=DSN` | set named connection NAME to DSN (see `\cset` command) |
| `-P, --pset VAR=ARG` | set printing option VAR to ARG (see `\pset` command) |
| `-F, --field-separator FIELD-SEPARATOR` | field separator for unaligned and CSV output |
| `-R, --record-separator RECORD-SEPARATOR` | record separator for unaligned and CSV output (default `\n`) |
| `-T, --table-attr TABLE-ATTR` | set HTML table tag attributes (e.g., width, border) |
| `-A, --no-align` | unaligned table output mode |
| `-H, --html` | HTML table output mode |
| `-t, --tuples-only` | print rows only |
| `-x, --expanded` | turn on expanded table output |
| `-z, --field-separator-zero` | set field separator for unaligned and CSV output to zero byte |
| `-0, --record-separator-zero` | set record separator for unaligned and CSV output to zero byte |
| `-J, --json` | JSON output mode |
| `-C, --csv` | CSV output mode |
| `-G, --vertical` | vertical output mode |
| `-q, --quiet` | run quietly (no messages, only query output) |
| `--config string` | config file |

## Commands

| Command | Description |
| ---------------------------------- | ----------------------------------------------------------------------------- |
| **General:** | |
| `\q` | quit usql |
| `\quit` | alias for `\q` |
| `\drivers` | show database drivers available to usql |
| **Connection:** | |
| `\c DSN` | connect to database url |
| `\c DRIVER PARAMS...` | connect to database with driver and parameters |
| `\cset [NAME [DSN]]` | set named connection, or list all if no parameters |
| `\cset NAME DRIVER PARAMS...` | define named connection for database driver |
| `\Z` | close database connection |
| `\password [USERNAME]` | change the password for a user |
| `\conninfo` | display information about the current database connection |
| **Operating System:** | |
| `\cd [DIR]` | change the current working directory |
| `\getenv VARNAME ENVVAR` | fetch environment variable |
| `\setenv NAME [VALUE]` | set or unset environment variable |
| `\! [COMMAND]` | execute command in shell or start interactive shell |
| `\timing [on/off]` | toggle timing of commands |
| **Variables:** | |
| `\prompt [-TYPE] VAR [PROMPT]` | prompt user to set variable |
| `\set [NAME [VALUE]]` | set internal variable, or list all if no parameters |
| `\unset NAME` | unset (delete) internal variable |
| **Query Execute:** | |
| `\g [(OPTIONS)] [FILE] or ;` | execute query (and send results to file or pipe) |
| `\G [(OPTIONS)] [FILE]` | as `\g`, but forces vertical output mode |
| `\gx [(OPTIONS)] [FILE]` | as `\g`, but forces expanded output mode |
| `\gexec` | execute query and execute each value of the result |
| `\gset [PREFIX]` | execute query and store results in usql variables |
| **Query Buffer:** | |
| `\e [FILE] [LINE]` | edit the query buffer (or file) with external editor |
| `\p` | show the contents of the query buffer |
| `\raw` | show the raw (non-interpolated) contents of the query buffer |
| `\r` | reset (clear) the query buffer |
| **Input/Output:** | |
| `\copy SRC DST QUERY TABLE` | copy query from source url to table on destination url |
| `\copy SRC DST QUERY TABLE(A,...)` | copy query from source url to columns of table on destination url |
| `\echo [-n] [STRING]` | write string to standard output (`-n` for no newline) |
| `\qecho [-n] [STRING]` | write string to `\o` output stream (`-n` for no newline) |
| `\warn [-n] [STRING]` | write string to standard error (`-n` for no newline) |
| `\o [FILE]` | send all query results to file or pipe |
| **Informational:** | |
| `\d[S+] [NAME]` | list tables, views, and sequences or describe table, view, sequence, or index |
| `\da[S+] [PATTERN]` | list aggregates |
| `\df[S+] [PATTERN]` | list functions |
| `\di[S+] [PATTERN]` | list indexes |
| `\dm[S+] [PATTERN]` | list materialized views |
| `\dn[S+] [PATTERN]` | list schemas |
| `\dp[S] [PATTERN]` | list table, view, and sequence access privileges |
| `\ds[S+] [PATTERN]` | list sequences |
| `\dt[S+] [PATTERN]` | list tables |
| `\dv[S+] [PATTERN]` | list views |
| `\l[+]` | list databases |
| `\ss[+] [TABLE/QUERY] [k]` | show stats for a table or a query |
| **Formatting:** | |
| `\pset [NAME [VALUE]]` | set table output option |
| `\a` | toggle between unaligned and aligned output mode |
| `\C [STRING]` | set table title, or unset if none |
| `\f [STRING]` | show or set field separator for unaligned query output |
| `\H` | toggle HTML output mode |
| `\T [STRING]` | set HTML `<table>` tag attributes, or unset if none |
| `\t [on/off]` | show only rows |
| `\x [on/off/auto]` | toggle expanded output |
| **Transaction:** | |
| `\begin` | begin a transaction |
| `\begin [-read-only] [ISOLATION]` | begin a transaction with isolation level |
| `\commit` | commit current transaction |
| `\rollback` | rollback (abort) current transaction |

## Configuration
During its initialization phase, usql reads a standard YAML configuration file `config.yaml`. On Windows this is `%AppData%/usql/config.yaml`, on macOS this is `$HOME/Library/Application Support/usql/config.yaml`, and on Linux and other Unix systems this is normally `$HOME/.config/usql/config.yaml`.

```yml
# named connections
# name can be used instead of database url
connections:
  my_couchbase_conn: couchbase://Administrator:P4ssw0rd@localhost
  my_clickhouse_conn: clickhouse://clickhouse:P4ssw0rd@localhost
  css: cassandra://cassandra:cassandra@localhost
  fsl: flightsql://flight_username:P4ssw0rd@localhost
  gdr:
    protocol: godror
    username: system
    password: P4ssw0rd
    hostname: localhost
    port: 1521
    database: free
  ign: ignite://ignite:ignite@localhost
  mss: sqlserver://sa:Adm1nP@ssw0rd@localhost
  mym: mysql://root:P4ssw0rd@localhost
  myz: mymysql://root:P4ssw0rd@localhost
  ora: oracle://system:P4ssw0rd@localhost/free
  ore: oracle://system:P4ssw0rd@localhost:1522/db1
  pgs: postgres://postgres:P4ssw0rd@localhost
  pgx: pgx://postgres:P4ssw0rd@localhost
  vrt:
    proto: vertica
    user: vertica
    pass: vertica
    host: localhost
  sll:
    file: /path/to/mydb.sqlite3
  mdc: modernsqlite:test.db
  dkd: test.duckdb
  zzz: ["databricks", "token:dapi*****@adb-*************.azuredatabricks.net:443/sql/protocolv1/o/*********/*******"]
  zz2:
    proto: mysql
    user: "my username"
    pass: "my password!"
    host: localhost
    opts:
      opt1: "😀"

# init script
init: |
  \echo welcome to the jungle `date`
  \set SYNTAX_HL_STYLE paraiso-dark
  \set PROMPT1 '\033[32m%S%M%/%R%#\033[0m '
  \set bar test
  \set foo test
  -- \set SHOW_HOST_INFORMATION false
  -- \set SYNTAX_HL false
  \set 型示師 '本門台初埼本門台初埼'

# charts path
charts_path: charts

# defined queries
queries:
  q1:
```

### Time Formatting
Some databases support time/date columns with formatting. By default, usql formats time/date columns as RFC3339Nano; the format can be set using `\pset time FORMAT`:

```
$ usql pg://
Connected with driver postgres (PostgreSQL 13.2 (Debian 13.2-1.pgdg100+1))
Type "help" for help.

pg:postgres@=> \pset
time RFC3339Nano
pg:postgres@=> select now();
             now
-----------------------------
 2021-05-01T22:21:44.710385Z
(1 row)

pg:postgres@=> \pset time Kitchen
Time display is "Kitchen" ("3:04PM").
pg:postgres@=> select now();
   now
---------
 10:22PM
(1 row)
```

usql's time format supports any Go supported time format, or can be any standard Go const name, such as `Kitchen` above. See below for an overview of the available time constants.

#### Time Constants
The following are the time constant names available in `usql`, the corresponding time format value, and example display output:

| Constant    | Format                                | Display                             |
| ----------- | ------------------------------------: | ----------------------------------: |
| ANSIC       | `Mon Jan _2 15:04:05 2006`            | `Wed Aug  3 20:12:48 2022`          |
| UnixDate    | `Mon Jan _2 15:04:05 MST 2006`        | `Wed Aug  3 20:12:48 UTC 2022`      |
| RubyDate    | `Mon Jan 02 15:04:05 -0700 2006`      | `Wed Aug 03 20:12:48 +0000 2022`    |
| RFC822      | `02 Jan 06 15:04 MST`                 | `03 Aug 22 20:12 UTC`               |
| RFC822Z     | `02 Jan 06 15:04 -0700`               | `03 Aug 22 20:12 +0000`             |
| RFC850      | `Monday, 02-Jan-06 15:04:05 MST`      | `Wednesday, 03-Aug-22 20:12:48 UTC` |
| RFC1123     | `Mon, 02 Jan 2006 15:04:05 MST`       | `Wed, 03 Aug 2022 20:12:48 UTC`     |
| RFC1123Z    | `Mon, 02 Jan 2006 15:04:05 -0700`     | `Wed, 03 Aug 2022 20:12:48 +0000`   |
| RFC3339     | `2006-01-02T15:04:05Z07:00`           | `2022-08-03T20:12:48Z`              |
| RFC3339Nano | `2006-01-02T15:04:05.999999999Z07:00` | `2022-08-03T20:12:48.693257Z`       |
| Kitchen     | `3:04PM`                              | `8:12PM`                            |
| Stamp       | `Jan _2 15:04:05`                     | `Aug  3 20:12:48`                   |
| StampMilli  | `Jan _2 15:04:05.000`                 | `Aug  3 20:12:48.693`               |
| StampMicro  | `Jan _2 15:04:05.000000`              | `Aug  3 20:12:48.693257`            |
| StampNano   | `Jan _2 15:04:05.000000000`           | `Aug  3 20:12:48.693257000`         |
@ -1,22 +0,0 @@
---
obj: application
repo: https://github.com/ahamlinman/xt
rev: 2025-01-30
---

# xt
xt is a cross-format translator for JSON, MessagePack, TOML, and YAML.

## Usage
Usage: `xt [-f format] [-t format] [file ...]`

| Option | Description |
|---|---|
| `-f format` | Skip detection and convert every input from the given format |
| `-t format` | Convert to the given format (default: `json`) |

## Formats
- `json`, `j`
- `msgpack`, `m`
- `toml`, `t`
- `yaml`, `y`
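For illustration, converting between the formats above might look like this (the file names are made up):

```shell
# Detect the input format and convert to JSON (the default target)
xt Cargo.toml

# Convert YAML from stdin to TOML explicitly
xt -f yaml -t toml < config.yaml
```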
@ -1,80 +0,0 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/SDDM
wiki: https://en.wikipedia.org/wiki/Simple_Desktop_Display_Manager
repo: https://github.com/sddm/sddm
rev: 2024-12-18
---

# SDDM
The Simple Desktop Display Manager (SDDM) is a display manager. It is the recommended display manager for the KDE Plasma and LXQt desktop environments.

## Configuration
The default configuration file for SDDM can be found at `/usr/lib/sddm/sddm.conf.d/default.conf`. For any changes, create configuration file(s) in `/etc/sddm.conf.d/`.

Everything should work out of the box, since Arch Linux uses systemd and SDDM defaults to using `systemd-logind` for session management.

### Autologin
SDDM supports automatic login through its configuration file, for example (`/etc/sddm.conf.d/autologin.conf`):

```ini
[Autologin]
User=john
Session=plasma

# Optionally always relogin the user on logout
Relogin=true
```

This configuration causes a KDE Plasma session to be started for user `john` when the system is booted. Available session types can be found in `/usr/share/xsessions/` for X and in `/usr/share/wayland-sessions/` for Wayland.

To autologin into KDE Plasma while simultaneously locking the session (e.g. to allow autostarted apps to warm up), create a systemd user unit drop-in to pass `--lockscreen` in `plasma-ksmserver.service` (`~/.config/systemd/user/plasma-ksmserver.service.d/override.conf`):

```ini
[Service]
ExecStart=
ExecStart=/usr/bin/ksmserver --lockscreen
```

### Theme settings
Theme settings can be changed in the `[Theme]` section. If you use Plasma's system settings, themes may show previews.

Set to `breeze` for the default Plasma theme.

#### Current theme
Set the current theme through the `Current` value, e.g. `Current=archlinux-simplyblack`.

#### Editing themes
The default SDDM theme directory is `/usr/share/sddm/themes/`. You can add your custom-made themes to that directory under a separate subdirectory. Note that SDDM requires these subdirectory names to be the same as the theme names. Study the files installed to modify or create your own theme.

#### Customizing a theme
To override settings in the `theme.conf` configuration file, create a custom `theme.conf.user` file in the same directory. For example, to change the theme's background (`/usr/share/sddm/themes/name/theme.conf.user`):

```ini
[General]
background=/path/to/background.png
```

#### Testing (previewing) a theme
You can preview an SDDM theme if needed. This is especially helpful if you are not sure how the theme would look when selected, or if you just edited a theme and want to see how it looks without logging out. You can run something like this:

```sh
sddm-greeter-qt6 --test-mode --theme /usr/share/sddm/themes/breeze
```

This should open a new window for every monitor you have connected and show a preview of the theme.

#### Mouse cursor
To set the mouse cursor theme, set `CursorTheme` to your preferred cursor theme.

Valid Plasma mouse cursor theme names are `breeze_cursors`, `Breeze_Snow` and `breeze-dark`.

### Keyboard Layout
To set the keyboard layout with SDDM, edit `/usr/share/sddm/scripts/Xsetup`:

```
#!/bin/sh
# Xsetup - run as root before the login dialog appears
setxkbmap de,us
```
@ -1,66 +0,0 @@
---
obj: application
repo: https://github.com/neuromeow/licensit
rev: 2025-01-31
---

# licensit
`licensit` is a command-line tool to create LICENSE files.

### Supported licenses

- GNU Affero General Public License v3.0 (AGPL-3.0)
- Apache License 2.0 (Apache-2.0)
- BSD 2-Clause “Simplified” License (BSD-2-Clause)
- BSD 3-Clause “New” or “Revised” License (BSD-3-Clause)
- Boost Software License 1.0 (BSL-1.0)
- Creative Commons Zero v1.0 Universal (CC0-1.0)
- Eclipse Public License 2.0 (EPL-2.0)
- GNU General Public License v2.0 (GPL-2.0)
- GNU General Public License v3.0 (GPL-3.0)
- GNU Lesser General Public License v2.1 (LGPL-2.1)
- MIT License (MIT)
- Mozilla Public License 2.0 (MPL-2.0)
- The Unlicense (Unlicense)

## Usage
`licensit` simplifies the process of creating and managing license files for your projects.

### Listing Available Licenses
```
licensit list
```

Shows all supported licenses.

### Showing License Content
To view the content of a specific license with the author and year filled in:

```
licensit show [LICENSE] [--user USER] [--year YEAR]
```

- `[LICENSE]`: The ID of the license you want to display (for example, `mit`, `apache-2.0`)
- `--user [USER]`: Specifies the license holder's name. If not provided, `licensit` will use the following sources in order to determine the user name:
  - `LICENSE_AUTHOR` environment variable
  - `user.name` entry in the `$HOME/.gitconfig` file
  - Username associated with the current effective user ID
- `--year [YEAR]`: Sets the year during which the license is effective. Defaults to the current year if not specified

To display just the template of a license (without any specific user or year information):

```
licensit show [LICENSE] --template
```

- `[LICENSE]`: The ID of the license whose template you want to display (for example, `mit`, `apache-2.0`)
- `--template`: Displays the license template with placeholders for the user and year. This option cannot be used with `--user` or `--year`

### Adding a License to Your Project
To add a license file to your current directory:

```
licensit add [LICENSE] [--user USER] [--year YEAR]
```

Creates a `LICENSE` file in the current directory with the specified details.
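Putting the subcommands above together, a typical run might look like this (the holder name and year are illustrative):

```shell
# Preview the MIT license text with a custom holder and year
licensit show mit --user "John Doe" --year 2025

# Write it to ./LICENSE in the current project
licensit add mit --user "John Doe" --year 2025
```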
@ -1,127 +0,0 @@
---
obj: application
repo: https://github.com/fornwall/rust-script
website: https://rust-script.org
---

# RustScript
With rust-script, Rust files and expressions can be executed just like a shell or Python script. Features include:
- Caching compiled artifacts for speed.
- Reading Cargo manifests embedded in Rust scripts.
- Supporting executable Rust scripts via Unix shebangs and Windows file associations.
- Using expressions as stream filters (i.e. for use in command pipelines).
- Running unit tests and benchmarks from scripts.

## Scripts
The primary use for rust-script is running Rust source files as scripts. For example:

```
$ echo 'println!("Hello, World!");' > hello.rs
$ rust-script hello.rs
Hello, World!
```

Under the hood, a Cargo project will be generated and built (with the Cargo output hidden unless compilation fails or the `-c/--cargo-output` option is used). The first invocation of the script will be slower as the script is compiled; subsequent invocations of unmodified scripts will be fast as the built executable is cached.

As seen from the above example, using a `fn main() {}` function is not required. If not present, the script file will be wrapped in a `fn main() { ... }` block.

rust-script will look for embedded dependency and manifest information in the script, as shown by the two equivalent `now.rs` variants below:

```rust
#!/usr/bin/env rust-script
//! This is a regular crate doc comment, but it also contains a partial
//! Cargo manifest. Note the use of a *fenced* code block, and the
//! `cargo` "language".
//!
//! ```cargo
//! [dependencies]
//! time = "0.1.25"
//! ```
fn main() {
    println!("{}", time::now().rfc822z());
}
```

```rust
// cargo-deps: time="0.1.25"
// You can also leave off the version number, in which case, it's assumed
// to be "*". Also, the `cargo-deps` comment *must* be a single-line
// comment, and it *must* be the first thing in the file, after the
// shebang.
// Multiple dependencies should be separated by commas:
// cargo-deps: time="0.1.25", libc="0.2.5"
fn main() {
    println!("{}", time::now().rfc822z());
}
```

The output from running one of the above scripts may look something like:

```
$ rust-script now
Wed, 28 Oct 2020 00:38:45 +0100
```

## Useful command-line arguments

- `--bench`: Compile and run benchmarks. Requires a nightly toolchain.
- `--debug`: Build a debug executable, not an optimised one.
- `--force`: Force the script to be rebuilt. Useful if you want to force a recompile with a different toolchain.
- `--package`: Generate the Cargo package and print the path to it, but don't compile or run it. Effectively "unpacks" the script into a Cargo package.
- `--test`: Compile and run tests.
- `--wrapper`: Add a wrapper around the executable. Can be used for debugging with e.g. `rust-script --debug --wrapper rust-lldb my-script.rs` or benchmarking with `rust-script --wrapper "hyperfine --runs 100" my-script.rs`

## Executable Scripts
On Unix systems, you can use `#!/usr/bin/env rust-script` as a shebang line in a Rust script. This allows you to execute script files (which don't need to have the `.rs` file extension) directly.

If you are using Windows, you can associate the `.ers` extension (executable Rust - a renamed `.rs` file) with rust-script. This allows you to execute Rust scripts simply by naming them like any other executable or script.

This can be done using the `rust-script --install-file-association` command. Uninstall the file association with `rust-script --uninstall-file-association`.

If you want to make a script usable across platforms, use both a shebang line and give the file a `.ers` file extension.

## Expressions
Using the `-e/--expr` option, a Rust expression can be evaluated directly, with dependencies (if any) added using `-d/--dep`:

```
$ rust-script -e '1+2'
3
$ rust-script --dep time --expr "time::OffsetDateTime::now_utc().format(time::Format::Rfc3339).to_string()"
"2020-10-28T11:42:10+00:00"
$ # Use a specific version of the time crate (instead of default latest):
$ rust-script --dep time=0.1.38 -e "time::now().rfc822z().to_string()"
"2020-10-28T11:42:10+00:00"
```

The code given is embedded into a block expression, evaluated, and printed out using the Debug formatter (i.e. `{:?}`).

## Filters
You can use rust-script to write a quick filter by specifying a closure to be called for each line read from stdin, like so:

```
$ cat now.ers | rust-script --loop \
    "let mut n=0; move |l| {n+=1; println!(\"{:>6}: {}\",n,l.trim_end())}"
     1: // cargo-deps: time="0.1.25"
     3: fn main() {
     4:     println!("{}", time::now().rfc822z());
     5: }
```

You can achieve a similar effect to the above by using the `--count` flag, which causes the line number to be passed as a second argument to your closure:

```
$ cat now.ers | rust-script --count --loop \
    "|l,n| println!(\"{:>6}: {}\", n, l.trim_end())"
     1: // cargo-deps: time="0.1.25"
     2: fn main() {
     3:     println!("{}", time::now().rfc822z());
     4: }
```

## Environment Variables
The following environment variables are provided to scripts by rust-script:

- `$RUST_SCRIPT_BASE_PATH`: the base path used by rust-script to resolve relative dependency paths. Note that this is not necessarily the same as either the working directory, or the directory in which the script is being compiled.
- `$RUST_SCRIPT_PKG_NAME`: the generated package name of the script.
- `$RUST_SCRIPT_SAFE_NAME`: the file name of the script (sans file extension) being run. For scripts, this is derived from the script's filename. May also be `expr` or `loop` for those invocations.
- `$RUST_SCRIPT_PATH`: absolute path to the script being run, assuming one exists. Set to the empty string for expressions.
@ -1,7 +1,5 @@
---
obj: application
repo: https://git.launchpad.net/ufw/
arch-wiki: https://wiki.archlinux.org/title/Uncomplicated_Firewall
---

# ufw
@ -19,134 +17,19 @@ The next line is only needed _once_ the first time you install the package:
|
|||
ufw enable
|
||||
```
|
||||
|
||||
**See status:**
|
||||
See status:
|
||||
```shell
|
||||
ufw status
|
||||
```
|
||||
|
||||
**Enable/Disable:**
|
||||
Enable/Disable
|
||||
```shell
|
||||
ufw enable
|
||||
ufw disable
|
||||
```
|
||||
|
||||
**Allow/Deny:**
|
||||
Allow/Deny ports
|
||||
```shell
|
||||
ufw allow <app|port>
|
||||
ufw deny <app|port>
|
||||
|
||||
ufw allow from <CIDR>
|
||||
ufw deny from <CIDR>
|
||||
```
|
||||
|
||||
## Forward policy
|
||||
Users needing to run a VPN such as OpenVPN or WireGuard can adjust the `DEFAULT_FORWARD_POLICY` variable in `/etc/default/ufw` from a value of `DROP` to `ACCEPT` to forward all packets regardless of the settings of the user interface. To forward for a specific interface like `wg0`, user can add the following line in the filter block
|
||||
|
||||
```sh
|
||||
# /etc/ufw/before.rules
|
||||
|
||||
-A ufw-before-forward -i wg0 -j ACCEPT
|
||||
-A ufw-before-forward -o wg0 -j ACCEPT
|
||||
```
|
||||
|
||||
You may also need to uncomment
|
||||
|
||||
```sh
|
||||
# /etc/ufw/sysctl.conf
|
||||
|
||||
net/ipv4/ip_forward=1
|
||||
net/ipv6/conf/default/forwarding=1
|
||||
net/ipv6/conf/all/forwarding=1
|
||||
```
|
||||

## Adding other applications
The ufw package comes with some defaults based on the default ports of many common daemons and programs. Inspect the options by looking in the `/etc/ufw/applications.d` directory or by listing them in the program itself:

```sh
ufw app list
```

If you are running any of the applications on a non-standard port, it is recommended to simply create `/etc/ufw/applications.d/custom` containing the needed data, using the defaults as a guide.

> **Warning**: If you modify any of the package-provided rule sets, these will be overwritten the first time the ufw package is updated. This is why custom app definitions need to reside in a file not owned by the package, as recommended above!

Example, deluge with custom tcp ports that range from 20202-20205:

```ini
[Deluge-my]
title=Deluge
description=Deluge BitTorrent client
ports=20202:20205/tcp
```

Should you require both tcp and udp ports for the same application, simply separate them with a pipe as shown; this app opens tcp ports 10000-10002 and udp port 10003:

```ini
ports=10000:10002/tcp|10003/udp
```

One can also use a comma to define ports if a range is not desired. This example opens tcp ports 10000-10002 (inclusive) and udp ports 10003 and 10009:

```ini
ports=10000:10002/tcp|10003,10009/udp
```

## Deleting applications
Drawing on the Deluge/Deluge-my example above, the following will remove the standard Deluge rules and replace them with the Deluge-my rules:

```sh
ufw delete allow Deluge
ufw allow Deluge-my
```

## Blacklisting IP addresses
It might be desirable to add IP addresses to a blacklist, which is easily achieved by editing `/etc/ufw/before.rules` and inserting an iptables `DROP` line at the bottom of the file, right above the "COMMIT" word.

```sh
# /etc/ufw/before.rules

...
## blacklist section
# block just 199.115.117.99
-A ufw-before-input -s 199.115.117.99 -j DROP
# block 184.105.*.*
-A ufw-before-input -s 184.105.0.0/16 -j DROP

# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT
```

## Rate limiting with ufw
ufw has the ability to deny connections from an IP address that has attempted to initiate 6 or more connections in the last 30 seconds. Consider using this option for services such as SSH.

Using the above basic configuration, to enable rate limiting simply replace the `allow` parameter with the `limit` parameter. The new rule will then replace the previous one.

```sh
ufw limit SSH
```

## Disable remote ping
Change `ACCEPT` to `DROP` in the following lines:

```sh
# /etc/ufw/before.rules

# ok icmp codes
...
-A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT
```

If you use IPv6, related rules are in `/etc/ufw/before6.rules`.

## Disable UFW logging
Disabling logging may be useful to stop UFW filling up the kernel (dmesg) and message logs:

```sh
ufw logging off
```

## UFW and Docker
Docker in standard mode writes its own iptables rules and ignores ufw's, which can lead to security issues. A solution can be found at https://github.com/chaifeng/ufw-docker.

## GUI frontends
If you are using KDE Plasma, you can go to `Wi-Fi & Networking > Firewall` to access and adjust firewall configurations, given `plasma-firewall` is installed.

---
arch-wiki: https://wiki.archlinux.org/title/PKGBUILD
obj: concept
rev: 2024-12-19
---

# PKGBUILD

A `PKGBUILD` is a shell script containing the build information required by [Arch Linux](../../../linux/Arch%20Linux.md) packages. [Arch Wiki](https://wiki.archlinux.org/title/PKGBUILD)

Packages in [Arch Linux](../../../linux/Arch%20Linux.md) are built using the [makepkg](makepkg.md) utility. When [makepkg](makepkg.md) is run, it searches for a `PKGBUILD` file in the current directory and follows the instructions therein to either compile or otherwise acquire the files to build a package archive (`pkgname.pkg.tar.zst`). The resulting package contains binary files and installation instructions, readily installable with [pacman](Pacman.md).

Mandatory variables are `pkgname`, `pkgver`, `pkgrel`, and `arch`. `license` is not strictly necessary to build a package, but is recommended for any `PKGBUILD` shared with others, as [makepkg](makepkg.md) will produce a warning if not present.

## Example
PKGBUILD:
```sh
# Maintainer: User <mail>

package() {
    cd "$pkgname"
    install -Dm755 ./app "$pkgdir/usr/bin/app"
}
```

## Directives
The following is a list of standard options and directives available for use in a `PKGBUILD`. These are all understood and interpreted by `makepkg`, and most of them will be directly transferred to the built package.

If you need to create any custom variables for use in your build process, it is recommended to prefix their name with an `_` (underscore). This will prevent any possible name clashes with internal `makepkg` variables. For example, to store the base kernel version in a variable, use something similar to `$_basekernver`.

### Name and Version

#### `pkgname`
Either the name of the package or an array of names for split packages.
Valid characters for members of this array are alphanumerics and any of the following characters: `@ . _ + -`. Additionally, names are not allowed to start with hyphens or dots.

#### `pkgver`
The version of the software as released from the author (e.g., `2.7.1`). The variable is not allowed to contain colons, forward slashes, hyphens or whitespace.

The `pkgver` variable can be automatically updated by providing a `pkgver()` function in the `PKGBUILD` that outputs the new package version. This is run after downloading and extracting the sources and running the `prepare()` function (if present), so it can use those files in determining the new `pkgver`. This is most useful when used with sources from version control systems.
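
A common convention for VCS packages, assuming a git source checked out into a directory named after the package, is to derive the version from the commit count and short hash (a sketch, not the only valid scheme):

```sh
# Hypothetical pkgver() for a git-based package:
# produces r<number of commits>.<short commit hash>
pkgver() {
    cd "$pkgname"
    printf "r%s.%s" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
}
```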

#### `pkgrel`
This is the release number specific to the distribution. This allows package maintainers to make updates to the package’s configure flags, for example. This is typically set to `1` for each new upstream software release and incremented for intermediate `PKGBUILD` updates. The variable is a positive integer, with an optional subrelease level specified by adding another positive integer separated by a period (i.e. in the form `x.y`).

#### `epoch`
Used to force the package to be seen as newer than any previous versions with a lower epoch, even if the version number would normally not trigger such an upgrade. This value is required to be a positive integer; the default value if left unspecified is `0`. This is useful when the version numbering scheme of a package changes (or is alphanumeric), breaking normal version comparison logic.

### Generic

#### `pkgdesc`
This should be a brief description of the package and its functionality. Try to keep the description to one line of text and to not use the package’s name.

#### `url`
This field contains a URL that is associated with the software being packaged. This is typically the project’s web site.

#### `license` (array)
This field specifies the license(s) that apply to the package. If multiple licenses are applicable, list all of them: `license=('GPL' 'FDL')`.

#### `arch` (array)
Defines on which architectures the given package is available (e.g., `arch=('i686' 'x86_64')`). Packages that contain no architecture specific files should use `arch=('any')`. Valid characters for members of this array are alphanumerics and `_`.

#### `groups` (array)
An array of symbolic names that represent groups of packages, allowing you to install multiple packages by requesting a single target. For example, one could install all KDE packages by installing the kde group.

### Dependencies

#### `depends` (array)
An array of packages this package depends on to run. Entries in this list should be surrounded with single quotes and contain at least the package name. Entries can also include a version requirement of the form `name<>version`, where `<>` is one of five comparisons: `>=` (greater than or equal to), `<=` (less than or equal to), `=` (equal to), `>` (greater than), or `<` (less than).

If the dependency name appears to be a library (ends with `.so`), `makepkg` will try to find a binary that depends on the library in the built package and append the version needed by the binary. Appending the version yourself disables automatic detection.

Additional architecture-specific depends can be added by appending an underscore and the architecture name e.g., `depends_x86_64=()`.
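
As a sketch, a `depends` array mixing plain, versioned, and library entries (package names here are only illustrative) could look like:

```sh
depends=('glibc' 'gtk3>=3.24' 'libcurl.so')
depends_x86_64=('somelib')  # hypothetical architecture-specific dependency
```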

#### `makedepends` (array)
An array of packages this package depends on to build but are not needed at runtime. Packages in this list follow the same format as `depends`.

Additional architecture-specific `makedepends` can be added by appending an underscore and the architecture name e.g., `makedepends_x86_64=()`.

#### `checkdepends` (array)
An array of packages this package depends on to run its test suite but are not needed at runtime. Packages in this list follow the same format as `depends`. These dependencies are only considered when the `check()` function is present and is to be run by `makepkg`.

Additional architecture-specific checkdepends can be added by appending an underscore and the architecture name e.g., `checkdepends_x86_64=()`.

#### `optdepends` (array)
An array of packages (and accompanying reasons) that are not essential for base functionality, but may be necessary to make full use of the contents of this package. optdepends are currently for informational purposes only and are not utilized by pacman during dependency resolution. Packages in this list follow the same format as `depends`, with an optional description appended. The format for specifying optdepends descriptions is:

```shell
optdepends=('python: for library bindings')
```

Additional architecture-specific optdepends can be added by appending an underscore and the architecture name e.g., `optdepends_x86_64=()`.

### Package Relations

#### `provides` (array)
An array of “virtual provisions” this package provides. This allows a package to provide dependencies other than its own package name. For example, the `dcron` package can provide `cron`, which allows packages to depend on `cron` rather than `dcron` OR `fcron`.

Versioned provisions are also possible, in the `name=version` format. For example, `dcron` can provide `cron=2.0` to satisfy the `cron>=2.0` dependency of other packages. Provisions involving the `>` and `<` operators are invalid as only specific versions of a package may be provided.

If the provision name appears to be a library (ends with `.so`), makepkg will try to find the library in the built package and append the correct version. Appending the version yourself disables automatic detection.

Additional architecture-specific provides can be added by appending an underscore and the architecture name e.g., `provides_x86_64=()`.

#### `conflicts` (array)
An array of packages that will conflict with this package (i.e. they cannot both be installed at the same time). This directive follows the same format as `depends`. Versioned conflicts are supported using the operators as described in `depends`.

Additional architecture-specific conflicts can be added by appending an underscore and the architecture name e.g., `conflicts_x86_64=()`.

#### `replaces` (array)
An array of packages this package should replace. This can be used to handle renamed/combined packages. For example, if the `j2re` package is renamed to `jre`, this directive allows future upgrades to continue as expected even though the package has moved. Versioned replaces are supported using the operators as described in `depends`.

Sysupgrade is currently the only pacman operation that utilizes this field. A normal sync or upgrade will not use its value.

Additional architecture-specific replaces can be added by appending an underscore and the architecture name e.g., `replaces_x86_64=()`.

### Other

#### `backup` (array)
An array of file names, without preceding slashes, that should be backed up if the package is removed or upgraded. This is commonly used for packages placing configuration files in `/etc`.

#### `options` (array)
This array allows you to override some of makepkg’s default behavior when building packages. To set an option, just include the option name in the `options` array. To reverse the default behavior, place an `!` at the front of the option. Only specify the options you specifically want to override, the rest will be taken from `makepkg.conf`.

| Option       | Description |
| ------------ | ----------- |
| `strip`      | Strip symbols from binaries and libraries. If you frequently use a debugger on programs or libraries, it may be helpful to disable this option. |
| `docs`       | Save doc directories. If you wish to delete doc directories, specify `!docs` in the array. |
| `libtool`    | Leave libtool (`.la`) files in packages. Specify `!libtool` to remove them. |
| `staticlibs` | Leave static library (`.a`) files in packages. Specify `!staticlibs` to remove them (if they have a shared counterpart). |
| `emptydirs`  | Leave empty directories in packages. |
| `zipman`     | Compress man and info pages with gzip. |
| `ccache`     | Allow the use of ccache during `build()`. More useful in its negative form `!ccache` with select packages that have problems building with ccache. |
| `distcc`     | Allow the use of distcc during `build()`. More useful in its negative form `!distcc` with select packages that have problems building with distcc. |
| `buildflags` | Allow the use of user-specific buildflags (`CPPFLAGS`, `CFLAGS`, `CXXFLAGS`, `LDFLAGS`) during `build()` as specified in `makepkg.conf`. More useful in its negative form `!buildflags` with select packages that have problems building with custom buildflags. |
| `makeflags`  | Allow the use of user-specific makeflags during `build()` as specified in `makepkg.conf`. More useful in its negative form `!makeflags` with select packages that have problems building with custom makeflags such as `-j2`. |
| `debug`      | Add the user-specified debug flags (`DEBUG_CFLAGS`, `DEBUG_CXXFLAGS`) to their counterpart buildflags as specified in `makepkg.conf`. When used in combination with the `strip` option, a separate package containing the debug symbols is created. |
| `lto`        | Enable building packages using link time optimization. Adds `-flto` to both `CFLAGS` and `CXXFLAGS`. |
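
For instance, a `PKGBUILD` that keeps symbols unstripped and drops doc directories might set (an illustrative selection, not a recommendation):

```sh
options=('!strip' '!docs' 'zipman')
```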

#### `install`
Specifies a special install script that is to be included in the package. This file should reside in the same directory as the `PKGBUILD` and will be copied into the package by `makepkg`. It does not need to be included in the source array (e.g., `install=$pkgname.install`).

Pacman has the ability to store and execute a package-specific script when it installs, removes, or upgrades a package. This allows a package to configure itself after installation and perform an opposite action upon removal.

The exact time the script is run varies with each operation, and should be self-explanatory. Note that during an upgrade operation, none of the install or remove functions will be called.

Scripts are passed either one or two “full version strings”, where a full version string is either `pkgver-pkgrel` or `epoch:pkgver-pkgrel`, if `epoch` is non-zero.

- `pre_install`: Run right before files are extracted. One argument is passed: new package full version string.
- `post_install`: Run right after files are extracted. One argument is passed: new package full version string.
- `pre_upgrade`: Run right before files are extracted. Two arguments are passed in this order: new package full version string, old package full version string.
- `post_upgrade`: Run after files are extracted. Two arguments are passed in this order: new package full version string, old package full version string.
- `pre_remove`: Run right before files are removed. One argument is passed: old package full version string.
- `post_remove`: Run right after files are removed. One argument is passed: old package full version string.

To use this feature, create a file such as `pkgname.install` and put it in the same directory as the `PKGBUILD` script. Then use the install directive: `install=pkgname.install`
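
A minimal sketch of such a script (the package name and messages are purely illustrative):

```sh
# pkgname.install
post_install() {
    echo "==> Remember to enable the service: systemctl enable pkgname"
}

post_upgrade() {
    # Reuse the install logic; $1 is the new full version string
    post_install "$1"
}

post_remove() {
    echo "==> Configuration in /etc/pkgname was kept as .pacsave"
}
```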

#### `changelog`
Specifies a changelog file that is to be included in the package. The changelog file should end in a single newline. This file should reside in the same directory as the `PKGBUILD` and will be copied into the package by `makepkg`. It does not need to be included in the source array (e.g., `changelog=$pkgname.changelog`).

### Sources

#### `source` (array)
An array of source files required to build the package. Source files must either reside in the same directory as the `PKGBUILD`, or be a fully-qualified URL that `makepkg` can use to download the file. To simplify the maintenance of `PKGBUILDs`, use the `$pkgname` and `$pkgver` variables when specifying the download location, if possible. Compressed files will be extracted automatically unless found in the `noextract` array described below.

Additional architecture-specific sources can be added by appending an underscore and the architecture name e.g., `source_x86_64=()`. There must be a corresponding integrity array with checksums, e.g. `cksums_x86_64=()`.

It is also possible to change the name of the downloaded file, which is helpful with weird URLs and for handling multiple source files with the same name. The syntax is: `source=('filename::url')`.

Files in the source array with extensions `.sig`, `.sign`, or `.asc` are recognized by makepkg as PGP signatures and will be automatically used to verify the integrity of the corresponding source file.
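
Combining these features, a hypothetical source array with a renamed tarball, its detached signature, and a local patch might read (the URLs are placeholders):

```sh
source=("$pkgname-$pkgver.tar.gz::https://example.com/download/v$pkgver.tar.gz"
        "$pkgname-$pkgver.tar.gz.asc::https://example.com/download/v$pkgver.tar.gz.asc"
        'local.patch')
```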

#### `noextract` (array)
An array of file names corresponding to those from the source array. Files listed here will not be extracted with the rest of the source files. This is useful for packages that use compressed data directly.

#### `validpgpkeys` (array)
An array of PGP fingerprints. If this array is non-empty, `makepkg` will only accept signatures from the keys listed here and will ignore the trust values from the keyring. If the source file was signed with a subkey, `makepkg` will still use the primary key for comparison.

Only full fingerprints are accepted. They must be uppercase and must not contain whitespace characters.

### Integrity

#### `cksums` (array)
This array contains CRC checksums for every source file specified in the source array (in the same order). `makepkg` will use this to verify source file integrity during subsequent builds. If `SKIP` is put in the array in place of a normal hash, the integrity check for that source file will be skipped. To easily generate cksums, run `makepkg -g >> PKGBUILD`. If desired, move the cksums line to an appropriate location. Note that checksums generated by `makepkg -g` should be verified using checksum values provided by the software developer.

#### `md5sums`, `sha1sums`, `sha224sums`, `sha256sums`, `sha384sums`, `sha512sums`, `b2sums` (arrays)
Alternative integrity checks that `makepkg` supports; these all behave similarly to the cksums option described above. To enable use and generation of these checksums, be sure to set up the `INTEGRITY_CHECK` option in `makepkg.conf`.

## Packaging Functions
In addition to the above directives, `PKGBUILDs` require a set of functions that provide instructions to build and install the package. As a minimum, the `PKGBUILD` must contain a `package()` function which installs all the package’s files into the packaging directory, with optional `prepare()`, `build()`, and `check()` functions being used to create those files from source.

This is directly sourced and executed by `makepkg`, so anything that Bash or the system has available is available for use here. Be sure any exotic commands used are covered by the `makedepends` array.

If you create any variables of your own in any of these functions, it is recommended to use the Bash `local` keyword to scope the variable to inside the function.

### `package()` Function
The `package()` function is used to install files into the directory that will become the root directory of the built package and is run after all the optional functions listed below. The packaging stage is run using `fakeroot` to ensure correct file permissions in the resulting package. All other functions will be run as the user calling `makepkg`. This function is run inside `$srcdir`.

### `verify()` Function
An optional `verify()` function can be specified to implement arbitrary source authentication. The function should return a non-zero exit code when verification fails. This function is run before sources are extracted. This function is run inside `$startdir`.

### `prepare()` Function
An optional `prepare()` function can be specified in which operations to prepare the sources for building, such as patching, are performed. This function is run after the source extraction and before the `build()` function. The `prepare()` function is skipped when source extraction is skipped. This function is run inside `$srcdir`.

### `build()` Function
The optional `build()` function is used to compile and/or adjust the source files in preparation to be installed by the `package()` function. This function is run inside `$srcdir`.

### `check()` Function
An optional `check()` function can be specified in which a package’s test-suite may be run. This function is run between the `build()` and `package()` functions. Be sure any exotic commands used are covered by the `checkdepends` array. This function is run inside `$srcdir`.
@ -1,9 +1,6 @@
|
|||
---
|
||||
obj: application
|
||||
arch-wiki: https://wiki.archlinux.org/title/Pacman
|
||||
rev: 2025-01-08
|
||||
---
|
||||
|
||||
# Pacman
|
||||
Pacman is the default [Arch Linux](../../../linux/Arch%20Linux.md) Package Manager
|
||||
|
||||
|

List explicitly installed packages:
```shell
pacman -Qe
```

List of packages owning a file/dir:
```shell
pacman -Qo /path/to/file
```

List orphan packages (installed as dependencies and not required anymore):
```shell
pacman -Qdt
```

List installed packages:
```shell
pacman -Q
```

Empty the entire pacman cache:
```shell
pacman -Scc
```

Read changelog of package:
```shell
pacman -Qc pkgname
```
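
Orphans listed by `pacman -Qdt` can also be removed in one go by feeding the quiet query output back into a remove operation (the usual idiom; note it exits with an error if there are no orphans):

```shell
pacman -Rns $(pacman -Qdtq)
```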

### File Conflicts
When pacman removes a package that has a configuration file, it normally creates a backup copy of that configuration file and appends `.pacsave` to the name of the file. Likewise, when pacman upgrades a package which includes a new configuration file created by the maintainer differing from the currently installed file, it saves a `.pacnew` file with the new configuration. pacman provides notice when these files are written.
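
Assuming the default paths, leftover files can be located afterwards with a simple search (or handled with the `pacdiff` utility from `pacman-contrib`):

```shell
# List any .pacnew / .pacsave files under /etc, suppressing permission noise
find /etc -name '*.pacnew' -o -name '*.pacsave' 2>/dev/null
```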

## Configuration
Pacman, using libalpm, will attempt to read `pacman.conf` each time it is invoked. This configuration file is divided into sections or repositories. Each section defines a package repository that pacman can use when searching for packages in `--sync` mode. The exception to this is the `[options]` section, which defines global options.

```ini
# /etc/pacman.conf

[options]
# Set the default root directory for pacman to install to.
# This option is used if you want to install a package on a temporary mounted partition which is "owned" by another system, or for a chroot install.
# NOTE: If database path or log file are not specified on either the command line or in pacman.conf(5), their default location will be inside this root path.
RootDir = /path/to/root/dir

# Overrides the default location of the toplevel database directory.
# The default is /var/lib/pacman/.
# Most users will not need to set this option.
# NOTE: if specified, this is an absolute path and the root path is not automatically prepended.
DBPath = /path/to/db/dir

# Overrides the default location of the package cache directory.
# The default is /var/cache/pacman/pkg/.
# Multiple cache directories can be specified, and they are tried in the order they are listed in the config file.
# If a file is not found in any cache directory, it will be downloaded to the first cache directory with write access.
# NOTE: this is an absolute path, the root path is not automatically prepended.
CacheDir = /path/to/cache/dir

# Add directories to search for alpm hooks in addition to the system hook directory (/usr/share/libalpm/hooks/).
# The default is /etc/pacman.d/hooks.
# Multiple directories can be specified with hooks in later directories taking precedence over hooks in earlier directories.
# NOTE: this is an absolute path, the root path is not automatically prepended. For more information on the alpm hooks, see alpm-hooks(5).
HookDir = /path/to/hook/dir

# Overrides the default location of the directory containing configuration files for GnuPG.
# The default is /etc/pacman.d/gnupg/.
# This directory should contain two files: pubring.gpg and trustdb.gpg.
# pubring.gpg holds the public keys of all packagers. trustdb.gpg contains a so-called trust database, which specifies that the keys are authentic and trusted.
# NOTE: this is an absolute path, the root path is not automatically prepended.
GPGDir = /path/to/gpg/dir

# Overrides the default location of the pacman log file.
# The default is /var/log/pacman.log.
# This is an absolute path and the root directory is not prepended.
LogFile = /path/to/log/file

# If a user tries to --remove a package that's listed in HoldPkg, pacman will ask for confirmation before proceeding. Shell-style glob patterns are allowed.
HoldPkg = package ...

# Instructs pacman to ignore any upgrades for this package when performing a --sysupgrade. Shell-style glob patterns are allowed.
IgnorePkg = package ...

# Instructs pacman to ignore any upgrades for all packages in this group when performing a --sysupgrade. Shell-style glob patterns are allowed.
IgnoreGroup = group ...

# Include another configuration file.
# This file can include repositories or general configuration options.
# Wildcards in the specified paths will get expanded based on glob rules.
Include = /path/to/config/file

# If set, pacman will only allow installation of packages with the given architectures (e.g. i686, x86_64, etc).
# The special value auto will use the system architecture, provided via "uname -m".
# If unset, no architecture checks are made.
# NOTE: Packages with the special architecture any can always be installed, as they are meant to be architecture independent.
Architecture = auto &| i686 &| x86_64 | ...

# If set, an external program will be used to download all remote files.
# All instances of %u will be replaced with the download URL.
# If present, instances of %o will be replaced with the local filename, plus a ".part" extension, which allows programs like wget to do file resumes properly.
XferCommand = /path/to/command %u [%o]

# All files listed with a NoUpgrade directive will never be touched during a package install/upgrade, and the new files will be installed with a .pacnew extension.
# These files refer to files in the package archive, so do not include the leading slash (the RootDir) when specifying them.
# Shell-style glob patterns are allowed. It is possible to invert matches by prepending a file with an exclamation mark.
# Inverted files will result in previously blacklisted files being whitelisted again. Subsequent matches will override previous ones.
# A leading literal exclamation mark or backslash needs to be escaped.
NoUpgrade = file ...

# All files listed with a NoExtract directive will never be extracted from a package into the filesystem.
# This can be useful when you don't want part of a package to be installed.
# For example, if your httpd root uses an index.php, then you would not want the index.html file to be extracted from the apache package.
# These files refer to files in the package archive, so do not include the leading slash (the RootDir) when specifying them.
# Shell-style glob patterns are allowed. It is possible to invert matches by prepending a file with an exclamation mark.
# Inverted files will result in previously blacklisted files being whitelisted again. Subsequent matches will override previous ones.
# A leading literal exclamation mark or backslash needs to be escaped.
NoExtract = file ...

# If set to KeepInstalled (the default), the -Sc operation will clean packages that are no longer installed (not present in the local database).
# If set to KeepCurrent, -Sc will clean outdated packages (not present in any sync database).
# The second behavior is useful when the package cache is shared among multiple machines, where the local databases are usually different, but the sync databases in use could be the same.
# If both values are specified, packages are only cleaned if not installed locally and not present in any known sync database.
CleanMethod = KeepInstalled &| KeepCurrent

# Set the default signature verification level. For more information, see Package and Database Signature Checking below.
SigLevel = ...

# Set the signature verification level for installing packages using the "-U" operation on a local file. Uses the value from SigLevel as the default.
LocalFileSigLevel = ...

# Set the signature verification level for installing packages using the "-U" operation on a remote file URL. Uses the value from SigLevel as the default.
RemoteFileSigLevel = ...

# Log action messages through syslog().
# This will insert log entries into /var/log/messages or equivalent.
UseSyslog

# Automatically enable colors only when pacman's output is on a tty.
Color

# Disables progress bars. This is useful for terminals which do not support escape characters.
NoProgressBar

# Performs an approximate check for adequate available disk space before installing packages.
CheckSpace

# Displays name, version and size of target packages formatted as a table for upgrade, sync and remove operations.
VerbosePkgLists

# Disable defaults for low speed limit and timeout on downloads.
# Use this if you have issues downloading files with proxy and/or security gateway.
DisableDownloadTimeout

# Specifies number of concurrent download streams.
# The value needs to be a positive integer.
# If this config option is not set then only one download stream is used (i.e. downloads happen sequentially).
|
||||
ParallelDownloads = ...
|
||||
|
||||
# Specifies the user to switch to for downloading files.
|
||||
# If this config option is not set then the downloads are done as the user running pacman.
|
||||
DownloadUser = username
|
||||
|
||||
# Disable the default sandbox applied to the process downloading files on Linux systems.
|
||||
# Useful if experiencing landlock related failures while downloading files when running a Linux kernel that does not support this feature.
|
||||
DisableSandbox
|
||||
```
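Taken together, a minimal `[options]` section enabling a few of these directives might look like this (the values are illustrative, not recommendations):

```ini
[options]
# Colorize output when on a tty
Color
# Check free disk space before installing
CheckSpace
# Show targets as a table
VerbosePkgLists
# Download up to five packages at once
ParallelDownloads = 5
```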

### Repository Sections
Each repository section defines a section name and at least one location where the packages can be found. The section name is defined by the string within square brackets (the two above are core and custom). Repository names must be unique and the name local is reserved for the database of installed packages. Locations are defined with the Server directive and follow a URL naming structure. If you want to use a local directory, you can specify the full path with a `file://` prefix, as shown above.

A common way to define DB locations utilizes the Include directive. For each repository defined in the configuration file, a single Include directive can contain a file that lists the servers for that repository.

```ini
|
||||
[core]
|
||||
# use this server first
|
||||
Server = ftp://ftp.archlinux.org/$repo/os/$arch
|
||||
# next use servers as defined in the mirrorlist below
|
||||
Include = {sysconfdir}/pacman.d/mirrorlist
|
||||
|
||||
# Include another config file.
|
||||
Include = path
|
||||
|
||||
# A full URL to a location where the packages, and signatures (if available) for this repository can be found.
|
||||
# Cache servers will be tried before any non-cache servers, will not be removed from the server pool for 404 download errors, and will not be used for database files.
|
||||
CacheServer = url
|
||||
|
||||
# A full URL to a location where the database, packages, and signatures (if available) for this repository can be found.
|
||||
Server = url
|
||||
|
||||
# Set the signature verification level for this repository. For more information, see Package and Database Signature Checking below.
|
||||
SigLevel = ...
|
||||
|
||||
# Set the usage level for this repository. This option takes a list of tokens which must be at least one of the following:
|
||||
# Sync : Enables refreshes for this repository.
|
||||
# Search : Enables searching for this repository.
|
||||
# Install : Enables installation of packages from this repository during a --sync operation.
|
||||
# Upgrade : Allows this repository to be a valid source of packages when performing a --sysupgrade.
|
||||
# All : Enables all of the above features for the repository. This is the default if not specified.
|
||||
# Note that an enabled repository can be operated on explicitly, regardless of the Usage level set.
|
||||
Usage = ...
|
||||
```
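As a sketch, the `Usage` tokens allow a repository to be refreshed and searchable while its packages are ignored by regular sync installs and `--sysupgrade` unless targeted explicitly (the repository name and URL are made up):

```ini
[staging-mirror]
# Refresh and search only; no automatic installs or upgrades from here
Usage = Sync Search
Server = https://example.com/$repo/os/$arch
```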

### Signature Checking
The `SigLevel` directive is valid in both the `[options]` and repository sections. If used in `[options]`, it sets a default value for any repository that does not provide the setting.
- If set to `Never`, no signature checking will take place.
- If set to `Optional`, signatures will be checked when present, but unsigned databases and packages will also be accepted.
- If set to `Required`, signatures will be required on all packages and databases.
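A sketch combining a global default with a per-repository override (the repository name and path are made up):

```ini
[options]
# Default: require signed packages, accept unsigned databases
SigLevel = Required DatabaseOptional

# Local repository with unsigned, locally trusted packages
[localrepo]
SigLevel = Optional TrustAll
Server = file:///srv/pacman/localrepo
```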

### Hooks
libalpm provides the ability to specify hooks to run before or after transactions based on the packages and/or files being modified. Hooks consist of a single `[Action]` section describing the action to be run and one or more `[Trigger]` sections describing which transactions it should be run for.

Hooks are read from files located in the system hook directory `/usr/share/libalpm/hooks`, and additional custom directories specified in pacman.conf (the default is `/etc/pacman.d/hooks`). The file names are required to have the suffix `.hook`. Hooks are run in alphabetical order of their file name, where the ordering ignores the suffix.

Hooks may be overridden by placing a file with the same name in a higher priority hook directory. Hooks may be disabled by overriding them with a symlink to `/dev/null`.

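A sketch of the disable-by-symlink mechanism (the hook file name is a made-up example; a scratch directory stands in for `/etc/pacman.d/hooks` so this is safe to run):

```shell
# Shadow a (hypothetical) hook from /usr/share/libalpm/hooks by placing
# a /dev/null symlink with the same file name in the custom hook directory.
hookdir=$(mktemp -d)   # stand-in for /etc/pacman.d/hooks
ln -s /dev/null "$hookdir/90-example.hook"
readlink "$hookdir/90-example.hook"   # -> /dev/null
```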
Hooks must contain at least one `[Trigger]` section that determines which transactions will cause the hook to run. If multiple trigger sections are defined the hook will run if the transaction matches any of the triggers.

```ini
# Example: Force disks to sync to reduce the risk of data corruption

[Trigger]
# Select the type of operation to match targets against.
# May be specified multiple times.
# Installations are considered an upgrade if the package or file is already present on the system regardless of whether the new package version is actually greater than the currently installed version. For Path triggers, this is true even if the file changes ownership from one package to another.
# Operation = Install | Upgrade | Remove
Operation = Install
Operation = Upgrade
Operation = Remove

# Select whether targets are matched against transaction packages or files.
# Type = Path|Package
Type = Package

# The path or package name to match against the active transaction.
# Paths refer to the files in the package archive; the installation root should not be included in the path.
# Shell-style glob patterns are allowed. It is possible to invert matches by prepending a target with an exclamation mark. May be specified multiple times.
# Target = <path|package>
Target = *

[Action]
# An optional description that describes the action being taken by the hook for use in front-end output.
# Description = ...

# Packages that must be installed for the hook to run. May be specified multiple times.
# Depends = <package>
Depends = coreutils

# When to run the hook. Required.
# When = PreTransaction | PostTransaction
When = PostTransaction

# Command to run.
# Command arguments are split on whitespace. Values containing whitespace should be enclosed in quotes.
# Exec = <command>
Exec = /usr/bin/sync

# Causes the transaction to be aborted if the hook exits non-zero. Only applies to PreTransaction hooks.
# AbortOnFail

# Causes the list of matched trigger targets to be passed to the running hook on stdin.
# NeedsTargets
```
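As a further sketch, a `PreTransaction` hook using `AbortOnFail` might look like this (the hook name, target, and check are illustrative, not a shipped hook):

```ini
# /etc/pacman.d/hooks/00-bootcheck.hook (hypothetical)
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = linux

[Action]
Description = Checking that /boot is mounted...
When = PreTransaction
# findmnt exits non-zero if /boot is not mounted, aborting the transaction
Exec = /usr/bin/findmnt /boot
AbortOnFail
```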

## Repositories
You can create your own package repository.

A repository essentially consists of:
- the packages (`.pkg.tar.zst`) and their signatures (`.pkg.tar.zst.sig`)
- a package index (`.db.tar.gz`)

### Adding a repo
To use a repo, add it to your `pacman.conf`:

```ini
# Local Repository
[myrepo]
SigLevel = Optional TrustAll
Server = file:///path/to/myrepo

# Remote Repository
[myrepo]
SigLevel = Optional
Server = http://yourserver.com/myrepo
```

### Package Database
To manage the package data (index) use the `repo-add` and `repo-remove` commands.

`repo-add` will update a package database by reading a built package file. Multiple packages to add can be specified on the command line.
If a matching `.sig` file is found alongside a package file, the signature will automatically be embedded into the database.

`repo-remove` will update a package database by removing the package name specified on the command line. Multiple packages to remove can be specified on the command line.

```sh
repo-add [options] <path-to-db> <package> [<package> ...]
repo-remove [options] <path-to-db> <packagename> [<packagename> ...]
```

| Option | Description |
| ------ | ----------- |
| `-q, --quiet` | Force this program to keep quiet and run silently except for warning and error messages. |
| `-s, --sign` | Generate a PGP signature file using GnuPG. This will execute `gpg --detach-sign` on the generated database to generate a detached signature file, using the GPG agent if it is available. |
| `-k, --key <key>` | Specify a key to use when signing packages. Can also be specified using the `GPGKEY` environment variable. If not specified in either location, the default key from the keyring will be used. |
| `-v, --verify` | Verify the PGP signature of the database before updating the database. If the signature is invalid, an error is produced and the update does not proceed. |
| `--nocolor` | Remove color from repo-add and repo-remove output. |
| **`repo-add` ONLY OPTIONS:** | - |
| `-n, --new` | Only add packages that are not already in the database. Warnings will be printed upon detection of existing packages, but they will not be re-added. |
| `-R, --remove` | Remove old package files from the disk when updating their entry in the database. |
| `--include-sigs` | Include package PGP signatures in the repository database (if available). |
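Putting both tools together, a small end-to-end workflow might look like this (the paths and package names are illustrative, and the commands assume the pacman tooling is installed):

```shell
# Build or copy packages into the repo directory first, then
# create/refresh the index; -R prunes replaced package files from disk.
repo-add -R ./myrepo/myrepo.db.tar.gz ./myrepo/*.pkg.tar.zst

# Drop a package's entry from the index (the package file itself stays on disk):
repo-remove ./myrepo/myrepo.db.tar.gz oldpackage
```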

## Package Signing
To determine if packages are authentic, pacman uses OpenPGP keys in a web of trust model. Each user also has a unique OpenPGP key, which is generated when you configure `pacman-key`.

Examples of webs of trust:
- Custom packages: Packages made and signed with a local key.
- Unofficial packages: Packages made and signed by a developer. Then, a local key was used to sign the developer's key.
- Official packages: Packages made and signed by a developer. The developer's key was signed by the Arch Linux master keys. You used your key to sign the master keys, and you trust them to vouch for developers.

### Setup
The `SigLevel` option in `/etc/pacman.conf` determines the level of trust required to install a package with `pacman -S`. Signature checking can be set globally or per repository. If `SigLevel` is set globally in the `[options]` section, all packages installed with `pacman -S` will require signing. With the `LocalFileSigLevel` setting from the default `pacman.conf`, any packages you build and install with `pacman -U` will not need to be signed by `makepkg`.

For remote packages, the default configuration will only support the installation of packages signed by trusted keys:

```ini
# /etc/pacman.conf
SigLevel = Required DatabaseOptional TrustedOnly
```

To initialize the pacman keyring run:

```sh
pacman-key --init
```

### Keyring Management
#### Verifying the master keys
The initial setup of keys is achieved using:

```sh
pacman-key --populate
```

OpenPGP keys are too large (2048 bits or more) for humans to work with, so they are usually hashed to create a 40-hex-digit fingerprint which can be used to check by hand that two keys are the same. The last eight digits of the fingerprint serve as a name for the key known as the '(short) key ID' (the last sixteen digits of the fingerprint would be the 'long key ID').
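The slicing of a fingerprint into key IDs can be illustrated with plain shell (the fingerprint value below is made up):

```shell
# A 40-hex-digit OpenPGP fingerprint (hypothetical example value)
fpr="0123456789ABCDEF0123456789ABCDEF01234567"

long_id=$(printf '%s' "$fpr" | tail -c 16)   # last sixteen digits -> long key ID
short_id=$(printf '%s' "$fpr" | tail -c 8)   # last eight digits  -> short key ID

echo "$long_id"    # 89ABCDEF01234567
echo "$short_id"   # 01234567
```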

#### Adding developer keys
The official Developers' and Package Maintainers' keys are signed by the master keys, so you do not need to use `pacman-key` to sign them yourself. Whenever pacman encounters a key it does not recognize, it will prompt you to download it from a keyserver configured in `/etc/pacman.d/gnupg/gpg.conf` (or by using the `--keyserver` option on the command line).

Once you have downloaded a developer key, you will not have to download it again, and it can be used to verify any other packages signed by that developer.

> **Note**: The `archlinux-keyring` package, which is a dependency of base, contains the latest keys. However, keys can also be updated manually using `pacman-key --refresh-keys` (as root). While doing `--refresh-keys`, your local key will also be looked up on the remote keyserver, and you will receive a message about it not being found. This is nothing to be concerned about.

#### Adding unofficial keys
This method can be utilized to add a key to the pacman keyring, or to enable signed unofficial user repositories.

First, get the key ID (keyid) from its owner. Then add it to the keyring using one of the two methods:

If the key is found on a keyserver, import it with:

```sh
pacman-key --recv-keys keyid
```

If otherwise a link to a keyfile is provided, download it and then run:

```sh
pacman-key --add /path/to/downloaded/keyfile
```

It is recommended to verify the fingerprint, as with any master key or any other key you are going to sign:

```sh
pacman-key --finger keyid
```

Finally, you must locally sign the imported key:

```sh
pacman-key --lsign-key keyid
```

You now trust this key to sign packages.

To clean the package cache entirely (removes all cached packages and unused sync databases):

```sh
pacman -Scc
```
---
arch-wiki: https://wiki.archlinux.org/title/Makepkg
obj: application
rev: 2024-12-19
---

# makepkg
makepkg is a tool for creating [pacman](Pacman.md) packages based on [PKGBUILD](PKGBUILD.md) files.

## Configuration
The system configuration is available in `/etc/makepkg.conf`, but user-specific changes can be made in `$XDG_CONFIG_HOME/pacman/makepkg.conf` or `~/.makepkg.conf`. Also, system-wide changes can be made with a drop-in file `/etc/makepkg.conf.d/makepkg.conf`. It is recommended to review the configuration prior to building packages.

> **Tip**: devtools helper scripts for building packages in a clean chroot use the `/usr/share/devtools/makepkg.conf.d/arch.conf` configuration file instead.

```sh
#!/hint/bash
# shellcheck disable=2034

#
# /etc/makepkg.conf
#

#########################################################################
# SOURCE ACQUISITION
#########################################################################
#
#-- The download utilities that makepkg should use to acquire sources
#  Format: 'protocol::agent'
DLAGENTS=('file::/usr/bin/curl -qgC - -o %o %u'
          'ftp::/usr/bin/curl -qgfC - --ftp-pasv --retry 3 --retry-delay 3 -o %o %u'
          'http::/usr/bin/curl -qgb "" -fLC - --retry 3 --retry-delay 3 -o %o %u'
          'https::/usr/bin/curl -qgb "" -fLC - --retry 3 --retry-delay 3 -o %o %u'
          'rsync::/usr/bin/rsync --no-motd -z %u %o'
          'scp::/usr/bin/scp -C %u %o')

# Other common tools:
# /usr/bin/snarf
# /usr/bin/lftpget -c
# /usr/bin/wget

#-- The package required by makepkg to download VCS sources
#  Format: 'protocol::package'
VCSCLIENTS=('bzr::breezy'
            'fossil::fossil'
            'git::git'
            'hg::mercurial'
            'svn::subversion')

#########################################################################
# ARCHITECTURE, COMPILE FLAGS
#########################################################################
#
CARCH="x86_64"
CHOST="x86_64-pc-linux-gnu"

#-- Compiler and Linker Flags
#CPPFLAGS=""
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \
        -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security \
        -fstack-clash-protection -fcf-protection \
        -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer"
CXXFLAGS="$CFLAGS -Wp,-D_GLIBCXX_ASSERTIONS"
LDFLAGS="-Wl,-O1 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now \
         -Wl,-z,pack-relative-relocs"
LTOFLAGS="-flto=auto"
#-- Make Flags: change this for DistCC/SMP systems
MAKEFLAGS="-j8"
#-- Debugging flags
DEBUG_CFLAGS="-g"
DEBUG_CXXFLAGS="$DEBUG_CFLAGS"

#########################################################################
# BUILD ENVIRONMENT
#########################################################################
#
# Makepkg defaults: BUILDENV=(!distcc !color !ccache check !sign)
#  A negated environment option will do the opposite of the comments below.
#
#-- distcc:   Use the Distributed C/C++/ObjC compiler
#-- color:    Colorize output messages
#-- ccache:   Use ccache to cache compilation
#-- check:    Run the check() function if present in the PKGBUILD
#-- sign:     Generate PGP signature file
#
BUILDENV=(!distcc color !ccache check !sign)

#
#-- If using DistCC, your MAKEFLAGS will also need modification. In addition,
#-- specify a space-delimited list of hosts running in the DistCC cluster.
#DISTCC_HOSTS=""

#-- Specify a directory for package building.
BUILDDIR=/tmp/makepkg

#########################################################################
# GLOBAL PACKAGE OPTIONS
#   These are default values for the options=() settings
#########################################################################
#
# Makepkg defaults: OPTIONS=(!strip docs libtool staticlibs emptydirs !zipman !purge !debug !lto !autodeps)
#  A negated option will do the opposite of the comments below.
#
#-- strip:      Strip symbols from binaries/libraries
#-- docs:       Save doc directories specified by DOC_DIRS
#-- libtool:    Leave libtool (.la) files in packages
#-- staticlibs: Leave static library (.a) files in packages
#-- emptydirs:  Leave empty directories in packages
#-- zipman:     Compress manual (man and info) pages in MAN_DIRS with gzip
#-- purge:      Remove files specified by PURGE_TARGETS
#-- debug:      Add debugging flags as specified in DEBUG_* variables
#-- lto:        Add compile flags for building with link time optimization
#-- autodeps:   Automatically add depends/provides
#
OPTIONS=(strip docs !libtool !staticlibs emptydirs zipman purge !debug lto)

#-- File integrity checks to use. Valid: md5, sha1, sha224, sha256, sha384, sha512, b2
INTEGRITY_CHECK=(sha256)
#-- Options to be used when stripping binaries. See `man strip' for details.
STRIP_BINARIES="--strip-all"
#-- Options to be used when stripping shared libraries. See `man strip' for details.
STRIP_SHARED="--strip-unneeded"
#-- Options to be used when stripping static libraries. See `man strip' for details.
STRIP_STATIC="--strip-debug"
#-- Manual (man and info) directories to compress (if zipman is specified)
MAN_DIRS=({usr{,/local}{,/share},opt/*}/{man,info})
#-- Doc directories to remove (if !docs is specified)
DOC_DIRS=(usr/{,local/}{,share/}{doc,gtk-doc} opt/*/{doc,gtk-doc})
#-- Files to be removed from all packages (if purge is specified)
PURGE_TARGETS=(usr/{,share}/info/dir .packlist *.pod)
#-- Directory to store source code in for debug packages
DBGSRCDIR="/usr/src/debug"
#-- Prefix and directories for library autodeps
LIB_DIRS=('lib:usr/lib' 'lib32:usr/lib32')

#########################################################################
# PACKAGE OUTPUT
#########################################################################
#
# Default: put built package and cached source in build directory
#
#-- Destination: specify a fixed directory where all packages will be placed
PKGDEST=/home/packages

#-- Source cache: specify a fixed directory where source files will be cached
SRCDEST=/home/sources

#-- Source packages: specify a fixed directory where all src packages will be placed
SRCPKGDEST=/home/srcpackages

#-- Log files: specify a fixed directory where all log files will be placed
#LOGDEST=/home/makepkglogs

#-- Packager: name/email of the person or organization building packages
PACKAGER="John Doe <john@doe.com>"
#-- Specify a key to use for package signing
GPGKEY=""

#########################################################################
# COMPRESSION DEFAULTS
#########################################################################
#
COMPRESSGZ=(gzip -c -f -n)
COMPRESSBZ2=(bzip2 -c -f)
COMPRESSXZ=(xz -c -z -)
COMPRESSZST=(zstd -c -T0 -)
COMPRESSLRZ=(lrzip -q)
COMPRESSLZO=(lzop -q)
COMPRESSZ=(compress -c -f)
COMPRESSLZ4=(lz4 -q)
COMPRESSLZ=(lzip -c -f)

#########################################################################
# EXTENSION DEFAULTS
#########################################################################
#
PKGEXT='.pkg.tar.zst'
SRCEXT='.src.tar.gz'

#########################################################################
# OTHER
#########################################################################
#
#-- Command used to run pacman as root, instead of trying sudo and su
#PACMAN_AUTH=()
# vim: set ft=sh ts=2 sw=2 et:
```

## Usage
Make a package:

```shell
makepkg

# Download and verify sources only, without building
makepkg --verifysource
```

## Options
| Option | Description |
| ------ | ----------- |
| `-A, --ignorearch` | Ignore a missing or incomplete arch field in the build script |
| `-c, --clean` | Clean up leftover work files and directories after a successful build |
| `-d, --nodeps` | Do not perform any dependency checks. This will let you override and ignore any dependencies required. There is a good chance this option will break the build process if all of the dependencies are not installed |
| `-e, --noextract` | Do not extract source files or run the prepare() function (if present); use whatever source already exists in the $srcdir/ directory. This is handy if you want to go into $srcdir/ and manually patch or tweak code, then make a package out of the result. Keep in mind that creating a patch may be a better solution to allow others to use your [PKGBUILD](PKGBUILD.md). |
| `--skipinteg` | Do not perform any integrity checks (checksum and [PGP](../../../cryptography/GPG.md)) on source files |
| `--skipchecksums` | Do not verify checksums of source files |
| `--skippgpcheck` | Do not verify [PGP](../../../cryptography/GPG.md) signatures of source files |
| `-i, --install` | Install or upgrade the package after a successful build using [pacman](Pacman.md) |
| `-o, --nobuild` | Download and extract files, run the prepare() function, but do not build them. Useful with the `--noextract` option if you wish to tweak the files in $srcdir/ before building |
| `-r, --rmdeps` | Upon successful build, remove any dependencies installed by makepkg during dependency auto-resolution and installation |
| `-s, --syncdeps` | Install missing dependencies using [pacman](Pacman.md). When build-time or run-time dependencies are not found, [pacman](Pacman.md) will try to resolve them. If successful, the missing packages will be downloaded and installed |
| `-C, --cleanbuild` | Remove the $srcdir before building the package |
| `-f, --force` | Overwrite package if it already exists |
| `--noarchive` | Do not create the archive at the end of the build process. This can be useful to test the package() function or if your target distribution does not use [pacman](Pacman.md) |
| `--sign` | Sign the resulting package with [gpg](../../../cryptography/GPG.md) |
| `--nosign` | Do not create a signature for the built package |
| `--key <key>` | Specify a key to use when signing packages |
| `--noconfirm` | (Passed to [pacman](Pacman.md)) Prevent [pacman](Pacman.md) from waiting for user input before proceeding with operations |

## Misc
### Using mold linker
[mold](../../development/mold.md) is a drop-in replacement for ld/lld linkers, which claims to be significantly faster.

To use mold, append `-fuse-ld=mold` to `LDFLAGS`. For example:

```sh
# /etc/makepkg.conf

LDFLAGS="... -fuse-ld=mold"
```

To pass extra options to mold, additionally add those to `LDFLAGS`. For example:

```sh
# /etc/makepkg.conf

LDFLAGS="... -fuse-ld=mold -Wl,--separate-debug-file"
```

To use mold for Rust packages, append `-C link-arg=-fuse-ld=mold` to `RUSTFLAGS`. For example:

```sh
# /etc/makepkg.conf.d/rust.conf

RUSTFLAGS="... -C link-arg=-fuse-ld=mold"
```

### Parallel compilation
The make build system uses the `MAKEFLAGS` environment variable to specify additional options for make. The variable can also be set in the `makepkg.conf` file.

Users with multi-core/multi-processor systems can specify the number of jobs to run simultaneously. This can be accomplished with the use of `nproc` to determine the number of available processors, e.g.

```sh
MAKEFLAGS="--jobs=$(nproc)"
```

Some `PKGBUILDs` specifically override this with `-j1`, because of race conditions in certain versions or simply because parallel building is not supported in the first place.

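The value can also be supplied per invocation without editing any configuration file; a sketch (only the variable expansion is exercised here, `makepkg` itself is not run):

```shell
# One-off parallel build setting; makepkg would pick this up from the
# environment for the duration of a single build.
export MAKEFLAGS="--jobs=$(nproc)"
echo "$MAKEFLAGS"
```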
### Building from files in memory
As compiling requires many I/O operations and handling of small files, moving the working directory to a [tmpfs](../../../linux/filesystems/tmpFS.md) may bring improvements in build times.

The `BUILDDIR` variable can be temporarily exported to makepkg to set the build directory to an existing tmpfs. For example:

```sh
BUILDDIR=/tmp/makepkg makepkg
```

Persistent configuration can be done in `makepkg.conf` by uncommenting the `BUILDDIR` option, which is found at the end of the BUILD ENVIRONMENT section in the default `/etc/makepkg.conf` file. Setting its value to e.g. `BUILDDIR=/tmp/makepkg` will make use of Arch's default `/tmp` temporary file system.

> **Note:**
> - Avoid compiling larger packages in tmpfs to prevent running out of memory.
> - The tmpfs directory must be mounted without the `noexec` option, otherwise it will prevent built binaries from being executed.
> - Keep in mind that packages compiled in tmpfs will not persist across reboot. Consider setting the `PKGDEST` option appropriately to move the built package automatically to a persistent directory.

### Generate new checksums
Install `pacman-contrib` and run the following command in the same directory as the [PKGBUILD](./PKGBUILD.md) file to generate new checksums:

```sh
updpkgsums
```

`updpkgsums` uses `makepkg --geninteg` to generate the checksums.

The checksums can also be obtained with e.g. `sha256sum` and added to the `sha256sums` array by hand.

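A sketch of the by-hand route (a generated file stands in for a real source tarball):

```shell
# Compute a SHA-256 digest by hand; the resulting 64-character hex
# string is what goes into the sha256sums=() array of a PKGBUILD.
tmp=$(mktemp)                         # stand-in for a downloaded source file
printf 'example source contents\n' > "$tmp"
sha256sum "$tmp" | awk '{print $1}'
```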
### Build from local source files
If you want to make changes to the source code, you can download the source code without building the package by using the `-o, --nobuild` option, which only downloads and extracts the files.

```sh
makepkg -o
```

You can now make changes to the sources and then build the package by using the `-e, --noextract` option, which reuses the existing `$srcdir/` instead of extracting the sources again. Use the `-f` option to overwrite already built and existing packages.

```sh
makepkg -ef
```
| `--skipinteg` | Do not perform any integrity checks (checksum and [PGP](../../../cryptography/GPG.md)) on source files |
|
||||
| `--skipchecksums` | Do not verify checksums of source files |
|
||||
| `--skippgpcheck` | Do not verify [PGP](../../../cryptography/GPG.md) signatures of source files |
|
||||
| `-i, --install` | Install or upgrade the package after a successful build using [pacman](Pacman.md) |
|
||||
| `-o, --nobuild` | Download and extract files, run the prepare() function, but do not build them. Useful with the `--noextract` option if you wish to tweak the files in $srcdir/ before building |
|
||||
| `-r, --rmdeps` | Upon successful build, remove any dependencies installed by makepkg during dependency auto-resolution and installation |
|
||||
| `-s, --syncdeps` | Install missing dependencies using [pacman](Pacman.md). When build-time or run-time dependencies are not found, [pacman](Pacman.md) will try to resolve them. If successful, the missing packages will be downloaded and installed |
|
||||
| `-C, --cleanbuild` | Remove the $srcdir before building the package |
|
||||
| `--noarchive` | Do not create the archive at the end of the build process. This can be useful to test the package() function or if your target distribution does not use [pacman](Pacman.md) |
|
||||
| `--sign` | Sign the resulting package with [gpg](../../../cryptography/GPG.md) |
|
||||
| `--nosign` | Do not create a signature for the built package |
|
||||
| `--key <key>` | Specify a key to use when signing packages |
|
||||
| `--noconfirm` | (Passed to [pacman](Pacman.md)) Prevent [pacman](Pacman.md) from waiting for user input before proceeding with operations |
|
||||
---
obj: application
repo: https://github.com/containers/bubblewrap
arch-wiki: https://wiki.archlinux.org/title/Bubblewrap
rev: 2025-01-09
---
# Bubblewrap
Bubblewrap is a lightweight sandbox application used by Flatpak and other container tools. It has a small installation footprint and minimal resource requirements. Notable features include support for cgroup/IPC/mount/network/PID/user/UTS namespaces and seccomp filtering. Note that bubblewrap drops all capabilities within a sandbox and that child tasks cannot gain greater privileges than their parent.

## Configuration
Bubblewrap can be called directly from the command-line and/or within shell scripts as part of a complex wrapper.

A no-op bubblewrap invocation is as follows:

```sh
bwrap --dev-bind / / bash
```

This will spawn a Bash process which should behave exactly as outside a sandbox in most cases. If a sandboxed program misbehaves, you may want to start from the above no-op invocation and work your way towards a more secure configuration step-by-step.
### Desktop entries
Leverage Bubblewrap within desktop entries:
- Bind as read-write the entire host `/` directory to `/` in the sandbox
- Re-bind as read-only the `/var` and `/etc` directories in the sandbox
- Mount a new devtmpfs filesystem to `/dev` in the sandbox
- Create a tmpfs filesystem over the sandboxed `/run` directory
- Disable network access by creating a new network namespace

```ini
[Desktop Entry]
Name=nano Editor
Exec=bwrap --bind / / --ro-bind /var /var --ro-bind /etc /etc --dev /dev --tmpfs /run --unshare-net st -e nano -o . %f
Type=Application
MimeType=text/plain;
```

> **Note**: `--dev /dev` is required to write to `/dev/pty`
## Options
Usage: `bwrap [options] [command]`

| Option | Description |
| ------ | ----------- |
| `--args FD` | Parse nul-separated arguments from the given file descriptor. This option can be used multiple times to parse options from multiple sources. |
| `--argv0 VALUE` | Set `argv[0]` to the value VALUE before running the program |
| `--unshare-user` | Create a new user namespace |
| `--unshare-user-try` | Create a new user namespace if possible, else skip it |
| `--unshare-ipc` | Create a new ipc namespace |
| `--unshare-pid` | Create a new pid namespace |
| `--unshare-net` | Create a new network namespace |
| `--unshare-uts` | Create a new uts namespace |
| `--unshare-cgroup` | Create a new cgroup namespace |
| `--unshare-cgroup-try` | Create a new cgroup namespace if possible, else skip it |
| `--unshare-all` | Unshare all possible namespaces. Currently equivalent to: `--unshare-user-try --unshare-ipc --unshare-pid --unshare-net --unshare-uts --unshare-cgroup-try` |
| `--share-net` | Retain the network namespace, overriding an earlier `--unshare-all` or `--unshare-net` |
| `--userns FD` | Use an existing user namespace instead of creating a new one. The namespace must fulfil the permission requirements for `setns()`, which generally means that it must be a descendant of the currently active user namespace, owned by the same user. |
| `--disable-userns` | Prevent the process in the sandbox from creating further user namespaces, so that it cannot rearrange the filesystem namespace or do other more complex namespace modification. |
| `--assert-userns-disabled` | Confirm that the process in the sandbox has been prevented from creating further user namespaces, but without taking any particular action to prevent that. For example, this can be combined with `--userns` to check that the given user namespace has already been set up to prevent the creation of further user namespaces. |
| `--pidns FD` | Use an existing pid namespace instead of creating one. This is often used with `--userns`, because the pid namespace must be owned by the same user namespace that bwrap uses. |
| `--uid UID` | Use a custom user id in the sandbox (requires `--unshare-user`) |
| `--gid GID` | Use a custom group id in the sandbox (requires `--unshare-user`) |
| `--hostname HOSTNAME` | Use a custom hostname in the sandbox (requires `--unshare-uts`) |
| `--chdir DIR` | Change directory to DIR |
| `--setenv VAR VALUE` | Set an environment variable |
| `--unsetenv VAR` | Unset an environment variable |
| `--clearenv` | Unset all environment variables, except for PWD and any that are subsequently set by `--setenv` |
| `--lock-file DEST` | Take a lock on DEST while the sandbox is running. This option can be used multiple times to take locks on multiple files. |
| `--sync-fd FD` | Keep this file descriptor open while the sandbox is running |
| `--perms OCTAL` | This option does nothing on its own, and must be followed by one of the options that it affects. It sets the permissions for the next operation to OCTAL. Subsequent operations are not affected: for example, `--perms 0700 --tmpfs /a --tmpfs /b` will mount `/a` with permissions `0700`, then return to the default permissions for `/b`. Note that `--perms` and `--size` can be combined: `--perms 0700 --size 10485760 --tmpfs /s` will apply permissions as well as a maximum size to the created tmpfs. |
| `--size BYTES` | This option does nothing on its own, and must be followed by `--tmpfs`. It sets the size in bytes for the next tmpfs. For example, `--size 10485760 --tmpfs /tmp` will create a tmpfs at `/tmp` of size 10MiB. Subsequent operations are not affected. |
| `--bind SRC DEST` | Bind mount the host path SRC on DEST |
| `--bind-try SRC DEST` | Equal to `--bind` but ignores non-existent SRC |
| `--dev-bind SRC DEST` | Bind mount the host path SRC on DEST, allowing device access |
| `--dev-bind-try SRC DEST` | Equal to `--dev-bind` but ignores non-existent SRC |
| `--ro-bind SRC DEST` | Bind mount the host path SRC readonly on DEST |
| `--ro-bind-try SRC DEST` | Equal to `--ro-bind` but ignores non-existent SRC |
| `--remount-ro DEST` | Remount the path DEST as readonly. It works only on the specified mount point, without changing any other mount point under the specified path |
| `--overlay-src SRC` | This option does nothing on its own, and must be followed by one of the other overlay options. It specifies a host path from which files should be read if they aren't present in a higher layer. |
| `--overlay RWSRC WORKDIR DEST`, `--tmp-overlay DEST`, `--ro-overlay DEST` | Use overlayfs to mount the host paths specified by `RWSRC` and all immediately preceding `--overlay-src` on `DEST`. `DEST` will contain the union of all the files in all the layers. With `--overlay` all writes will go to `RWSRC`. Reads will come preferentially from `RWSRC`, and then from any `--overlay-src` paths. `WORKDIR` must be an empty directory on the same filesystem as `RWSRC`, and is used internally by the kernel. With `--tmp-overlay` all writes will go to the tmpfs that hosts the sandbox root, in a location not accessible from either the host or the child process. Writes will therefore not be persisted across multiple runs. With `--ro-overlay` the filesystem will be mounted read-only. This option requires at least two `--overlay-src` to precede it. |
| `--proc DEST` | Mount procfs on DEST |
| `--dev DEST` | Mount new devtmpfs on DEST |
| `--tmpfs DEST` | Mount new tmpfs on DEST. If the previous option was `--perms`, it sets the mode of the tmpfs. Otherwise, the tmpfs has mode `0755`. If the previous option was `--size`, it sets the size in bytes of the tmpfs. Otherwise, the tmpfs has the default size. |
| `--mqueue DEST` | Mount new mqueue on DEST |
| `--dir DEST` | Create a directory at DEST. If the directory already exists, its permissions are unmodified, ignoring `--perms` (use `--chmod` if the permissions of an existing directory need to be changed). If the directory is newly created and the previous option was `--perms`, it sets the mode of the directory. Otherwise, newly-created directories have mode `0755`. |
| `--file FD DEST` | Copy from the file descriptor FD to DEST. If the previous option was `--perms`, it sets the mode of the new file. Otherwise, the file has mode `0666` (note that this is not the same as `--bind-data`). |
| `--bind-data FD DEST` | Copy from the file descriptor FD to a file which is bind-mounted on DEST. If the previous option was `--perms`, it sets the mode of the new file. Otherwise, the file has mode `0600` (note that this is not the same as `--file`). |
| `--ro-bind-data FD DEST` | Copy from the file descriptor FD to a file which is bind-mounted read-only on DEST. If the previous option was `--perms`, it sets the mode of the new file. Otherwise, the file has mode `0600` (note that this is not the same as `--file`). |
| `--symlink SRC DEST` | Create a symlink at DEST with target SRC. |
| `--chmod OCTAL PATH` | Set the permissions of PATH, which must already exist, to OCTAL. |
| `--seccomp FD` | Load and use seccomp rules from FD. The rules need to be in the form of a compiled cBPF program, as generated by seccomp_export_bpf. If this option is given more than once, only the last one is used. Use `--add-seccomp-fd` if multiple seccomp programs are needed. |
| `--add-seccomp-fd FD` | Load and use seccomp rules from FD. The rules need to be in the form of a compiled cBPF program, as generated by seccomp_export_bpf. This option can be repeated, in which case all the seccomp programs will be loaded in the order given (note that the kernel will evaluate them in reverse order, so the last program on the bwrap command-line is evaluated first). All of them, except possibly the last, must allow use of the PR_SET_SECCOMP prctl. This option cannot be combined with `--seccomp`. |
| `--exec-label LABEL` | Exec label for the sandbox. On an SELinux system you can specify the SELinux context for the sandbox process(es). |
| `--file-label LABEL` | File label for temporary sandbox content. On an SELinux system you can specify the SELinux context for the sandbox content. |
| `--block-fd FD` | Block the sandbox on reading from FD until some data is available. |
| `--userns-block-fd FD` | Do not initialize the user namespace but wait on FD until it is ready. This allows external processes (like newuidmap/newgidmap) to set up the user namespace before it is used by the sandbox process. |
| `--info-fd FD` | Write information in JSON format about the sandbox to FD. |
| `--json-status-fd FD` | Multiple JSON documents are written to FD, one per line. |
| `--new-session` | Create a new terminal session for the sandbox (calls `setsid()`). This disconnects the sandbox from the controlling terminal, which means the sandbox can't, for instance, inject input into the terminal. Note: In a general sandbox, if you don't use `--new-session`, it is recommended to use seccomp to disallow the `TIOCSTI` ioctl, otherwise the application can feed keyboard input to the terminal, which can e.g. lead to out-of-sandbox command execution. |
| `--die-with-parent` | Ensures the child process (COMMAND) dies when bwrap's parent dies. Kills (SIGKILL) all bwrap sandbox processes in sequence from parent to child, including the COMMAND process, when bwrap or bwrap's parent dies. |
| `--as-pid-1` | Do not create a process with PID=1 in the sandbox to reap child processes. |
| `--cap-add CAP` | Add the specified capability CAP, e.g. `CAP_DAC_READ_SEARCH`, when running as privileged user. It accepts the special value `ALL` to add all the permitted caps. |
| `--cap-drop CAP` | Drop the specified capability when running as privileged user. It accepts the special value `ALL` to drop all the caps. By default no caps are left in the sandboxed process. The `--cap-add` and `--cap-drop` options are processed in the order they are specified on the command line, so be careful about their ordering. |
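Combining the options above, a somewhat more locked-down invocation than the no-op example might look like this (a sketch, not a complete hardening profile):

```sh
# Read-only root, fresh /dev, /proc and /tmp, no network, own PID namespace,
# and the sandbox dies with its parent
bwrap --ro-bind / / \
      --dev /dev --proc /proc --tmpfs /tmp \
      --unshare-pid --unshare-net \
      --die-with-parent \
      bash
```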
42 technology/applications/utilities/cAdvisor.md Normal file
---
obj: application
repo: https://github.com/google/cadvisor
rev: 2024-12-12
---

# cAdvisor
cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.
## Prometheus
Add this to the [Prometheus](../web/Prometheus.md) config file:

```yml
scrape_configs:
  - job_name: cadvisor
    scrape_interval: 5s
    static_configs:
      - targets:
          - cadvisor:8080
```
## Docker-Compose

```yml
services:
  cadvisor:
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    ports:
      - target: 8080
        published: 8080
        protocol: tcp
        mode: host
    privileged: true
    image: gcr.io/cadvisor/cadvisor
    deploy:
      mode: global
```
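Once the container is up, the exported metrics can be spot-checked from the host (port as published above):

```sh
# List a few of the per-container metrics cAdvisor exposes for Prometheus
curl -s http://localhost:8080/metrics | grep '^container_' | head -n 5
```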
178 technology/applications/utilities/node-exporter.md Normal file
---
obj: application
repo: https://github.com/prometheus/node_exporter
rev: 2024-12-12
---

# Prometheus Node Exporter
Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.

A dashboard to use with Node Exporter and Grafana can be found [here](https://grafana.com/grafana/dashboards/1860-node-exporter-full/).
## Usage
The node_exporter listens on HTTP port 9100 by default.

### Docker
The `node_exporter` is designed to monitor the host system. Deploying it in containers requires extra care in order to avoid monitoring the container itself.

For situations where containerized deployment is needed, some extra flags must be used to allow the `node_exporter` access to the host namespaces.

Be aware that any non-root mount points you want to monitor will need to be bind-mounted into the container.

If you start the container for host monitoring, specify the `path.rootfs` argument. This argument must match the path in the bind-mount of the host root. The `node_exporter` will use `path.rootfs` as a prefix to access the host filesystem.

```yml
---
version: '3.8'

services:
  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    network_mode: host
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'
```

On some systems, the timex collector requires an additional Docker flag, `--cap-add=SYS_TIME`, in order to access the required syscalls.
### Prometheus
Configure Prometheus to scrape the exposed node exporter:

```yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']
```
## Configuration
Node Exporter can be configured using CLI arguments.

### Options

| **Option** | **Description** |
| ---------- | --------------- |
| `--path.procfs="/proc"` | procfs mountpoint. |
| `--path.sysfs="/sys"` | sysfs mountpoint. |
| `--path.rootfs="/"` | rootfs mountpoint. |
| `--path.udev.data="/run/udev/data"` | udev data path. |
| `--collector.runit.servicedir="/etc/service"` | Path to runit service directory. |
| `--collector.supervisord.url="http://localhost:9001/RPC2"` | XML RPC endpoint. |
| `--collector.sysctl.include=COLLECTOR.SYSCTL.INCLUDE ...` | Select sysctl metrics to include. |
| `--collector.sysctl.include-info=COLLECTOR.SYSCTL.INCLUDE-INFO ...` | Select sysctl metrics to include as info metrics. |
| `--collector.systemd.unit-include=".+"` | Regexp of systemd units to include. Units must both match include and not match exclude to be included. |
| `--collector.systemd.unit-exclude=".+\\.(automount\|device\|mount\|scope\|slice\|target)"` | Regexp of systemd units to exclude. Units must both match include and not match exclude to be included. |
| `--collector.systemd.enable-task-metrics` | Enables service unit tasks metrics `unit_tasks_current` and `unit_tasks_max`. |
| `--collector.systemd.enable-restarts-metrics` | Enables service unit metric `service_restart_total`. |
| `--collector.systemd.enable-start-time-metrics` | Enables service unit metric `unit_start_time_seconds`. |
| `--collector.tapestats.ignored-devices="^$"` | Regexp of devices to ignore for tapestats. |
| `--collector.textfile.directory="/var/lib/prometheus/node-exporter"` | Directory to read text files with metrics from. |
| `--collector.vmstat.fields="^(oom_kill\|pgpg\|pswp\|pg.*fault).*"` | Regexp of fields to return for vmstat collector. |
| `--collector.arp` | Enable the arp collector (default: enabled). |
| `--collector.bcache` | Enable the bcache collector (default: enabled). |
| `--collector.bonding` | Enable the bonding collector (default: enabled). |
| `--collector.btrfs` | Enable the btrfs collector (default: enabled). |
| `--collector.buddyinfo` | Enable the buddyinfo collector (default: disabled). |
| `--collector.cgroups` | Enable the cgroups collector (default: disabled). |
| `--collector.conntrack` | Enable the conntrack collector (default: enabled). |
| `--collector.cpu` | Enable the cpu collector (default: enabled). |
| `--collector.cpufreq` | Enable the cpufreq collector (default: enabled). |
| `--collector.diskstats` | Enable the diskstats collector (default: enabled). |
| `--collector.dmi` | Enable the dmi collector (default: enabled). |
| `--collector.drbd` | Enable the drbd collector (default: disabled). |
| `--collector.drm` | Enable the drm collector (default: disabled). |
| `--collector.edac` | Enable the edac collector (default: enabled). |
| `--collector.entropy` | Enable the entropy collector (default: enabled). |
| `--collector.ethtool` | Enable the ethtool collector (default: disabled). |
| `--collector.fibrechannel` | Enable the fibrechannel collector (default: enabled). |
| `--collector.filefd` | Enable the filefd collector (default: enabled). |
| `--collector.filesystem` | Enable the filesystem collector (default: enabled). |
| `--collector.hwmon` | Enable the hwmon collector (default: enabled). |
| `--collector.infiniband` | Enable the infiniband collector (default: enabled). |
| `--collector.interrupts` | Enable the interrupts collector (default: disabled). |
| `--collector.ipvs` | Enable the ipvs collector (default: enabled). |
| `--collector.ksmd` | Enable the ksmd collector (default: disabled). |
| `--collector.lnstat` | Enable the lnstat collector (default: disabled). |
| `--collector.loadavg` | Enable the loadavg collector (default: enabled). |
| `--collector.logind` | Enable the logind collector (default: disabled). |
| `--collector.mdadm` | Enable the mdadm collector (default: enabled). |
| `--collector.meminfo` | Enable the meminfo collector (default: enabled). |
| `--collector.meminfo_numa` | Enable the meminfo_numa collector (default: disabled). |
| `--collector.mountstats` | Enable the mountstats collector (default: disabled). |
| `--collector.netclass` | Enable the netclass collector (default: enabled). |
| `--collector.netdev` | Enable the netdev collector (default: enabled). |
| `--collector.netstat` | Enable the netstat collector (default: enabled). |
| `--collector.network_route` | Enable the network_route collector (default: disabled). |
| `--collector.nfs` | Enable the nfs collector (default: enabled). |
| `--collector.nfsd` | Enable the nfsd collector (default: enabled). |
| `--collector.ntp` | Enable the ntp collector (default: disabled). |
| `--collector.nvme` | Enable the nvme collector (default: enabled). |
| `--collector.os` | Enable the os collector (default: enabled). |
| `--collector.perf` | Enable the perf collector (default: disabled). |
| `--collector.powersupplyclass` | Enable the powersupplyclass collector (default: enabled). |
| `--collector.pressure` | Enable the pressure collector (default: enabled). |
| `--collector.processes` | Enable the processes collector (default: disabled). |
| `--collector.qdisc` | Enable the qdisc collector (default: disabled). |
| `--collector.rapl` | Enable the rapl collector (default: enabled). |
| `--collector.runit` | Enable the runit collector (default: disabled). |
| `--collector.schedstat` | Enable the schedstat collector (default: enabled). |
| `--collector.selinux` | Enable the selinux collector (default: enabled). |
| `--collector.slabinfo` | Enable the slabinfo collector (default: disabled). |
| `--collector.sockstat` | Enable the sockstat collector (default: enabled). |
| `--collector.softnet` | Enable the softnet collector (default: enabled). |
| `--collector.stat` | Enable the stat collector (default: enabled). |
| `--collector.supervisord` | Enable the supervisord collector (default: disabled). |
| `--collector.sysctl` | Enable the sysctl collector (default: disabled). |
| `--collector.systemd` | Enable the systemd collector (default: enabled). |
| `--collector.tapestats` | Enable the tapestats collector (default: enabled). |
| `--collector.tcpstat` | Enable the tcpstat collector (default: disabled). |
| `--collector.textfile` | Enable the textfile collector (default: enabled). |
| `--collector.thermal_zone` | Enable the thermal_zone collector (default: enabled). |
| `--collector.time` | Enable the time collector (default: enabled). |
| `--collector.timex` | Enable the timex collector (default: enabled). |
| `--collector.udp_queues` | Enable the udp_queues collector (default: enabled). |
| `--collector.uname` | Enable the uname collector (default: enabled). |
| `--collector.vmstat` | Enable the vmstat collector (default: enabled). |
| `--collector.wifi` | Enable the wifi collector (default: disabled). |
| `--collector.xfs` | Enable the xfs collector (default: enabled). |
| `--collector.zfs` | Enable the zfs collector (default: enabled). |
| `--collector.zoneinfo` | Enable the zoneinfo collector (default: disabled). |
| `--web.telemetry-path="/metrics"` | Path under which to expose metrics. |
| `--web.disable-exporter-metrics` | Exclude metrics about the exporter itself (`promhttp_*`, `process_*`, `go_*`). |
| `--web.max-requests=40` | Maximum number of parallel scrape requests. Use 0 to disable. |
| `--collector.disable-defaults` | Set all collectors to disabled by default. |
| `--runtime.gomaxprocs=1` | The target number of CPUs Go will run on (`GOMAXPROCS`). |
| `--web.systemd-socket` | Use systemd socket activation listeners instead of port listeners (Linux only). |
| `--web.listen-address=:9100 ...` | Addresses on which to expose metrics and web interface. Repeatable for multiple addresses. |
| `--web.config.file=""` | [EXPERIMENTAL] Path to configuration file that can enable TLS or authentication. |
| `--log.level=info` | Only log messages with the given severity or above. One of: `[debug, info, warn, error]`. |
| `--log.format=logfmt` | Output format of log messages. One of: `[logfmt, json]`. |
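As an example of the textfile collector (`--collector.textfile.directory`), a cron job or script can drop `.prom` files with custom metrics into that directory, and node_exporter picks them up on the next scrape. The metric name below is illustrative:

```sh
# Write a custom gauge atomically: write to a temp file, then rename,
# so a scrape never sees a half-written file
d=/var/lib/prometheus/node-exporter
printf 'backup_last_success_timestamp_seconds %s\n' "$(date +%s)" > "$d/backup.prom.$$" \
  && mv "$d/backup.prom.$$" "$d/backup.prom"
```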
### Web Configuration
Exporters and services instrumented with the Exporter Toolkit share the same web configuration file format. This is experimental and might change in the future.

To specify which web configuration file to load, use the `--web.config.file` flag.

Basic config file:
```yml
# TLS and basic authentication configuration example.
#
# Additionally, a certificate and a key file are needed.
tls_server_config:
  cert_file: server.crt
  key_file: server.key

# Usernames and passwords required to connect.
# Passwords are hashed with bcrypt: https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md#about-bcrypt.
basic_auth_users:
  alice: $2y$10$mDwo.lAisC94iLAyP81MCesa29IzH37oigHC/42V2pdJlUprsJPze
  bob: $2y$10$hLqFl9jSjoAAy95Z/zw8Ye8wkdMBM8c5Bn1ptYqP/AXyV0.oy0S8m
```
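The bcrypt hashes for `basic_auth_users` can be generated with `htpasswd` (from Apache's `apache2-utils`/`httpd-tools`); the username is an example:

```sh
# -n: print to stdout instead of a file, -B: use bcrypt, -C 10: cost factor 10
htpasswd -nBC 10 alice
```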
---
obj: application
repo: https://github.com/demoray/retry-cli
rev: 2025-01-28
---

# retry-cli
retry is a command line tool written in Rust intended to automatically re-run failed commands with a user-configurable delay between tries.

## Usage
Usage: `retry [OPTIONS] <COMMAND>...`

| Option | Description |
| ------ | ----------- |
| `--attempts <ATTEMPTS>` | Number of retries (default: `3`) |
| `--min-duration <MIN_DURATION>` | Minimum duration (default: `10ms`) |
| `--max-duration <MAX_DURATION>` | Maximum duration |
| `--jitter <JITTER>` | Amount of randomization to add to the backoff (default: `0.3`) |
| `--factor <FACTOR>` | Backoff factor (default: `2`) |
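A typical invocation might look like this (the duration syntax is assumed to match the `10ms`-style defaults in the table above, and the URL is a placeholder):

```sh
# Try up to 5 times, starting with a 100ms delay that doubles on each failure
retry --attempts 5 --min-duration 100ms curl -fsS https://example.com/health
```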
8 technology/applications/web/Grafana.md Normal file
---
obj: application
website: https://grafana.com
repo: https://github.com/grafana/grafana
---

# Grafana
#wip
58 technology/applications/web/Prometheus.md Normal file
---
obj: application
website: https://prometheus.io
repo: https://github.com/prometheus/prometheus
rev: 2024-12-12
---

# Prometheus
Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.
It collects and stores its metrics as time series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.
This data can then be visualized using [Grafana](./Grafana.md).
## Docker Compose

```yml
services:
  prometheus:
    image: prom/prometheus
    ports:
      - 9090:9090
    volumes:
      - ./data:/prometheus
      - ./conf:/etc/prometheus
```
## Configuration
Basic prometheus config:

```yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  # Node Exporter Config
  - job_name: node_exporter
    scrape_interval: 5s
    static_configs:
      - targets: ['host:9100']

  # Job with custom CA
  - job_name: custom_ca
    static_configs:
      - targets: ['endpoint']
    tls_config:
      ca_file: '/ca_file.crt'

  # Job with Bearer Auth
  - job_name: bearer_auth
    scrape_interval: 120s
    static_configs:
      - targets: ['endpoint']
    bearer_token: 'BEARER_TOKEN'
```
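Before reloading Prometheus, the config can be validated with `promtool`, which ships with Prometheus (the path matches the `./conf` bind-mount in the compose file above):

```sh
promtool check config conf/prometheus.yml
```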
File diff suppressed because it is too large
---
obj: application
website: https://bitmagnet.io
---

# bitmagnet
A self-hosted BitTorrent indexer, DHT crawler, content classifier and torrent search engine with web UI, GraphQL API and Servarr stack integration.
## Docker Compose
|
||||
|
||||
```yml
|
||||
services:
|
||||
bitmagnet:
|
||||
image: ghcr.io/bitmagnet-io/bitmagnet:latest
|
||||
container_name: bitmagnet
|
||||
ports:
|
||||
# API and WebUI port:
|
||||
- "3333:3333"
|
||||
# BitTorrent ports:
|
||||
- "3334:3334/tcp"
|
||||
- "3334:3334/udp"
|
||||
restart: unless-stopped
|
||||
environment:
|
||||
- POSTGRES_HOST=postgres
|
||||
- POSTGRES_PASSWORD=postgres
|
||||
# - TMDB_API_KEY=your_api_key
|
||||
command:
|
||||
- worker
|
||||
- run
|
||||
- --keys=http_server
|
||||
- --keys=queue_server
|
||||
# disable the next line to run without DHT crawler
|
||||
- --keys=dht_crawler
|
||||
depends_on:
|
||||
postgres:
|
||||
condition: service_healthy
|
||||
|
||||
postgres:
|
||||
image: postgres:16-alpine
|
||||
container_name: bitmagnet-postgres
|
||||
volumes:
|
||||
- ./data/postgres:/var/lib/postgresql/data
|
||||
# ports:
|
||||
# - "5432:5432" Expose this port if you'd like to dig around in the database
|
||||
restart: unless-stopped
|
||||
environment:
|
||||
- POSTGRES_PASSWORD=postgres
|
||||
- POSTGRES_DB=bitmagnet
|
||||
- PGUSER=postgres
|
||||
shm_size: 1g
|
||||
healthcheck:
|
||||
test:
|
||||
- CMD-SHELL
|
||||
- pg_isready
|
||||
start_period: 20s
|
||||
interval: 10s
|
||||
```

After running `docker compose up -d` you should be able to access the web interface at http://localhost:3333. The DHT crawler should have started, and you should see items appear in the web UI within around a minute.

To run the bitmagnet CLI, use `docker compose run bitmagnet bitmagnet command...`

## Configuration

- `postgres.host`, `postgres.name`, `postgres.user`, `postgres.password` (defaults: `localhost`, `bitmagnet`, `postgres`, empty): Set these values to configure the connection to your Postgres database.
- `tmdb.api_key`: TMDB API key.
- `tmdb.enabled` (default: `true`): Specify `false` to disable the TMDB API integration.
- `dht_crawler.save_files_threshold` (default: `100`): Some torrents contain many thousands of files, which impacts performance and uses a lot of database disk space. This parameter sets a maximum limit for the number of files saved by the crawler with each torrent.
- `dht_crawler.save_pieces` (default: `false`): If true, the DHT crawler will save the pieces bytes from the torrent metadata. The pieces take up quite a lot of space, and aren't currently very useful, but they may be used by future features.
- `log.level` (default: `info`): Logging level.
- `log.json` (default: `false`): By default logs are output in a pretty format with colors; enable this flag if you'd prefer plain JSON.

To see a full list of available configuration options using the CLI, run:

```sh
bitmagnet config show
```

### Specifying configuration values
Configuration paths are delimited by dots. If you're specifying configuration in a YAML file, each dot represents a nesting level; for example, to configure `log.json`, `tmdb.api_key` and `http_server.cors.allowed_origins`:

```yml
log:
  json: true
tmdb:
  api_key: my-api-key
http_server:
  cors:
    allowed_origins:
      - https://example1.com
      - https://example2.com
```

This is not a suggested configuration file, it's just an example of how to specify configuration values.

To configure these same values with environment variables, upper-case the path and replace all dots with underscores, for example:

```sh
LOG_JSON=true \
TMDB_API_KEY=my-api-key \
HTTP_SERVER_CORS_ALLOWED_ORIGINS=https://example1.com,https://example2.com \
bitmagnet config show
```
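
The path-to-environment-variable mapping can be reproduced mechanically. A quick sketch of the rule (upper-case everything, replace dots with underscores) using standard shell tools; the config path is just an example:

```shell
# derive the environment variable name for a dotted config path
path="http_server.cors.allowed_origins"
var=$(echo "$path" | tr '[:lower:]' '[:upper:]' | tr '.' '_')
echo "$var"   # HTTP_SERVER_CORS_ALLOWED_ORIGINS
```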

### Configuration precedence
In order of precedence, configuration values will be read from:
- Environment variables
- `config.yml` in the current working directory
- `config.yml` in the XDG-compliant config location for the current user (for example, on macOS this is `~/Library/Application Support/bitmagnet/config.yml`)
- Default values

Environment variables can be used to configure simple scalar types (strings, numbers, booleans) and slice types (arrays). For more complex configuration types such as maps you'll have to use YAML configuration. bitmagnet will exit with an error if it's unable to parse a provided configuration value.

### VPN configuration
It's recommended that you run bitmagnet behind a VPN. If you're using Docker, `gluetun` is a good solution for this, although the networking settings can be tricky.

### Classifier
The classifier can be configured and customized to do things like:
- automatically delete torrents you don't want in your index
- add custom tags to torrents you're interested in
- customize the keywords and file extensions used for determining a torrent's content type
- specify completely custom logic to classify and perform other actions on torrents

#### Background
After a torrent is crawled or imported, some further processing must be done to gather metadata, have a guess at the torrent's contents and finally index it in the database, allowing it to be searched and displayed in the UI/API.

bitmagnet's classifier is powered by a Domain Specific Language. The aim of this is to provide a high level of customisability, along with transparency into the classification process, which will hopefully aid collaboration on improvements to the core classifier logic.

The classifier is declared in YAML format. The application includes a core classifier that can be configured, extended or completely replaced with a custom classifier. This page documents the required format.

#### Source precedence
bitmagnet will attempt to load classifier source code from all the following locations. Any discovered classifier source will be merged with other sources in the following order of precedence:

- the core classifier
- `classifier.yml` in the XDG-compliant config location for the current user (for example, on macOS this is `~/Library/Application Support/bitmagnet/classifier.yml`)
- `classifier.yml` in the current working directory
- Classifier configuration

Note that multiple sources will be merged, not replaced. For example, keywords added to the classifier configuration will be merged with the core keywords.

The merged classifier source can be viewed with the CLI command `bitmagnet classifier show`.

#### Schema
A JSON schema for the classifier is available; some editors and IDEs will be able to validate the structure of your classifier document by specifying the `$schema` attribute:

```yml
$schema: bitmagnet.io/schemas/classifier-0.1.json
```

The classifier schema can also be viewed by running the CLI command `bitmagnet classifier schema`.

The classifier declaration comprises the following components:
- **Workflows**
  A workflow is a list of actions that will be executed on all torrents when they are classified. When no custom configuration is provided, the default workflow will be run. To use a different workflow instead, specify the `classifier.workflow` configuration option with the name of your custom workflow.

- **Actions**
  An action is a piece of workflow to be executed. All actions either return an updated classification result or an error.
  For example, the following action will set the content type of the current torrent to audiobook:

```yml
set_content_type: audiobook
```

The following action will return an unmatched error:
```yml
unmatched
```

And the following action will delete the current torrent being classified (returning a delete error):
```yml
delete
```

These actions aren't much use on their own - we'd want to check that some conditions are satisfied before setting a content type or deleting a torrent, and for this we'd use the `if_else` action. For example, the following action will set the content type to audiobook if the torrent name contains audiobook-related keywords, and will otherwise return an unmatched error:
```yml
if_else:
  condition: "torrent.baseName.matches(keywords.audiobook)"
  if_action:
    set_content_type: audiobook
  else_action: unmatched
```

The following action will delete a torrent if its name matches the list of banned keywords:
```yml
if_else:
  condition: "torrent.baseName.matches(keywords.banned)"
  if_action: delete
```

Actions may return the following types of error:
- An unmatched error indicates that the current action did not match for the current torrent
- A delete error indicates that the torrent should be deleted
- An unhandled error may occur, for example if the TMDB API was unreachable

Whenever an error is returned, the current classification will be terminated.

Note that a workflow should never return an unmatched error. We expect to iterate through a series of checks corresponding to each content type. If the current torrent does not match the content type being checked, we'll proceed to the next check until we find a match; if no match can be found, the content type will be unknown. To facilitate this, we can use the `find_match` action.

The `find_match` action is a bit like a try/catch block in some programming languages; it will try to match a particular content type, and if an unmatched error is returned, it will catch the unmatched error and proceed to the next check. For example, the following action will attempt to classify a torrent as an audiobook, and then as an ebook. If both checks fail, the content type will be unknown:
```yml
find_match:
  # match audiobooks:
  - if_else:
      condition: "torrent.baseName.matches(keywords.audiobook)"
      if_action:
        set_content_type: audiobook
      else_action: unmatched
  # match ebooks:
  - if_else:
      condition: "torrent.files.map(f, f.extension in extensions.ebook ? f.size : - f.size).sum() > 0"
      if_action:
        set_content_type: ebook
      else_action: unmatched
```
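
The size-weighted conditions used above (`... ? f.size : - f.size).sum() > 0`) amount to checking that matching files account for more than half of a torrent's total size: matching files add their size, everything else subtracts it. A small shell sketch of the same arithmetic, with made-up filenames and sizes:

```shell
# sum +size for ebook-like extensions, -size for everything else;
# a positive balance means matching files dominate the torrent
balance=0
for entry in "book.epub:5000" "notes.pdf:3000" "cover.jpg:1000"; do
  name="${entry%%:*}"
  ext="${name##*.}"
  size="${entry##*:}"
  case "$ext" in
    epub|pdf|mobi) balance=$(( balance + size )) ;;
    *)             balance=$(( balance - size )) ;;
  esac
done
echo "$balance"   # 7000, positive => classify as ebook
```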

For a full list of available actions, please refer to the JSON schema.

#### Conditions
Conditions are used in conjunction with the `if_else` action, in order to execute an action if a particular condition is satisfied.

The conditions in the examples above use CEL (Common Expression Language) expressions.

##### The CEL environment
CEL is already a well-documented language, so this page won't go into detail about the CEL syntax. In the context of the bitmagnet classifier, the CEL environment exposes a number of variables:

- `torrent`: The current torrent being classified (protobuf type: `bitmagnet.Torrent`)
- `result`: The current classification result (protobuf type: `bitmagnet.Classification`)
- `keywords`: A map of strings to regular expressions, representing named lists of keywords
- `extensions`: A map of strings to string lists, representing named lists of extensions
- `contentType`: A map of strings to enum values representing content types (e.g. `contentType.movie`, `contentType.music`)
- `fileType`: A map of strings to enum values representing file types (e.g. `fileType.video`, `fileType.audio`)
- `flags`: A map of strings to the configured values of flags
- `kb`, `mb`, `gb`: Variables defined for convenience, equal to the number of bytes in a kilobyte, megabyte and gigabyte respectively

For more details on the protocol buffer types, please refer to the protobuf schema.

##### Boolean logic (`or`, `and` & `not`)
In addition to CEL expressions, conditions may be declared using the boolean logic operators `or`, `and` and `not`. For example, the following condition evaluates to true if either the torrent consists mostly of file extensions very commonly used for music (e.g. `flac`), OR if the torrent both has a name that includes music-related keywords, and consists mostly of audio files:

```yml
or:
  - "torrent.files.map(f, f.extension in extensions.music ? f.size : - f.size).sum() > 0"
  - and:
      - "torrent.baseName.matches(keywords.music)"
      - "torrent.files.map(f, f.fileType == fileType.audio ? f.size : - f.size).sum() > 0"
```

> Note that we could also have specified the above condition using just one CEL expression, but breaking up complex conditions like this is more readable.

#### Keywords
The classifier includes lists of keywords associated with different types of torrents. These aim to provide a simpler alternative to regular expressions, and the classifier will compile all keyword lists to regular expressions that can be used within CEL expressions. In order for a keyword to match, it must appear as an isolated token in the test string - that is, it must be either at the beginning or preceded by a non-word character, and either at the end or followed by a non-word character.

Reserved characters in the syntax are:

- parentheses `(` and `)` enclose a group
- `|` is an `OR` operator
- `*` is a wildcard operator
- `?` makes the previous character or group optional
- `+` specifies one or more of the previous character
- `#` specifies any number
- ` ` (space) specifies any non-word or non-number character

For example, to define some music- and audiobook-related keywords:
```yml
keywords:
  music: # define music-related keywords
    - music # all letters are case-insensitive, and must be defined in lowercase unless escaped
    - discography
    - album
    - \V.?\A # escaped letters are case-sensitive; matches "VA", "V.A" and "V.A.", but not "va"
    - various artists # matches "various artists" and "Various.Artists"
  audiobook: # define audiobook-related keywords
    - (audio)?books?
    - (un)?abridged
    - narrated
    - novels?
    - (auto)?biograph(y|ies) # matches "biography", "autobiographies" etc.
```

If you'd rather use plain old regular expressions, the CEL syntax supports that too, for example `torrent.baseName.matches("^myregex$")`.

#### Extensions
The classifier includes lists of file extensions associated with different types of content. For example, to identify torrents of type comic by their file extensions, the extensions are first declared:

```yml
extensions:
  comic:
    - cb7
    - cba
    - cbr
    - cbt
    - cbz
```

The extensions can now be used as part of a condition within an `if_else` action:
```yml
if_else:
  condition: "torrent.files.map(f, f.extension in extensions.comic ? f.size : - f.size).sum() > 0"
  if_action:
    set_content_type: comic
  else_action: unmatched
```

#### Flags
Flags can be used to configure workflows. In order to use a flag in a workflow, it must first be defined. For example, the core classifier defines the following flags that are used in the default workflow:
```yml
flag_definitions:
  tmdb_enabled: bool
  delete_content_types: content_type_list
  delete_xxx: bool
```

These flags can be referenced within CEL expressions, for example to delete adult content if the `delete_xxx` flag is set to true:
```yml
if_else:
  condition: "flags.delete_xxx && result.contentType == contentType.xxx"
  if_action: delete
```

#### Configuration
The classifier can be customized by providing a `classifier.yml` file in a supported location as described above. If you only want to make some minor modifications, it may be convenient to specify these using the main application configuration instead, by providing values in either `config.yml` or as environment variables. The application configuration exposes some but not all properties of the classifier.

For example, in your `config.yml` you could specify:
```yml
classifier:
  # specify a custom workflow to be used:
  workflow: custom
  # add to the core list of music keywords:
  keywords:
    music:
      - my-custom-music-keyword
  # add a file extension to the list of audiobook-related extensions:
  extensions:
    audiobook:
      - abc
  # auto-delete all comics
  flags:
    delete_content_types:
      - comics
```

Or as environment variables you could specify:
```shell
# disable the TMDB API integration, use a custom workflow,
# and auto-delete all adult content:
TMDB_ENABLED=false \
CLASSIFIER_WORKFLOW=custom \
CLASSIFIER_DELETE_XXX=true \
bitmagnet worker run --all
```

#### Validation
The classifier source is compiled on initial load, and all structural and syntax errors should be caught at compile time. If there are errors in your classifier source, bitmagnet should exit with an error message indicating the location of the problem.

#### Testing on individual torrents
You can test the classifier on an individual torrent or torrents using the `bitmagnet process` CLI command:
```shell
bitmagnet process --infoHash=aaaaaaaaaaaaaaaaaaaa --infoHash=bbbbbbbbbbbbbbbbbbbb
```

#### Reclassify all torrents
The classifier is being updated regularly, and to reclassify already-crawled torrents you'll need to run the CLI and queue them for reprocessing.

For context: after torrents are crawled or imported, they won't show up in the UI straight away. They must first be "processed" by the job queue. This involves a few steps:
- The classifier attempts to classify the torrent (determine its content type, and match it to a known piece of content)
- The search index for the torrent is built
- The torrent content record is saved to the database

The reprocess command will re-queue torrents to allow the latest updates to be applied to their content records.

To reprocess all torrents in your index, simply run `bitmagnet reprocess`. If you've indexed a lot of torrents, this will take a while, so there are a few options available to control exactly what gets reprocessed:
- `apisDisabled`: Disable API calls during classification. This makes the classifier run a lot faster, but disables identification with external services such as TMDB (metadata already gathered from external APIs is not lost).
- `contentType`: Only reprocess torrents of a certain content type. For example, `bitmagnet reprocess --contentType movie` will only reprocess movies. Multiple content types can be comma separated, and `null` refers to torrents of unknown content type.
- `orphans`: Only reprocess torrents that have no content record.
- `classifyMode`: This controls how already matched torrents are handled.
  - `default`: Only attempt to match previously unmatched torrents
  - `rematch`: Ignore any pre-existing match and always classify from scratch (a torrent is "matched" if it's associated with a specific piece of content from one of the API integrations, currently only TMDB)

#### Practical use cases and examples
##### Auto-delete specific content types
The default workflow provides a flag that allows for automatically deleting specific content types. For example, to delete all comic, software and xxx torrents:
```yml
flags:
  delete_content_types:
    - comic
    - software
    - xxx
```

Auto-deleting adult content has been one of the most requested features. For convenience, this is exposed as the configuration option `classifier.delete_xxx`, and can be specified with the environment variable `CLASSIFIER_DELETE_XXX=true`.

##### Auto-delete torrents containing specific keywords
Any torrents containing keywords in the banned list will be automatically deleted. This is primarily used for deleting CSAM content, but the list can be extended to auto-delete any other keywords:

```yml
keywords:
  banned:
    - my-hated-keyword
```

##### Disable the TMDB API integration
The `tmdb_enabled` flag can be used to disable the TMDB API integration:
```yml
flags:
  tmdb_enabled: false
```

For convenience, this is also exposed as the configuration option `tmdb.enabled`, and can be specified with the environment variable `TMDB_ENABLED=false`.

The `apis_enabled` flag has the same effect, disabling TMDB and any future API integrations:

```yml
flags:
  apis_enabled: false
```

API integrations can also be disabled for individual classifier runs, without disabling them globally, by passing the `--apisDisabled` flag to the reprocess command.

##### Extend the default workflow with custom logic
Custom workflows can be added in the workflows section of the classifier document. It is possible to extend the default workflow by using the `run_workflow` action within your custom workflow, for example:
```yml
workflows:
  custom:
    - <my custom action to be executed before the default workflow>
    - run_workflow: default
    - <my custom action to be executed after the default workflow>
```

A concrete example of this is adding tags to torrents based on custom criteria.

##### Use tags to create custom torrent categories
Is there a category of torrent you're interested in that isn't captured by one of the core content types? Torrent tags are intended to capture custom categories and content types.

Let's imagine you'd like to surface torrents containing interesting documents. The interesting documents have specific file extensions, and their filenames contain specific keywords. Let's create a custom action to tag torrents containing interesting documents:

```yml
# define file extensions for the documents we're interested in:
extensions:
  interesting_documents:
    - doc
    - docx
    - pdf
# define keywords that must be present in the filenames of the interesting documents:
keywords:
  interesting_documents:
    - interesting
    - fascinating
# extend the default workflow with a custom workflow to tag torrents containing interesting documents:
workflows:
  custom:
    # first run the default workflow:
    - run_workflow: default
    # then add the tag to any torrents containing interesting documents:
    - if_else:
        condition: "torrent.files.filter(f, f.extension in extensions.interesting_documents && f.basePath.matches(keywords.interesting_documents)).size() > 0"
        if_action:
          add_tag: interesting-documents
```

To specify that the custom workflow should be used, remember to specify the `classifier.workflow` configuration option, e.g. `CLASSIFIER_WORKFLOW=custom bitmagnet worker run --all`.
8
technology/applications/web/loki.md
Normal file

@ -0,0 +1,8 @@
---
obj: application
repo: https://github.com/grafana/loki
website: https://grafana.com/oss/loki
---

# Grafana Loki
#wip

@ -1,123 +0,0 @@
---
obj: application
repo: https://github.com/FiloSottile/age
source: https://age-encryption.org/v1
rev: 2025-01-09
---

# age
age is a simple, modern and secure file encryption tool, format, and Go library.

It features small explicit keys, no config options, and UNIX-style composability.

```sh
$ age-keygen -o key.txt
Public key: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
$ PUBLIC_KEY=$(age-keygen -y key.txt)
$ tar cvz ~/data | age -r $PUBLIC_KEY > data.tar.gz.age
$ age --decrypt -i key.txt data.tar.gz.age > data.tar.gz
```

## Usage
For the full documentation, read [the age(1) man page](https://filippo.io/age/age.1).

```
Usage:
    age [--encrypt] (-r RECIPIENT | -R PATH)... [--armor] [-o OUTPUT] [INPUT]
    age [--encrypt] --passphrase [--armor] [-o OUTPUT] [INPUT]
    age --decrypt [-i PATH]... [-o OUTPUT] [INPUT]

Options:
    -e, --encrypt               Encrypt the input to the output. Default if omitted.
    -d, --decrypt               Decrypt the input to the output.
    -o, --output OUTPUT         Write the result to the file at path OUTPUT.
    -a, --armor                 Encrypt to a PEM encoded format.
    -p, --passphrase            Encrypt with a passphrase.
    -r, --recipient RECIPIENT   Encrypt to the specified RECIPIENT. Can be repeated.
    -R, --recipients-file PATH  Encrypt to recipients listed at PATH. Can be repeated.
    -i, --identity PATH         Use the identity file at PATH. Can be repeated.

INPUT defaults to standard input, and OUTPUT defaults to standard output.
If OUTPUT exists, it will be overwritten.

RECIPIENT can be an age public key generated by age-keygen ("age1...")
or an SSH public key ("ssh-ed25519 AAAA...", "ssh-rsa AAAA...").

Recipient files contain one or more recipients, one per line. Empty lines
and lines starting with "#" are ignored as comments. "-" may be used to
read recipients from standard input.

Identity files contain one or more secret keys ("AGE-SECRET-KEY-1..."),
one per line, or an SSH key. Empty lines and lines starting with "#" are
ignored as comments. Passphrase encrypted age files can be used as
identity files. Multiple key files can be provided, and any unused ones
will be ignored. "-" may be used to read identities from standard input.

When --encrypt is specified explicitly, -i can also be used to encrypt to an
identity file symmetrically, instead or in addition to normal recipients.
```

### Multiple recipients
Files can be encrypted to multiple recipients by repeating `-r/--recipient`. Every recipient will be able to decrypt the file.

```
$ age -o example.jpg.age -r age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p \
    -r age1lggyhqrw2nlhcxprm67z43rta597azn8gknawjehu9d9dl0jq3yqqvfafg example.jpg
```

#### Recipient files
Multiple recipients can also be listed one per line in one or more files passed with the `-R/--recipients-file` flag.

```
$ cat recipients.txt
# Alice
age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
# Bob
age1lggyhqrw2nlhcxprm67z43rta597azn8gknawjehu9d9dl0jq3yqqvfafg
$ age -R recipients.txt example.jpg > example.jpg.age
```

If the argument to `-R` (or `-i`) is `-`, the file is read from standard input.

### Passphrases
Files can be encrypted with a passphrase by using `-p/--passphrase`. By default age will automatically generate a secure passphrase. Passphrase protected files are automatically detected at decrypt time.

```
$ age -p secrets.txt > secrets.txt.age
Enter passphrase (leave empty to autogenerate a secure one):
Using the autogenerated passphrase "release-response-step-brand-wrap-ankle-pair-unusual-sword-train".
$ age -d secrets.txt.age > secrets.txt
Enter passphrase:
```

### Passphrase-protected key files
If an identity file passed to `-i` is a passphrase encrypted age file, it will be automatically decrypted.

```
$ age-keygen | age -p > key.age
Public key: age1yhm4gctwfmrpz87tdslm550wrx6m79y9f2hdzt0lndjnehwj0ukqrjpyx5
Enter passphrase (leave empty to autogenerate a secure one):
Using the autogenerated passphrase "hip-roast-boring-snake-mention-east-wasp-honey-input-actress".
$ age -r age1yhm4gctwfmrpz87tdslm550wrx6m79y9f2hdzt0lndjnehwj0ukqrjpyx5 secrets.txt > secrets.txt.age
$ age -d -i key.age secrets.txt.age > secrets.txt
Enter passphrase for identity file "key.age":
```

Passphrase-protected identity files are not necessary for most use cases, where access to the encrypted identity file implies access to the whole system. However, they can be useful if the identity file is stored remotely.

### SSH keys
As a convenience feature, age also supports encrypting to `ssh-rsa` and `ssh-ed25519` SSH public keys, and decrypting with the respective private key file. (`ssh-agent` is not supported.)

```
$ age -R ~/.ssh/id_ed25519.pub example.jpg > example.jpg.age
$ age -d -i ~/.ssh/id_ed25519 example.jpg.age > example.jpg
```

Note that SSH key support employs more complex cryptography, and embeds a public key tag in the encrypted file, making it possible to track files that are encrypted to a specific public key.

#### Encrypting to a GitHub user
Combining SSH key support and `-R`, you can easily encrypt a file to the SSH keys listed on a GitHub profile.

```
$ curl https://github.com/benjojo.keys | age -R - example.jpg > example.jpg.age
```

@ -3,7 +3,7 @@ obj: application
wiki: https://en.wikipedia.org/wiki/Git
repo: https://github.com/git/git
website: https://git-scm.com
rev: 2024-12-04
rev: 2024-04-15
---

# Git
@ -286,19 +286,4 @@ git am --abort < patch

## .gitignore
A `.gitignore` file specifies intentionally untracked files that Git should ignore. Files already tracked by Git are not affected.
This file contains a pattern on each line; files matching these patterns are excluded from Git versioning.

## Git Hooks
Git hooks are custom scripts that run automatically in response to certain Git events or actions. These hooks are useful for automating tasks like code quality checks, running tests, enforcing commit message conventions, and more. Git hooks can be executed at different points in the Git workflow, such as before or after a commit, push, or merge.

Git hooks are stored in the `.git/hooks` directory of your repository. By default, this directory contains example scripts with the `.sample` extension. You can customize these scripts by removing the `.sample` extension and editing them as needed.

Hooks only apply to your local repository. If a hook script fails, it prevents the associated action.

### Common Git Hooks
- pre-commit
- prepare-commit-msg
- commit-msg
- post-commit
- post-checkout
- pre-rebase
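
As a sketch of how a hook is installed (the repository name, hook body and blocked marker are just examples), here is a minimal `pre-commit` hook that rejects commits whose staged changes still contain a `WIP` marker:

```shell
# set up a throwaway repository to install the hook into
git init -q hook-demo && cd hook-demo

# a pre-commit hook blocks the commit when it exits non-zero
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# reject commits whose staged diff contains a WIP marker
if git diff --cached | grep -q 'WIP'; then
  echo "commit blocked: staged changes contain WIP" >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
```

The hook must be executable; Git silently skips hook files without the executable bit set.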

@ -1,126 +0,0 @@
---
obj: concept
repo: https://github.com/ulid/spec
aliases: ["Universally Unique Lexicographically Sortable Identifier"]
---

# ULID (Universally Unique Lexicographically Sortable Identifier)
UUID can be suboptimal for many use-cases because:

- It isn't the most character efficient way of encoding 128 bits of randomness
- UUID v1/v2 is impractical in many environments, as it requires access to a unique, stable MAC address
- UUID v3/v5 requires a unique seed and produces randomly distributed IDs, which can cause fragmentation in many data structures
- UUID v4 provides no other information than randomness, which can cause fragmentation in many data structures

Instead, herein is proposed ULID:

```javascript
ulid() // 01ARZ3NDEKTSV4RRFFQ69G5FAV
```

- 128-bit compatibility with UUID
- 1.21e+24 unique ULIDs per millisecond
- Lexicographically sortable!
- Canonically encoded as a 26 character string, as opposed to the 36 character UUID
- Uses Crockford's base32 for better efficiency and readability (5 bits per character)
- Case insensitive
- No special characters (URL safe)
- Monotonic sort order (correctly detects and handles the same millisecond)

## Specification
Below is the current specification of ULID as implemented in [ulid/javascript](https://github.com/ulid/javascript).

*Note: the binary format has not been implemented in JavaScript as of yet.*

```
 01AN4Z07BY      79KA1307SR9X4MV3

|----------|    |----------------|
 Timestamp       Randomness
   48bits           80bits
```

### Components

**Timestamp**
- 48 bit integer
- UNIX-time in milliseconds
- Won't run out of space 'til the year 10889 AD.

**Randomness**
- 80 bits
- Cryptographically secure source of randomness, if possible
|
||||
|
||||
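Because the layout is fixed, the 48-bit timestamp can be recovered from the first 10 characters alone. A minimal standalone sketch in JavaScript (the `decodeTime` name mirrors the reference ulid library, but this implementation is only an illustration of the spec above):

```javascript
// Crockford's Base32 alphabet (no I, L, O, U), as specified below.
const ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

// Recover the UNIX timestamp in milliseconds from a ULID's
// 10-character time component.
function decodeTime(ulid) {
  if (ulid.length !== 26) throw new Error("malformed ulid");
  let time = 0;
  for (const c of ulid.substring(0, 10).toUpperCase()) {
    const v = ALPHABET.indexOf(c);
    if (v === -1) throw new Error("invalid character " + c);
    time = time * 32 + v; // 5 bits per character, stays below 2^53
  }
  return time;
}

console.log(new Date(decodeTime("01ARZ3NDEKTSV4RRFFQ69G5FAV")).toISOString());
```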
### Sorting
The left-most character must be sorted first, and the right-most character sorted last (lexical order). The default ASCII character set must be used. Within the same millisecond, sort order is not guaranteed.

### Canonical String Representation

```
ttttttttttrrrrrrrrrrrrrrrr

where
t is Timestamp (10 characters)
r is Randomness (16 characters)
```

#### Encoding
Crockford's Base32 is used as shown. This alphabet excludes the letters I, L, O, and U to avoid confusion and abuse.

```
0123456789ABCDEFGHJKMNPQRSTVWXYZ
```

### Monotonicity
When generating a ULID within the same millisecond, we can provide some guarantees regarding sort order. Namely, if the same millisecond is detected, the `random` component is incremented by 1 bit in the least significant bit position (with carrying). For example:

```javascript
import { monotonicFactory } from 'ulid'

const ulid = monotonicFactory()

// Assume that these calls occur within the same millisecond
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVRZ
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVS0
```

In the extremely unlikely event that you manage to generate more than $2^{80}$ ULIDs within the same millisecond, or cause the random component to overflow with less, the generation will fail.

```javascript
import { monotonicFactory } from 'ulid'

const ulid = monotonicFactory()

// Assume that these calls occur within the same millisecond
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVRY
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVRZ
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVS0
ulid() // 01BX5ZZKBKACTAV9WEVGEMMVS1
...
ulid() // 01BX5ZZKBKZZZZZZZZZZZZZZZX
ulid() // 01BX5ZZKBKZZZZZZZZZZZZZZZY
ulid() // 01BX5ZZKBKZZZZZZZZZZZZZZZZ
ulid() // throw new Error()!
```

#### Overflow Errors when Parsing Base32 Strings
Technically, a 26-character Base32 encoded string can contain 130 bits of information, whereas a ULID must only contain 128 bits. Therefore, the largest valid ULID encoded in Base32 is `7ZZZZZZZZZZZZZZZZZZZZZZZZZ`, which corresponds to an epoch time of `281474976710655` or $2^{48}-1$.

Any attempt to decode or encode a ULID larger than this should be rejected by all implementations, to prevent overflow bugs.

### Binary Layout and Byte Order
The components are encoded as 16 octets. Each component is encoded with the Most Significant Byte first (network byte order).

```
0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      32_bit_uint_time_high                    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     16_bit_uint_time_low      |       16_bit_uint_random      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       32_bit_uint_random                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                       32_bit_uint_random                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```
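The binary layout and the overflow rule can be exercised together in a short standalone sketch: decode the 26 Base32 characters into a 128-bit integer, reject anything larger than `7ZZZZZZZZZZZZZZZZZZZZZZZZZ`, and emit the 16 octets most-significant-byte first. This is an illustration of the spec, not the reference implementation:

```javascript
// Crockford's Base32 alphabet, as specified above.
const ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

// Convert a canonical 26-character ULID into its 16-octet binary form
// (network byte order). BigInt is used because 128 bits exceed
// Number's 53-bit safe integer range.
function ulidToBytes(ulid) {
  if (ulid.length !== 26) throw new Error("malformed ulid");
  let n = 0n;
  for (const c of ulid.toUpperCase()) {
    const v = ALPHABET.indexOf(c);
    if (v === -1) throw new Error("invalid character " + c);
    n = n * 32n + BigInt(v);
  }
  // Reject values above 128 bits (anything past 7ZZZ...Z overflows).
  if (n >> 128n !== 0n) throw new Error("ulid too large");
  const bytes = new Uint8Array(16);
  for (let i = 15; i >= 0; i--) {
    bytes[i] = Number(n & 0xffn);
    n >>= 8n;
  }
  return bytes;
}
```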
### Comments
Dart supports single-line comments, multi-line comments, and documentation comments.

#### Single-line comments
A single-line comment begins with `//`. Everything between `//` and the end of line is ignored by the Dart compiler.

```dart
void main() {
  // TODO: refactor into an AbstractLlamaGreetingFactory?
  print('Welcome to my Llama farm!');
}
```
Currently, all supported targets follow the assembly code syntax used by LLVM's internal assembler which usually corresponds to that of the GNU assembler (GAS). On x86, the `.intel_syntax noprefix` mode of GAS is used by default. On ARM, the `.syntax unified` mode is used. These targets impose an additional restriction on the assembly code: any assembler state (e.g. the current section which can be changed with `.section`) must be restored to its original value at the end of the asm string. Assembly code that does not conform to the GAS syntax will result in assembler-specific behavior. Further constraints on the directives used by inline assembly are indicated by Directives Support.
## [Crates](https://lib.rs)
### Filesystem
- [itertools](https://lib.rs/crates/itertools): Extra iterator adaptors, iterator methods, free functions, and macros
- [num_enum](https://lib.rs/crates/num_enum): Procedural macros to make inter-operation between primitives and enums easier
- [cached](https://crates.io/crates/cached): Caching Crate
- [tempfile](https://lib.rs/crates/tempfile): Temporary files and directories
- [temp-dir](https://lib.rs/crates/temp-dir): Simple temporary directory with cleanup
- [walkdir](https://crates.io/crates/walkdir): recursively scan directories
- [jwalk](https://lib.rs/crates/jwalk): Filesystem walk performed in parallel with streamed and sorted results
- [glob](https://lib.rs/crates/glob): Support for matching file paths against Unix shell style patterns
- [notify](https://lib.rs/crates/notify): filesystem watcher
- [camino](https://lib.rs/crates/camino): UTF-8 paths
- [sugar_path](https://lib.rs/crates/sugar_path): Sugar functions for manipulating paths
- [path-absolutize](https://lib.rs/crates/path-absolutize): A library for extending Path and PathBuf in order to get an absolute path and remove the containing dots
- [fs_extra](https://lib.rs/crates/fs_extra): Expanding std::fs and std::io. Recursively copy folders with information about process and much more.
- [vfs](https://lib.rs/crates/vfs): A virtual filesystem for Rust
- [fuser](https://lib.rs/crates/fuser): Filesystem in Userspace (FUSE) for Rust
- [directories](https://lib.rs/crates/directories): A tiny mid-level library that provides platform-specific standard locations of directories for config, cache and other data on Linux, Windows and macOS
- [xattr](https://lib.rs/crates/xattr): unix extended filesystem attributes
- [open](https://lib.rs/crates/open): Open a path or URL using the program configured on the system
- [infer](https://lib.rs/crates/infer): Small crate to infer file type based on magic number signatures

### Error Handling
- [anyhow](https://lib.rs/crates/anyhow): Flexible concrete Error type built on `std::error::Error`
- [thiserror](https://lib.rs/crates/thiserror): macros for creating error types
- [user-error](https://lib.rs/crates/user-error): Pretty printed errors for your CLI application
- [eyre](https://lib.rs/crates/eyre): Flexible concrete Error Reporting type built on `std::error::Error` with customizable Reports
- [color-eyre](https://lib.rs/crates/color-eyre): An error report handler for panics and `eyre::Reports` for colorful, consistent, and well formatted error reports for all kinds of errors

### Data Structures
- [hashbrown](https://lib.rs/crates/hashbrown): A Rust port of Google's SwissTable hash map
- [bitvec](https://lib.rs/crates/bitvec): Addresses memory by bits, for packed collections and bitfields
- [bitflags](https://lib.rs/crates/bitflags): A macro to generate structures which behave like bitflags
- [smallvec](https://lib.rs/crates/smallvec): 'Small vector' optimization: store up to a small number of items on the stack
- [ndarray](https://lib.rs/crates/ndarray): An n-dimensional array for general elements and for numerics. Lightweight array views and slicing; views support chunking and splitting.
- [zerovec](https://lib.rs/crates/zerovec): Zero-copy vector backed by a byte array
- [priority-queue](https://lib.rs/crates/priority-queue): A Priority Queue implemented as a heap with a function to efficiently change the priority of an item
- [histogram](https://lib.rs/crates/histogram): A collection of histogram data structures
- [fraction](https://lib.rs/crates/fraction): Lossless fractions and decimals; drop-in float replacement
- [ringbuffer](https://lib.rs/crates/ringbuffer): A fixed-size circular buffer
- [grid](https://lib.rs/crates/grid): Dynamic generic 2D data structure
- [datas](https://lib.rs/crates/datas): A library for data structures, algorithms and data analysis
- [trees](https://lib.rs/crates/trees): General purpose tree data structures
- [either](https://lib.rs/crates/either): The enum Either with variants Left and Right is a general purpose sum type with two cases
- [either_of](https://lib.rs/crates/either_of): Utilities for working with enumerated types that contain one of 2..n other types
- [petgraph](https://lib.rs/crates/petgraph): Graph data structure library. Provides graph types and graph algorithms.
- [hypergraph](https://lib.rs/crates/hypergraph): Hypergraph is a data structure library to create a directed hypergraph in which a hyperedge can join any number of vertices
- [gix](https://crates.io/crates/gix): Interact with git repositories just like git would
- [git2](https://lib.rs/crates/git2): Bindings to libgit2 for interoperating with git repositories

### Parser
- [nom](https://lib.rs/crates/nom): A byte-oriented, zero-copy, parser combinators library
- [pest](https://lib.rs/crates/pest): pest is a general purpose parser written in Rust
- [keepass](https://lib.rs/crates/keepass): KeePass .kdbx database file parser
- [html5ever](https://lib.rs/crates/html5ever): High-performance browser-grade HTML5 parser
- [comrak](https://lib.rs/crates/comrak): A 100% CommonMark-compatible GitHub Flavored Markdown parser and formatter
- [uriparse](https://lib.rs/crates/uriparse): A URI parser including relative references
- [markdown](https://lib.rs/crates/markdown): CommonMark compliant markdown parser in Rust with ASTs and extensions
- [evalexpr](https://lib.rs/crates/evalexpr): A powerful arithmetic and boolean expression evaluator
- [uuid](https://lib.rs/crates/uuid): A library to generate and parse UUIDs
- [semver](https://lib.rs/crates/semver): Parser and evaluator for Cargo's flavor of Semantic Versioning
- [url](https://lib.rs/crates/url): URL library for Rust, based on the WHATWG URL Standard
- [httparse](https://lib.rs/crates/httparse): A tiny, safe, speedy, zero-copy HTTP/1.x parser
- [syntect](https://lib.rs/crates/syntect): library for high quality syntax highlighting and code intelligence using Sublime Text's grammars

### Serialization
- [serde](https://lib.rs/crates/serde): A generic serialization/deserialization framework
- [serde_with](https://lib.rs/crates/serde_with): Custom de/serialization functions for Rust's serde
- [bincode](https://lib.rs/crates/bincode): A binary serialization / deserialization strategy for transforming structs into bytes and vice versa!
- [serde_json](https://lib.rs/crates/serde_json): A [JSON](../../../files/JSON.md) serialization file format
- [serde_jsonc](https://lib.rs/crates/serde_jsonc): A JSON serialization file format
- [serde_yaml](https://lib.rs/crates/serde_yaml): [YAML](../../../files/YAML.md) data format for Serde
- [bson](https://lib.rs/crates/bson): Encoding and decoding support for [BSON](../../../files/BSON.md) in Rust
- [toml](https://lib.rs/crates/toml): A native Rust encoder and decoder of [TOML](../../../files/TOML.md)-formatted files and streams.
- [gray_matter](https://lib.rs/crates/gray_matter): Smart front matter parser. An implementation of gray-matter in rust. Parses YAML, JSON, TOML and support for custom parsers.
- [schemars](https://lib.rs/crates/schemars): Generate JSON Schemas from Rust code
- [jsonschema](https://lib.rs/crates/jsonschema): JSON schema validation library
- [json-patch](https://lib.rs/crates/json-patch): RFC 6902, JavaScript Object Notation (JSON) Patch
- [rss](https://lib.rs/crates/rss): Library for serializing the RSS web content syndication format
- [postcard](https://lib.rs/crates/postcard): A no_std + serde compatible message library for Rust

### Encoding
- [bincode](https://lib.rs/crates/bincode): A binary serialization / deserialization strategy for transforming structs into bytes and vice versa!
- [serde](https://lib.rs/crates/serde): A generic serialization/deserialization framework
- [serde_json](https://lib.rs/crates/serde_json): A [JSON](../../../files/JSON.md) serialization file format
- [serde_yaml](https://lib.rs/crates/serde_yaml): [YAML](../../../files/YAML.md) data format for Serde
- [bson](https://lib.rs/crates/bson): Encoding and decoding support for [BSON](../../../files/BSON.md) in Rust
- [hex](https://lib.rs/crates/hex): Encoding and decoding data into/from hexadecimal representation
- [base62](https://lib.rs/crates/base62): A Base62 encoding/decoding library
- [toml](https://lib.rs/crates/toml): A native Rust encoder and decoder of [TOML](../../../files/TOML.md)-formatted files and streams.
- [base64](https://lib.rs/crates/base64): encodes and decodes [base64](../../../files/Base64.md) as bytes or utf8
- [base64-url](https://lib.rs/crates/base64-url): Base64 encode, decode, escape and unescape for URL applications
- [encoding_rs](https://lib.rs/crates/encoding_rs): A Gecko-oriented implementation of the Encoding Standard
- [data-encoding](https://lib.rs/crates/data-encoding): Efficient and customizable data-encoding functions like base64, base32, and hex
- [shell-quote](https://lib.rs/crates/shell-quote): A Rust library for shell-quoting strings, e.g. for interpolating into a Bash script.
- [urlencoding](https://lib.rs/crates/urlencoding): A Rust library for doing URL percentage encoding
- [bytesize](https://lib.rs/crates/bytesize): Semantic wrapper for byte count representations
- [hex-literal](https://lib.rs/crates/hex-literal): Macro for converting hexadecimal string to a byte array at compile time
- [byte-unit](https://lib.rs/crates/byte-unit): A library for interacting with units of bytes
- [bytes](https://lib.rs/crates/bytes): Types and traits for working with bytes

### Algorithms
- [rand](https://lib.rs/crates/rand): Random number generators and other randomness functionality
- [bonsai-bt](https://lib.rs/crates/bonsai-bt): Behaviour trees
- [pathfinding](https://lib.rs/crates/pathfinding): Pathfinding, flow, and graph algorithms
- [treediff](https://lib.rs/crates/treediff): Find the difference between arbitrary data structures
- [raft](https://lib.rs/crates/raft): The rust language implementation of Raft algorithm

### Crypto
- [rustls](https://lib.rs/crates/rustls): Rustls is a modern TLS library written in Rust
- [rustls-pemfile](https://lib.rs/crates/rustls-pemfile): Basic .pem file parser for keys and certificates
- [pem](https://lib.rs/crates/pem): Parse and encode PEM-encoded data
- [x509-parser](https://lib.rs/crates/x509-parser): Parser for the X.509 v3 format (RFC 5280 certificates)
- [openssl](https://lib.rs/crates/openssl): OpenSSL bindings
- [hkdf](https://lib.rs/crates/hkdf): HMAC-based Extract-and-Expand Key Derivation Function (HKDF)
- [ed25519-compact](https://lib.rs/crates/ed25519-compact): A small, self-contained, wasm-friendly Ed25519 implementation
- [snow](https://lib.rs/crates/snow): A pure-rust implementation of the Noise Protocol Framework
- [keyring](https://lib.rs/crates/keyring): Cross-platform library for managing passwords/credentials
- [scrypt](https://lib.rs/crates/scrypt): Scrypt password-based key derivation function
- [totp-rs](https://lib.rs/crates/totp-rs): RFC-compliant TOTP implementation with ease of use as a goal and additional QoL features
- [mnemonic](https://lib.rs/crates/mnemonic): Encode any data into a sequence of English words
- [jwt](https://lib.rs/crates/jwt): JSON Web Token library
- [secrets](https://lib.rs/crates/secrets): Protected-access memory for cryptographic secrets
- [redact](https://lib.rs/crates/redact): A simple library for keeping secrets out of logs
- [noise](https://lib.rs/crates/noise): Procedural noise generation library
- [ulid](https://lib.rs/crates/ulid): a Universally Unique Lexicographically Sortable Identifier implementation

#### Hashes
- [digest](https://lib.rs/crates/digest): Traits for cryptographic hash functions and message authentication codes
- [seahash](https://lib.rs/crates/seahash): A blazingly fast, portable hash function with proven statistical guarantees
- [highway](https://lib.rs/crates/highway): Native Rust port of Google's HighwayHash, which makes use of SIMD instructions for a fast and strong hash function
- [md5](https://lib.rs/crates/md5): The package provides the MD5 hash function
- [crc32c](https://lib.rs/crates/crc32c): Safe implementation for hardware accelerated CRC32C instructions with software fallback
- [blake3](https://lib.rs/crates/blake3): the BLAKE3 hash function
- [siphasher](https://lib.rs/crates/siphasher): SipHash-2-4, SipHash-1-3 and 128-bit variants in pure Rust
- [bcrypt](https://lib.rs/crates/bcrypt): Easily hash and verify passwords using bcrypt
- [sha1](https://lib.rs/crates/sha1): SHA-1 hash function
- [sha2](https://lib.rs/crates/sha2): Pure Rust implementation of the SHA-2 hash function family including SHA-224, SHA-256, SHA-384, and SHA-512
- [sha3](https://lib.rs/crates/sha3): Pure Rust implementation of SHA-3, a family of Keccak-based hash functions including the SHAKE family of eXtendable-Output Functions (XOFs), as well as the accelerated variant TurboSHAKE

### Logging
- [log](https://lib.rs/crates/log): A lightweight logging facade for Rust
- [env_logger](https://lib.rs/crates/env_logger): A logging implementation for `log` which is configured via an environment variable
- [prometheus](https://lib.rs/crates/prometheus): Prometheus instrumentation library for Rust applications
- [opentelemetry](https://lib.rs/crates/opentelemetry): OpenTelemetry API for Rust
- [sentry-core](https://lib.rs/crates/sentry-core): Core sentry library used for instrumentation and integration development
- [logging_timer](https://lib.rs/crates/logging_timer): Simple timers that log the elapsed time when dropped
- [dioxus-logger](https://lib.rs/crates/dioxus-logger): A logging utility to provide a standard interface whether you're targeting web, desktop, fullstack, and more in Dioxus
- [tracing](https://lib.rs/crates/tracing): advanced logger
- [tracing-appender](https://lib.rs/crates/tracing-appender): Provides utilities for file appenders and making non-blocking writers
- [tracing-loki](https://lib.rs/crates/tracing-loki): A tracing layer for shipping logs to Grafana Loki

### Mail
- [lettre](https://lib.rs/crates/lettre): [Email](../../../internet/eMail.md) client

### Templates
- [maud](https://lib.rs/crates/maud): Compile-time [HTML](../../../internet/HTML.md) templates
- [tera](https://lib.rs/crates/tera): Template engine based on [Jinja](../../../tools/Jinja.md) templates
- [subst](https://lib.rs/crates/subst): shell-like variable substitution
- [minijinja](https://lib.rs/crates/minijinja): a powerful template engine for Rust with minimal dependencies
- [handlebars](https://lib.rs/crates/handlebars): Handlebars templating implemented in Rust

### Media
#### Images
- [image](https://lib.rs/crates/image): Imaging library. Provides basic image processing and encoders/decoders for common image formats.
- [rgb](https://lib.rs/crates/rgb): Pixel types for Rust
- [qrcode](https://lib.rs/crates/qrcode): QR code encoder in Rust
- [gif](https://lib.rs/crates/gif): GIF de- and encoder
- [opencv](https://lib.rs/crates/opencv): Rust bindings for OpenCV
- [imgref](https://lib.rs/crates/imgref): A basic 2-dimensional slice for safe and convenient handling of pixel buffers with width, height & stride
- [palette](https://lib.rs/crates/palette): Convert and manage colors with a focus on correctness, flexibility and ease of use
- [imageproc](https://lib.rs/crates/imageproc): Image processing operations
- [resvg](https://lib.rs/crates/resvg): An SVG rendering library
- [png](https://lib.rs/crates/png): PNG decoding and encoding library in pure Rust
- [webp](https://lib.rs/crates/webp): WebP conversion library
- [image_hasher](https://lib.rs/crates/image_hasher): A simple library that provides perceptual hashing and difference calculation for images
- [dify](https://lib.rs/crates/dify): A fast pixel-by-pixel image comparison tool in Rust
- [qoi](https://lib.rs/crates/qoi): VERY fast encoder/decoder for QOI (Quite Okay Image) format
- [auto-palette](https://lib.rs/crates/auto-palette): 🎨 A Rust library that extracts prominent color palettes from images automatically
- [blockhash](https://lib.rs/crates/blockhash): A perceptual hashing algorithm for detecting similar images

#### Video
- [ffmpeg-next](https://lib.rs/crates/ffmpeg-next): Safe FFmpeg wrapper
- [video-rs](https://lib.rs/crates/video-rs): High-level video toolkit based on ffmpeg
- [ffprobe](https://lib.rs/crates/ffprobe): Typed wrapper for the ffprobe CLI

#### Audio
- [symphonia](https://lib.rs/crates/symphonia): Pure Rust media container and audio decoding library
- [hound](https://lib.rs/crates/hound): A wav encoding and decoding library
- [id3](https://lib.rs/crates/id3): A library for reading and writing ID3 metadata
- [metaflac](https://lib.rs/crates/metaflac): A library for reading and writing FLAC metadata
- [bliss-audio](https://lib.rs/crates/bliss-audio): A song analysis library for making playlists

### 3D
- [glam](https://lib.rs/crates/glam): A simple and fast 3D math library for games and graphics
- [tobj](https://lib.rs/crates/tobj): A lightweight OBJ loader in the spirit of tinyobjloader
- [obj-rs](https://lib.rs/crates/obj-rs): Wavefront obj parser for Rust. It handles both 'obj' and 'mtl' formats.

### CLI
- [argh](https://lib.rs/crates/argh): Derive-based argument parser optimized for code size
- [clap](https://lib.rs/crates/clap): A simple to use, efficient, and full-featured Command Line Argument Parser
- [yansi](https://lib.rs/crates/yansi): A dead simple ANSI terminal color painting library
- [owo-colors](https://lib.rs/crates/owo-colors): Zero-allocation terminal colors that'll make people go owo
- [named-colour](https://lib.rs/crates/named-colour): named-colour provides Hex Codes for popular colour names
- [colored](https://lib.rs/crates/colored): The most simple way to add colors in your terminal
- [crossterm](https://lib.rs/crates/crossterm): A crossplatform terminal library for manipulating terminals
- [trauma](https://lib.rs/crates/trauma): Simplify and prettify HTTP downloads
- [comfy-table](https://lib.rs/crates/comfy-table): An easy to use library for building beautiful tables with automatic content wrapping
- [tabled](https://lib.rs/crates/tabled): An easy to use library for pretty print tables of Rust structs and enums
- [tabular](https://lib.rs/crates/tabular): Plain text tables, aligned automatically
- [rustyline](https://lib.rs/crates/rustyline): Rustyline, a readline implementation based on Antirez's Linenoise
- [rpassword](https://lib.rs/crates/rpassword): Read passwords in console applications
- [inquire](https://lib.rs/crates/inquire): inquire is a library for building interactive prompts on terminals
- [indicatif](https://lib.rs/crates/indicatif): A progress bar and cli reporting library for Rust
- [spinners](https://lib.rs/crates/spinners): Elegant terminal spinners for Rust
- [is-terminal](https://lib.rs/crates/is-terminal): Test whether a given stream is a terminal
- [bishop](https://lib.rs/crates/bishop): Library for visualizing keys and hashes using OpenSSH's Drunken Bishop algorithm
- [termimad](https://lib.rs/crates/termimad): Markdown Renderer for the Terminal
- [rust-script](https://lib.rs/crates/rust-script): Command-line tool to run Rust "scripts" which can make use of crates
- [sysinfo](https://lib.rs/crates/sysinfo): Library to get system information such as processes, CPUs, disks, components and networks
- [which](https://lib.rs/crates/which): A Rust equivalent of Unix command "which". Locate installed executable in cross platforms.
- [ctrlc](https://lib.rs/crates/ctrlc): Easy Ctrl-C handler for Rust projects
- [subprocess](https://lib.rs/crates/subprocess): Execution of child processes and pipelines, inspired by Python's subprocess module, with Rust-specific extensions
- [cmd_lib](https://lib.rs/crates/cmd_lib): Common rust commandline macros and utils, to write shell script like tasks easily

### Compression
- [flate2](https://lib.rs/crates/flate2): DEFLATE compression and decompression exposed as Read/BufRead/Write streams. Supports miniz_oxide and multiple zlib implementations. Supports zlib, gzip, and raw deflate streams.
- [tar](https://lib.rs/crates/tar): A Rust implementation of a [TAR](../../../applications/cli/compression/tar.md) file reader and writer.
- [zstd](https://lib.rs/crates/zstd): Binding for the [zstd compression](../../../files/Zstd%20Compression.md) library
- [unrar](https://lib.rs/crates/unrar): list and extract RAR archives
- [zip](https://lib.rs/crates/zip): Library to support the reading and writing of zip files
- [brotli](https://lib.rs/crates/brotli): A brotli compressor and decompressor
- [huffman-compress2](https://lib.rs/crates/huffman-compress2): Huffman compression given a probability distribution over arbitrary symbols
- [arithmetic-coding](https://lib.rs/crates/arithmetic-coding): fast and flexible arithmetic coding library

### Cache
- [lru](https://lib.rs/crates/lru): A LRU cache implementation
- [moka](https://lib.rs/crates/moka): A fast and concurrent cache library inspired by Java Caffeine
- [ustr](https://lib.rs/crates/ustr): Fast, FFI-friendly string interning
- [cacache](https://lib.rs/crates/cacache): Content-addressable, key-value, high-performance, on-disk cache
- [cached](https://crates.io/crates/cached): Caching Crate
- [memoize](https://lib.rs/crates/memoize): Attribute macro for auto-memoizing functions with somewhat-simple signatures
- [internment](https://lib.rs/crates/internment): Easy interning of data
- [http-cache-semantics](https://lib.rs/crates/http-cache-semantics): RFC 7234. Parses HTTP headers to correctly compute cacheability of responses, even in complex cases
- [assets_manager](https://lib.rs/crates/assets_manager): Conveniently load, cache, and reload external resources

### Databases
- [rusqlite](https://lib.rs/crates/rusqlite): Ergonomic wrapper for [SQLite](../SQLite.md)
- [rocksdb](https://lib.rs/crates/rocksdb): embedded database
- [uuid](https://lib.rs/crates/uuid): UUID Generation
- [polars](https://lib.rs/crates/polars): Dataframes computation
- [surrealdb](https://crates.io/crates/surrealdb): A scalable, distributed, collaborative, document-graph database, for the realtime web
- [sql-builder](https://lib.rs/crates/sql-builder): Simple SQL code generator
- [pgvector](https://lib.rs/crates/pgvector): pgvector support for Rust
- [sea-orm](https://lib.rs/crates/sea-orm): 🐚 An async & dynamic ORM for Rust
- [sled](https://lib.rs/crates/sled): Lightweight high-performance pure-rust transactional embedded database

### Date and Time
- [chrono](https://lib.rs/crates/chrono): Date and time library for Rust
- [chrono-tz](https://lib.rs/crates/chrono-tz): TimeZone implementations for chrono from the IANA database
- [humantime](https://lib.rs/crates/humantime): A parser and formatter for `std::time::{Duration, SystemTime}`
- [duration-str](https://lib.rs/crates/duration-str): duration string parser
- [cron](https://lib.rs/crates/cron): A cron expression parser and schedule explorer
- [dateparser](https://lib.rs/crates/dateparser): Parse dates in string formats that are commonly used
- [icalendar](https://lib.rs/crates/icalendar): Strongly typed iCalendar builder and parser

### Network
- [tower](https://lib.rs/crates/tower): Tower is a library of modular and reusable components for building robust clients and servers
- [tungstenite](https://lib.rs/crates/tungstenite): Lightweight stream-based WebSocket implementation
- [tokio-websockets](https://lib.rs/crates/tokio-websockets): High performance, strict, tokio-util based WebSockets implementation
- [message-io](https://lib.rs/crates/message-io): Fast and easy-to-use event-driven network library
- [ipnet](https://lib.rs/crates/ipnet): Provides types and useful methods for working with IPv4 and IPv6 network addresses
- [object_store](https://lib.rs/crates/object_store): A generic object store interface for uniformly interacting with AWS S3, Google Cloud Storage, Azure Blob Storage and local files
- [matchit](https://lib.rs/crates/matchit): A high performance, zero-copy URL router
- [tun](https://lib.rs/crates/tun): TUN device creation and handling
- [quiche](https://lib.rs/crates/quiche): 🥧 Savoury implementation of the QUIC transport protocol and HTTP/3
- [arti-client](https://lib.rs/crates/arti-client): Library for connecting to the Tor network as an anonymous client
- [etherparse](https://lib.rs/crates/etherparse): A library for parsing & writing a bunch of packet based protocols (EthernetII, IPv4, IPv6, UDP, TCP ...)
- [ldap3](https://lib.rs/crates/ldap3): Pure-Rust LDAP Client
- [hyperlocal](https://lib.rs/crates/hyperlocal): Hyper bindings for Unix domain sockets
- [openssh-sftp-client](https://lib.rs/crates/openssh-sftp-client): Highlevel API used to communicate with openssh sftp server
- [swarm-discovery](https://lib.rs/crates/swarm-discovery): Discovery service for IP-based swarms
- [libmdns](https://lib.rs/crates/libmdns): mDNS Responder library for building discoverable LAN services in Rust
- [networkmanager](https://lib.rs/crates/networkmanager): Bindings for the Linux NetworkManager
- [renet](https://lib.rs/crates/renet): Server/Client network library for multiplayer games with authentication and connection management
- [dhcproto](https://lib.rs/crates/dhcproto): A DHCP parser and encoder for DHCPv4/DHCPv6. dhcproto aims to be a functionally complete DHCP implementation.
- [irc](https://lib.rs/crates/irc): the irc crate – usable, async IRC for Rust
- [ssh2](https://lib.rs/crates/ssh2): Bindings to libssh2 for interacting with SSH servers and executing remote commands, forwarding local ports, etc
- [openssh](https://lib.rs/crates/openssh): SSH through OpenSSH
- [amqprs](https://lib.rs/crates/amqprs): AMQP 0-9-1 client implementation for RabbitMQ
- [wyoming](https://lib.rs/crates/wyoming): Abstractions over the Wyoming protocol

### HTTP
- [hyper](https://lib.rs/crates/hyper): A fast and correct [HTTP](../../../internet/HTTP.md) library
- [reqwest](https://lib.rs/crates/reqwest): higher level [HTTP](../../../internet/HTTP.md) client library
- [ureq](https://lib.rs/crates/ureq): Simple, safe HTTP client
- [curl](https://lib.rs/crates/curl): Rust bindings to libcurl for making HTTP requests
- [actix-web](https://lib.rs/crates/actix-web): Actix Web is a powerful, pragmatic, and extremely fast web framework for Rust
- [rocket](https://lib.rs/crates/rocket): web server framework for Rust
- [thirtyfour](https://lib.rs/crates/thirtyfour): Thirtyfour is a Selenium / WebDriver library for Rust, for automated website UI testing
- [http-types](https://lib.rs/crates/http-types): Common types for HTTP operations
- [headers](https://lib.rs/crates/headers): typed HTTP headers
- [cookie](https://lib.rs/crates/cookie): HTTP cookie parsing and cookie jar management. Supports signed and private (encrypted, authenticated) jars.
- [http](https://lib.rs/crates/http): A set of types for representing HTTP requests and responses
- [h2](https://lib.rs/crates/h2): An HTTP/2 client and server
- [h3](https://lib.rs/crates/h3): An async HTTP/3 implementation
- [mime](https://lib.rs/crates/mime): Strongly Typed Mimes
- [scraper](https://lib.rs/crates/scraper): HTML parsing and querying with CSS selectors
- [selectors](https://lib.rs/crates/selectors): CSS Selectors matching for Rust
- [spider](https://lib.rs/crates/spider): A web crawler and scraper, building blocks for data curation workloads
- [htmlize](https://lib.rs/crates/htmlize): Encode and decode HTML entities in UTF-8 according to the standard
- [ammonia](https://lib.rs/crates/ammonia): HTML Sanitization
- [rookie](https://lib.rs/crates/rookie): Load cookies from your web browsers
- [tonic](https://lib.rs/crates/tonic): A gRPC over HTTP/2 implementation focused on high performance, interoperability, and flexibility
- [web-sys](https://lib.rs/crates/web-sys): Bindings for all Web APIs, a procedurally generated crate from WebIDL
- [jsonwebtoken](https://lib.rs/crates/jsonwebtoken): Create and decode JWTs in a strongly typed way
- [http-range-header](https://lib.rs/crates/http-range-header): No-dep range header parser

#### Axum
- [axum](https://lib.rs/crates/axum): Web framework that focuses on ergonomics and modularity
- [axum-valid](https://crates.io/crates/axum-valid): Provides validation extractors for your Axum application, allowing you to validate data using validator, garde, validify or all of them.
- [axum-prometheus](https://crates.io/crates/axum-prometheus): A tower middleware to collect and export HTTP metrics for Axum
- [axum-htmx](https://crates.io/crates/axum-htmx): A set of htmx extractors, responders, and request guards for axum.
- [axum_session](https://crates.io/crates/axum_session): 📝 Session management layer for axum that supports HTTP and Rest.
- [axum_csrf](https://crates.io/crates/axum_csrf): Library to Provide a CSRF (Cross-Site Request Forgery) protection layer.

### Text
- [regex](https://lib.rs/crates/regex): An implementation of [regular expressions](../../../tools/Regex.md) for Rust. This implementation uses finite automata and guarantees linear time matching on all inputs.
- [fancy-regex](https://lib.rs/crates/fancy-regex): An implementation of regexes, supporting a relatively rich set of features, including backreferences and look-around
- [pretty_regex](https://lib.rs/crates/pretty_regex): 🧶 Elegant and readable way of writing regular expressions
- [comfy-table](https://lib.rs/crates/comfy-table): An easy to use library for building beautiful tables with automatic content wrapping
- [similar](https://lib.rs/crates/similar): A diff library for Rust
- [dissimilar](https://lib.rs/crates/dissimilar): Diff library with semantic cleanup, based on Google's diff-match-patch
- [strsim](https://lib.rs/crates/strsim): Implementations of string similarity metrics. Includes Hamming, Levenshtein, OSA, Damerau-Levenshtein, Jaro, Jaro-Winkler, and Sørensen-Dice.
- [enquote](https://lib.rs/crates/enquote): Quotes and unquotes strings
- [emojis](https://lib.rs/crates/emojis): ✨ Lookup emoji in *O(1)* time, access metadata and GitHub shortcodes, iterate over all emoji, and more!
- [text-splitter](https://lib.rs/crates/text-splitter): Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and tokens, and is callable from Rust and Python.
- [wildcard](https://lib.rs/crates/wildcard): Wildcard matching
- [wildmatch](https://lib.rs/crates/wildmatch): Simple string matching with single- and multi-character wildcard operator
- [textwrap](https://lib.rs/crates/textwrap): Library for word wrapping, indenting, and dedenting strings. Has optional support for Unicode and emojis as well as machine hyphenation.
- [pad](https://lib.rs/crates/pad): Library for padding strings at runtime
- [const-str](https://lib.rs/crates/const-str): compile-time string operations
- [const_format](https://lib.rs/crates/const_format): Compile-time string formatting
- [convert_case](https://lib.rs/crates/convert_case): Convert strings into any case
- [heck](https://lib.rs/crates/heck): heck is a case conversion library
- [html2md](https://lib.rs/crates/html2md): Library to convert simple html documents into markdown

### AI
- [safetensors](https://lib.rs/crates/safetensors): Provides functions to read and write safetensors which aim to be safer than their PyTorch counterpart.
- [burn](https://lib.rs/crates/burn): Flexible and Comprehensive Deep Learning Framework in Rust
- [ollama-rs](https://lib.rs/crates/ollama-rs): A Rust library for interacting with the Ollama API
- [linfa](https://lib.rs/crates/linfa): A Machine Learning framework for Rust
- [neurons](https://lib.rs/crates/neurons): Neural networks from scratch, in Rust

### Concurrency
- [parking_lot](https://lib.rs/crates/parking_lot): More compact and efficient implementations of the standard synchronization primitives
- [crossbeam](https://lib.rs/crates/crossbeam): Tools for concurrent programming
- [rayon](https://lib.rs/crates/rayon): Simple work-stealing parallelism for Rust
- [dashmap](https://lib.rs/crates/dashmap): fast hashmap
- [spin](https://lib.rs/crates/spin): Spin-based synchronization primitives
- [flume](https://lib.rs/crates/flume): A blazingly fast multi-producer channel
- [state](https://lib.rs/crates/state): A library for safe and effortless global and thread-local state management
- [atomic](https://lib.rs/crates/atomic): Generic `Atomic<T>` wrapper type
- [yaque](https://lib.rs/crates/yaque): Yaque is yet another disk-backed persistent queue for Rust
- [kanal](https://lib.rs/crates/kanal): The fast sync and async channel that Rust deserves

### Memory Management
- [jemallocator](https://lib.rs/crates/jemallocator): jemalloc allocator
- [memmap2](https://lib.rs/crates/memmap2): Map something to memory
- [sharded-slab](https://lib.rs/crates/sharded-slab): lock free concurrent slab allocation
- [heapless](https://lib.rs/crates/heapless): static friendly data structures without heap allocation
- [bumpalo](https://lib.rs/crates/bumpalo): bump allocation arena
- [singlyton](https://lib.rs/crates/singlyton): [Singleton](../patterns/creational/Singleton%20Pattern.md) for Rust
- [pipe](https://lib.rs/crates/pipe): Synchronous Read/Write memory pipe
- [memory_storage](https://lib.rs/crates/memory_storage): Vec like data structure with constant index
- [effective-limits](https://lib.rs/crates/effective-limits): Estimate effective resource limits for a process
- [iter-chunks](https://lib.rs/crates/iter-chunks): Extend Iterator with chunks
- [shared_vector](https://lib.rs/crates/shared_vector): Reference counted vector data structure
- [census](https://lib.rs/crates/census): Keeps an inventory of living objects
- [static_cell](https://lib.rs/crates/static_cell): Statically allocated, initialized at runtime cell
- [arcstr](https://lib.rs/crates/arcstr): A better reference-counted string type, with zero-cost (allocation-free) support for string literals, and reference counted substrings
- [bytebuffer](https://lib.rs/crates/bytebuffer): A bytebuffer for networking and binary protocols

### Science
- [syunit](https://lib.rs/crates/syunit): SI Units
- [uom](https://lib.rs/crates/uom): Units of measurement
- [measurements](https://lib.rs/crates/measurements): Handle metric, imperial, and other measurements with ease! Types: Length, Temperature, Weight, Volume, Pressure
- [t4t](https://lib.rs/crates/t4t): game theory toolbox

### Hardware / Embedded
- [virt](https://lib.rs/crates/virt): Rust bindings to the libvirt C library
- [qapi](https://lib.rs/crates/qapi): QEMU QMP and Guest Agent API
- [bootloader](https://lib.rs/crates/bootloader): An experimental x86_64 bootloader that works on both BIOS and UEFI systems
- [embedded-graphics](https://lib.rs/crates/embedded-graphics): Embedded graphics library for small hardware displays
- [riscv](https://lib.rs/crates/riscv): Low level access to RISC-V processors
- [aarch64-cpu](https://lib.rs/crates/aarch64-cpu): Low level access to processors using the AArch64 execution state
- [uefi](https://lib.rs/crates/uefi): safe UEFI wrapper
- [elf](https://lib.rs/crates/elf): A pure-rust library for parsing ELF files
- [smoltcp](https://lib.rs/crates/smoltcp): A TCP/IP stack designed for bare-metal, real-time systems without a heap
- [fatfs](https://lib.rs/crates/fatfs): FAT filesystem library

### Metrics
- [criterion2](https://lib.rs/crates/criterion2): Statistics-driven micro-benchmarking library
- [inferno](https://lib.rs/crates/inferno): Rust port of the FlameGraph performance profiling tool suite
- [divan](https://lib.rs/crates/divan): Statistically-comfy benchmarking library

### Testing
- [test-log](https://lib.rs/crates/test-log): A replacement of the `#[test]` attribute that initializes logging and/or tracing infrastructure before running tests
- [googletest](https://lib.rs/crates/googletest): A rich assertion and matcher library inspired by GoogleTest for C++
- [predicates](https://lib.rs/crates/predicates): An implementation of boolean-valued predicate functions
- [validator](https://lib.rs/crates/validator): Common validation functions (email, url, length, …) and trait - to be used with validator_derive
- [garde](https://lib.rs/crates/garde): Validation library
- [fake](https://lib.rs/crates/fake): An easy to use library and command line for generating fake data like name, number, address, lorem, dates, etc
- [static_assertions](https://lib.rs/crates/static_assertions): Compile-time assertions to ensure that invariants are met

### i18n
- [iso_currency](https://lib.rs/crates/iso_currency): ISO 4217 currency codes
- [iso_country](https://lib.rs/crates/iso_country): ISO3166-1 countries
- [sys-locale](https://lib.rs/crates/sys-locale): Small and lightweight library to obtain the active system locale

### Async
- [tokio](https://lib.rs/crates/tokio): An event-driven, non-blocking I/O platform for writing asynchronous I/O backed applications
- [futures](https://lib.rs/crates/futures): An implementation of futures and streams featuring zero allocations, composability, and iterator-like interfaces
- [mio](https://lib.rs/crates/mio): Lightweight non-blocking I/O
- [deadpool](https://lib.rs/crates/deadpool): Dead simple async pool
- [blocking](https://lib.rs/crates/blocking): A thread pool for isolating blocking I/O in async programs
- [pollster](https://lib.rs/crates/pollster): Synchronously block the thread until a future completes
- [smol](https://lib.rs/crates/smol): A small and fast async runtime
- [async-stream](https://lib.rs/crates/async-stream): Asynchronous streams using async & await notation
- [async-trait](https://lib.rs/crates/async-trait): Type erasure for async trait methods
- [once_cell](https://lib.rs/crates/once_cell): Lazy values

### Macros
- [proc-macro2](https://lib.rs/crates/proc-macro2): A substitute implementation of the compiler’s proc_macro API to decouple token-based libraries from the procedural macro use case
- [syn](https://lib.rs/crates/syn): Parse Rust syntax into AST
- [quote](https://lib.rs/crates/quote): Turn Rust syntax into TokenStream
- [paste](https://lib.rs/crates/paste): Concat Rust idents

### Build Tools
- [flamegraph](https://lib.rs/crates/flamegraph): A simple cargo subcommand for generating flamegraphs, using inferno under the hood
- [cargo-hack](https://lib.rs/crates/cargo-hack): Cargo subcommand to provide various options useful for testing and continuous integration
- [cargo-outdated](https://lib.rs/crates/cargo-outdated): Cargo subcommand for displaying when dependencies are out of date
- [cargo-binstall](https://lib.rs/crates/cargo-binstall): Binary installation for rust projects
- [cargo-cache](https://lib.rs/crates/cargo-cache): Manage cargo cache, show sizes and remove directories selectively
- [cargo-watch](https://lib.rs/crates/cargo-watch): Watches over your Cargo project’s source
- [cargo-expand](https://lib.rs/crates/cargo-expand): Wrapper around `rustc -Zunpretty=expanded`. Shows the result of macro expansion and `#[derive]` expansion.
- [cargo-audit](https://lib.rs/crates/cargo-audit): Audit Cargo.lock for crates with security vulnerabilities
- [cargo-aur](https://lib.rs/crates/cargo-aur): Prepare Rust projects to be released on the Arch Linux User Repository
- [cargo-bom](https://lib.rs/crates/cargo-bom): Bill of Materials for Rust Crates
- [cc](https://lib.rs/crates/cc): A build-time dependency for Cargo build scripts to assist in invoking the native C compiler to compile native C code into a static archive to be linked into Rust code
- [cmake](https://lib.rs/crates/cmake): A build dependency for running cmake to build a native library
- [cross](https://lib.rs/crates/cross): Zero setup cross compilation and cross testing
- [wasm-bindgen](https://lib.rs/crates/wasm-bindgen): Easy support for interacting between JS and Rust

### Math
- [num](https://lib.rs/crates/num): A collection of numeric types and traits for Rust, including bigint, complex, rational, range iterators, generic integers, and more!
- [num-format](https://lib.rs/crates/num-format): A Rust crate for producing string-representations of numbers, formatted according to international standards
- [num-rational](https://lib.rs/crates/num-rational): Rational numbers implementation for Rust
- [num-complex](https://lib.rs/crates/num-complex): Complex numbers implementation for Rust
- [statrs](https://lib.rs/crates/statrs): Statistical computing library for Rust
- [bigdecimal](https://lib.rs/crates/bigdecimal): Arbitrary precision decimal numbers
- [nalgebra](https://lib.rs/crates/nalgebra): General-purpose linear algebra library with transformations and statically-sized or dynamically-sized matrices
- [euclid](https://lib.rs/crates/euclid): Geometry primitives
- [ultraviolet](https://lib.rs/crates/ultraviolet): A crate to do linear algebra, fast
- [peroxide](https://lib.rs/crates/peroxide): Rust comprehensive scientific computation library contains linear algebra, numerical analysis, statistics and machine learning tools with familiar syntax

### Desktop
- [notify-rust](https://lib.rs/crates/notify-rust): Show desktop notifications (linux, bsd, mac). Pure Rust dbus client and server.
- [arboard](https://lib.rs/crates/arboard): Image and text handling for the OS clipboard

### Configuration
- [config](https://lib.rs/crates/config): Layered configuration system for Rust applications
- [envy](https://lib.rs/crates/envy): deserialize env vars into typesafe structs

### Language Extensions

#### Enums
- [strum](https://lib.rs/crates/strum): Helpful macros for working with enums and strings
- [enum_dispatch](https://lib.rs/crates/enum_dispatch): Near drop-in replacement for dynamic-dispatched method calls with up to 10x the speed
- [num_enum](https://lib.rs/crates/num_enum): Procedural macros to make inter-operation between primitives and enums easier
- [enum-display](https://lib.rs/crates/enum-display): A macro to derive Display for enums

#### Memory
- [smol_str](https://lib.rs/crates/smol_str): small-string optimized string type with O(1) clone
- [beef](https://lib.rs/crates/beef): More compact Cow
- [dyn-clone](https://lib.rs/crates/dyn-clone): Clone trait that is dyn-compatible
- [memoffset](https://lib.rs/crates/memoffset): offset_of functionality for Rust structs
- [az](https://lib.rs/crates/az): Casts and checked casts
- [zerocopy](https://lib.rs/crates/zerocopy): Zerocopy makes zero-cost memory manipulation effortless. We write "unsafe" so you don't have to.
- [once_cell](https://lib.rs/crates/once_cell): Single assignment cells and lazy values
- [lazy_static](https://lib.rs/crates/lazy_static): A macro for declaring lazily evaluated statics in Rust
- [globals](https://lib.rs/crates/globals): Painless global variables in Rust
- [lazy_format](https://lib.rs/crates/lazy_format): A utility crate for lazily formatting values for later
- [fragile](https://lib.rs/crates/fragile): Provides wrapper types for sending non-send values to other threads

#### Syntax
- [tap](https://lib.rs/crates/tap): Generic extensions for tapping values in Rust
- [option_trait](https://lib.rs/crates/option_trait): Helper traits for more generalized options
- [cascade](https://lib.rs/crates/cascade): Dart-like cascade macro for Rust
- [enclose](https://lib.rs/crates/enclose): A convenient macro, for cloning values into a closure
- [extend](https://lib.rs/crates/extend): Create extensions for types you don't own with extension traits but without the boilerplate
- [hex_lit](https://lib.rs/crates/hex_lit): Hex macro literals without use of hex macros
- [replace_with](https://lib.rs/crates/replace_with): Temporarily take ownership of a value at a mutable location, and replace it with a new value based on the old one
- [scopeguard](https://lib.rs/crates/scopeguard): A RAII scope guard that will run a given closure when it goes out of scope, even if the code between panics (assuming unwinding panic).
- [backon](https://lib.rs/crates/backon): Make retry like a built-in feature provided by Rust
- [tryhard](https://lib.rs/crates/tryhard): Easily retry futures
- [retry](https://lib.rs/crates/retry): Utilities for retrying operations that can fail
- [statum](https://lib.rs/crates/statum): Compile-time state machine magic for Rust: Zero-boilerplate typestate patterns with automatic transition validation
- [formatx](https://lib.rs/crates/formatx): A macro for formatting non literal strings at runtime
- [erased](https://lib.rs/crates/erased): Erase the type of a reference or box, retaining the lifetime
- [include_dir](https://lib.rs/crates/include_dir): Embed the contents of a directory in your binary
- [stacker](https://lib.rs/crates/stacker): A stack growth library useful when implementing deeply recursive algorithms that may accidentally blow the stack
- [recursive](https://lib.rs/crates/recursive): Easy recursion without stack overflows

#### Type Extensions
- [itertools](https://lib.rs/crates/itertools): Extra iterator adaptors, iterator methods, free functions, and macros
- [itermore](https://lib.rs/crates/itermore): 🤸‍♀️ More iterator adaptors
- [derive_more](https://lib.rs/crates/derive_more): Adds #[derive(x)] macros for more traits
- [derive_builder](https://lib.rs/crates/derive_builder): Rust macro to automatically implement the builder pattern for arbitrary structs
- [ordered-float](https://lib.rs/crates/ordered-float): Wrappers for total ordering on floats
- [stdext](https://lib.rs/crates/stdext): Extensions for the Rust standard library structures
- [bounded-integer](https://lib.rs/crates/bounded-integer): Bounded integers
- [tuples](https://lib.rs/crates/tuples): Provides many useful tools related to tuples
- [fallible-iterator](https://lib.rs/crates/fallible-iterator): Fallible iterator traits
- [sequential](https://lib.rs/crates/sequential): A configurable sequential number generator

#### Compilation
- [cfg-if](https://lib.rs/crates/cfg-if): A macro to ergonomically define an item depending on a large number of #[cfg] parameters. Structured like an if-else chain, the first matching branch is the item that gets emitted.
- [cfg_aliases](https://lib.rs/crates/cfg_aliases): A tiny utility to help save you a lot of effort with long winded #[cfg()] checks
- [nameof](https://lib.rs/crates/nameof): Provides a Rust macro to determine the string name of a binding, type, const, or function
- [tynm](https://lib.rs/crates/tynm): Returns type names in shorter form

#### Const
- [constcat](https://lib.rs/crates/constcat): concat! with support for const variables and expressions
- [konst](https://lib.rs/crates/konst): Const equivalents of std functions, compile-time comparison, and parsing

### Geo
- [geo](https://lib.rs/crates/geo): Geospatial primitives and algorithms
- [geojson](https://lib.rs/crates/geojson): Read and write GeoJSON vector geographic data
- [geozero](https://lib.rs/crates/geozero): Zero-Copy reading and writing of geospatial data in WKT/WKB, GeoJSON, MVT, GDAL, and other formats
- [versatiles](https://lib.rs/crates/versatiles): A toolbox for converting, checking and serving map tiles in various formats
- [ipcap](https://lib.rs/crates/ipcap): 🌍 A CLI & library for decoding IP addresses into state, postal code, country, coordinates, etc without internet access
@@ -2,9 +2,6 @@
obj: meta/collection
---

# Best Practices
- [URL Suffix API](./URL%20Suffix%20API.md)

# Creational Patterns
- [Abstract Factory](creational/Abstract%20Factory%20Pattern.md)
- [Builder](creational/Builder%20Pattern.md)
@@ -1,15 +0,0 @@
# URL Suffix API
When designing a website, consider leveraging URL suffixes to indicate the format of the resource being accessed, similar to how file extensions are used in operating systems.

For example, a webpage located at `/blog/post/id` that renders human-readable content could have its machine-readable data served by appending a format-specific suffix to the same URL, such as `/blog/post/id.json`.

#### Benefits:

1. **Intuitive API from Website Usage**
   Users can easily derive API endpoints from existing website URLs by appending the desired format suffix.

2. **Interchangeable Formats**
   The same approach allows for multiple formats (e.g., `.json`, `.msgpack`, `.protobuf`) to be supported seamlessly, improving flexibility and usability.

This method simplifies the architecture, enhances consistency, and provides an elegant mechanism to serve both human-readable and machine-readable content from the same base URL.
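The dispatch described above can be as simple as splitting a known suffix off the path before routing. A minimal framework-agnostic sketch (the `split_format_suffix` and `handle` names, and the HTML fallback, are illustrative assumptions, not part of any particular framework):

```python
import json
import posixpath

def split_format_suffix(path, known=("json", "html")):
    """Split '/blog/post/42.json' into ('/blog/post/42', 'json').

    Paths without a recognized suffix default to the human-readable
    'html' representation.
    """
    base, ext = posixpath.splitext(path)
    ext = ext.lstrip(".")
    if ext in known:
        return base, ext
    return path, "html"

def handle(path, post):
    # One resource, two representations, selected by the URL suffix alone.
    base, fmt = split_format_suffix(path)
    if fmt == "json":
        return json.dumps(post)
    return f"<h1>{post['title']}</h1>"
```

With this in place, `/blog/post/42` and `/blog/post/42.json` hit the same handler and differ only in serialization.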
@@ -1,84 +0,0 @@
---
obj: format
website: https://jsonlines.org
extension: "jsonl"
mime: "application/jsonl"
rev: 2024-12-02
---

# JSON Lines
This page describes the JSON Lines text format, also called newline-delimited JSON. JSON Lines is a convenient format for storing structured data that may be processed one record at a time. It works well with unix-style text processing tools and shell pipelines. It's a great format for log files. It's also a flexible format for passing messages between cooperating processes.

The JSON Lines format has three requirements:
- **UTF-8 Encoding**: JSON allows encoding Unicode strings with only ASCII escape sequences; however, those escapes will be hard to read when viewed in a text editor. The author of the JSON Lines file may choose to escape characters to work with plain ASCII files. Encodings other than UTF-8 are very unlikely to be valid when decoded as UTF-8, so the chance of accidentally misinterpreting characters in JSON Lines files is low.
- **Each Line is a Valid JSON Value**: The most common values will be objects or arrays, but any JSON value is permitted.
- **Line Separator is `\n`**: This means `\r\n` is also supported because surrounding white space is implicitly ignored when parsing JSON values.

## Better than CSV
```json
["Name", "Session", "Score", "Completed"]
["Gilbert", "2013", 24, true]
["Alexa", "2013", 29, true]
["May", "2012B", 14, false]
["Deloise", "2012A", 19, true]
```

CSV seems so easy that many programmers have written code to generate it themselves, and almost every implementation is different. Handling broken CSV files is a common and frustrating task. CSV has no standard encoding, no standard column separator and multiple character escaping standards. String is the only type supported for cell values, so some programs attempt to guess the correct types.

JSON Lines handles tabular data cleanly and without ambiguity. Cells may use the standard JSON types.

The biggest missing piece is an import/export filter for popular spreadsheet programs so that non-programmers can use this format.

## Self-describing data
```json
{"name": "Gilbert", "session": "2013", "score": 24, "completed": true}
{"name": "Alexa", "session": "2013", "score": 29, "completed": true}
{"name": "May", "session": "2012B", "score": 14, "completed": false}
{"name": "Deloise", "session": "2012A", "score": 19, "completed": true}
```

JSON Lines enables applications to read objects line-by-line, with each line fully describing a JSON object. The example above contains the same data as the tabular example above, but allows applications to split files on newline boundaries for parallel loading, and eliminates any ambiguity if fields are omitted or re-ordered.
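The one-value-per-line rule is what makes streaming parsers trivial. A minimal sketch using only Python's standard library (the `read_jsonl` helper is illustrative, not a library function):

```python
import io
import json

def read_jsonl(fp):
    """Yield one decoded JSON value per non-blank line."""
    for line in fp:
        line = line.strip()  # surrounding whitespace (and the \r of \r\n) is ignored
        if line:
            yield json.loads(line)

# Works on any iterable of lines: an open file, a socket wrapper, or StringIO.
data = io.StringIO(
    '{"name": "Gilbert", "score": 24}\n'
    '{"name": "Alexa", "score": 29}\r\n'
)
records = list(read_jsonl(data))
```

Because each record is independent, the same loop can consume a file of any size with constant memory, or be pointed at distinct byte ranges split on newline boundaries for parallel loading.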
|
||||
## Easy Nested Data
|
||||
```json
|
||||
{"name": "Gilbert", "wins": [["straight", "7♣"], ["one pair", "10♥"]]}
|
||||
{"name": "Alexa", "wins": [["two pair", "4♠"], ["two pair", "9♠"]]}
|
||||
{"name": "May", "wins": []}
|
||||
{"name": "Deloise", "wins": [["three of a kind", "5♣"]]}
|
||||
```
|
||||
|
||||
JSON Lines' biggest strength is in handling lots of similar nested data structures. One `.jsonl` file is easier to work with than a directory full of XML files.
|
||||
|
||||
If you have large nested structures then reading the JSON Lines text directly isn't recommended. Use the "jq" tool to make viewing large structures easier:
|
||||
|
||||
```
|
||||
grep pair winning_hands.jsonl | jq .
|
||||
|
||||
{
|
||||
"name": "Gilbert",
|
||||
"wins": [
|
||||
[
|
||||
"straight",
|
||||
"7♣"
|
||||
],
|
||||
[
|
||||
"one pair",
|
||||
"10♥"
|
||||
]
|
||||
]
|
||||
}
|
||||
{
|
||||
"name": "Alexa",
|
||||
"wins": [
|
||||
[
|
||||
"two pair",
|
||||
"4♠"
|
||||
],
|
||||
[
|
||||
"two pair",
|
||||
"9♠"
|
||||
]
|
||||
]
|
||||
}
|
||||
```
|
||||
|
|
@ -1,251 +0,0 @@
|
|||
---
obj: concept
website: https://ogp.me
rev: 2024-12-16
---

# The Open Graph protocol
The [Open Graph protocol](https://ogp.me/) enables any web page to become a rich object in a social graph. For instance, this is used on Facebook to allow any web page to have the same functionality as any other object on Facebook.

## Basic Metadata
To turn your web pages into graph objects, you add basic metadata to your page in the form of additional `<meta>` tags in the `<head>` of your web page. The four required properties for every page are:

- `og:title` - The title of your object as it should appear within the graph, e.g., "The Rock".
- `og:type` - The type of your object, e.g., `video.movie`. Depending on the type you specify, other properties may also be required.
- `og:image` - An image URL which should represent your object within the graph.
- `og:url` - The canonical URL of your object that will be used as its permanent ID in the graph, e.g., "https://www.imdb.com/title/tt0117500/".

As an example, the following is the Open Graph protocol markup for [The Rock on IMDb](https://www.imdb.com/title/tt0117500/):

```html
<html prefix="og: https://ogp.me/ns#">
<head>
<title>The Rock (1996)</title>
<meta property="og:title" content="The Rock" />
<meta property="og:type" content="video.movie" />
<meta property="og:url" content="https://www.imdb.com/title/tt0117500/" />
<meta property="og:image" content="https://ia.media-imdb.com/images/rock.jpg" />
...
</head>
...
</html>
```
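To show how a consumer reads these tags, here is a minimal scraper built on Python's standard `html.parser`; this is an illustrative sketch, not a spec-compliant Open Graph parser:

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collect (property, content) pairs from <meta property="og:..."> tags."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        # Self-closing <meta ... /> tags also reach handle_starttag.
        if tag == "meta":
            a = dict(attrs)
            prop = a.get("property", "")
            if prop.startswith("og:") and "content" in a:
                self.tags.append((prop, a["content"]))

html = '''<head>
<meta property="og:title" content="The Rock" />
<meta property="og:type" content="video.movie" />
</head>'''
p = OGParser()
p.feed(html)
# p.tags now holds [("og:title", "The Rock"), ("og:type", "video.movie")]
```

Keeping the pairs in document order matters later, when arrays and structured properties are resolved by position.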

### Optional Metadata
The following properties are optional for any object and are generally recommended:

- `og:audio` - A URL to an audio file to accompany this object.
- `og:description` - A one to two sentence description of your object.
- `og:determiner` - The word that appears before this object's title in a sentence. An enum of (`a`, `an`, `the`, `""`, `auto`). If `auto` is chosen, the consumer of your data should choose between `a` or `an`. Default is `""` (blank).
- `og:locale` - The locale these tags are marked up in, in the format `language_TERRITORY`. Default is `en_US`.
- `og:locale:alternate` - An array of other locales this page is available in.
- `og:site_name` - If your object is part of a larger web site, the name which should be displayed for the overall site, e.g., "IMDb".
- `og:video` - A URL to a video file that complements this object.

For example (line break solely for display purposes):

```html
<meta property="og:audio" content="https://example.com/bond/theme.mp3" />
<meta property="og:description"
  content="Sean Connery found fame and fortune as the
  suave, sophisticated British agent, James Bond." />
<meta property="og:determiner" content="the" />
<meta property="og:locale" content="en_GB" />
<meta property="og:locale:alternate" content="fr_FR" />
<meta property="og:locale:alternate" content="es_ES" />
<meta property="og:site_name" content="IMDb" />
<meta property="og:video" content="https://example.com/bond/trailer.swf" />
```

## Structured Properties
Some properties can have extra metadata attached to them. These are specified in the same way as other metadata, with `property` and `content`, but the `property` will have extra `:` separators.

The `og:image` property has some optional structured properties:

- `og:image:url` - Identical to `og:image`.
- `og:image:secure_url` - An alternate URL to use if the web page requires HTTPS.
- `og:image:type` - A MIME type for this image.
- `og:image:width` - The number of pixels wide.
- `og:image:height` - The number of pixels high.
- `og:image:alt` - A description of what is in the image (not a caption). If the page specifies an `og:image`, it should specify `og:image:alt`.

A full image example:

```html
<meta property="og:image" content="https://example.com/ogp.jpg" />
<meta property="og:image:secure_url" content="https://secure.example.com/ogp.jpg" />
<meta property="og:image:type" content="image/jpeg" />
<meta property="og:image:width" content="400" />
<meta property="og:image:height" content="300" />
<meta property="og:image:alt" content="A shiny red apple with a bite taken out" />
```

The `og:video` tag has identical structured properties to `og:image`. Here is an example:

```html
<meta property="og:video" content="https://example.com/movie.swf" />
<meta property="og:video:secure_url" content="https://secure.example.com/movie.swf" />
<meta property="og:video:type" content="application/x-shockwave-flash" />
<meta property="og:video:width" content="400" />
<meta property="og:video:height" content="300" />
```

The `og:audio` tag only has the first three properties available (since size doesn't make sense for sound):

```html
<meta property="og:audio" content="https://example.com/sound.mp3" />
<meta property="og:audio:secure_url" content="https://secure.example.com/sound.mp3" />
<meta property="og:audio:type" content="audio/mpeg" />
```

## Arrays
If a tag can have multiple values, just put multiple versions of the same `<meta>` tag on your page. The first tag (from top to bottom) is given preference during conflicts.

```html
<meta property="og:image" content="https://example.com/rock.jpg" />
<meta property="og:image" content="https://example.com/rock2.jpg" />
```

Put structured properties after you declare their root tag. Whenever another root element is parsed, that structured property is considered done and another one is started.

For example:

```html
<meta property="og:image" content="https://example.com/rock.jpg" />
<meta property="og:image:width" content="300" />
<meta property="og:image:height" content="300" />
<meta property="og:image" content="https://example.com/rock2.jpg" />
<meta property="og:image" content="https://example.com/rock3.jpg" />
<meta property="og:image:height" content="1000" />
```

means there are three images on this page: the first image is `300x300`, the middle one has unspecified dimensions, and the last one is `1000px` tall.
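That grouping rule can be sketched as a small parser over ordered (property, content) pairs; a minimal illustration, not an official implementation:

```python
def group_images(meta):
    """Group ordered (property, content) pairs into og:image objects.

    A new root og:image closes the previous image's structured
    properties and starts a fresh one.
    """
    images = []
    for prop, content in meta:
        if prop == "og:image":
            images.append({"url": content})
        elif prop.startswith("og:image:") and images:
            # e.g. "og:image:width" -> key "width" on the latest image
            images[-1][prop.split(":", 2)[2]] = content
    return images

tags = [
    ("og:image", "https://example.com/rock.jpg"),
    ("og:image:width", "300"),
    ("og:image:height", "300"),
    ("og:image", "https://example.com/rock2.jpg"),
    ("og:image", "https://example.com/rock3.jpg"),
    ("og:image:height", "1000"),
]
# group_images(tags) yields three images: rock.jpg (300x300),
# rock2.jpg (no dimensions), rock3.jpg (height 1000)
```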

## Object Types
In order for your object to be represented within the graph, you need to specify its type. This is done using the `og:type` property:

```html
<meta property="og:type" content="website" />
```

When the community agrees on the schema for a type, it is added to the list of global types. All other objects in the type system are CURIEs of the form:

```html
<head prefix="my_namespace: https://example.com/ns#">
<meta property="og:type" content="my_namespace:my_type" />
```

The global types are grouped into verticals. Each vertical has its own namespace. The `og:type` values for a namespace are always prefixed with the namespace and then a period. This is to reduce confusion with user-defined namespaced types, which always have colons in them.

### Music

- Namespace URI: [`https://ogp.me/ns/music#`](https://ogp.me/ns/music)

`og:type` values:

[`music.song`](https://ogp.me/#type_music.song)

- `music:duration` - [integer](https://ogp.me/#integer) >=1 - The song's length in seconds.
- `music:album` - [music.album](https://ogp.me/#type_music.album) [array](https://ogp.me/#array) - The album this song is from.
- `music:album:disc` - [integer](https://ogp.me/#integer) >=1 - Which disc of the album this song is on.
- `music:album:track` - [integer](https://ogp.me/#integer) >=1 - Which track this song is.
- `music:musician` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - The musician that made this song.

[`music.album`](https://ogp.me/#type_music.album)

- `music:song` - [music.song](https://ogp.me/#type_music.song) - The song on this album.
- `music:song:disc` - [integer](https://ogp.me/#integer) >=1 - The same as `music:album:disc` but in reverse.
- `music:song:track` - [integer](https://ogp.me/#integer) >=1 - The same as `music:album:track` but in reverse.
- `music:musician` - [profile](https://ogp.me/#type_profile) - The musician that made this song.
- `music:release_date` - [datetime](https://ogp.me/#datetime) - The date the album was released.

[`music.playlist`](https://ogp.me/#type_music.playlist)

- `music:song` - Identical to the ones on [music.album](https://ogp.me/#type_music.album)
- `music:song:disc`
- `music:song:track`
- `music:creator` - [profile](https://ogp.me/#type_profile) - The creator of this playlist.

[`music.radio_station`](https://ogp.me/#type_music.radio_station)

- `music:creator` - [profile](https://ogp.me/#type_profile) - The creator of this station.

### Video

- Namespace URI: [`https://ogp.me/ns/video#`](https://ogp.me/ns/video)

`og:type` values:

[`video.movie`](https://ogp.me/#type_video.movie)

- `video:actor` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Actors in the movie.
- `video:actor:role` - [string](https://ogp.me/#string) - The role they played.
- `video:director` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Directors of the movie.
- `video:writer` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Writers of the movie.
- `video:duration` - [integer](https://ogp.me/#integer) >=1 - The movie's length in seconds.
- `video:release_date` - [datetime](https://ogp.me/#datetime) - The date the movie was released.
- `video:tag` - [string](https://ogp.me/#string) [array](https://ogp.me/#array) - Tag words associated with this movie.

[`video.episode`](https://ogp.me/#type_video.episode)

- `video:actor` - Identical to [video.movie](https://ogp.me/#type_video.movie)
- `video:actor:role`
- `video:director`
- `video:writer`
- `video:duration`
- `video:release_date`
- `video:tag`
- `video:series` - [video.tv_show](https://ogp.me/#type_video.tv_show) - Which series this episode belongs to.

[`video.tv_show`](https://ogp.me/#type_video.tv_show)

A multi-episode TV show. The metadata is identical to [video.movie](https://ogp.me/#type_video.movie).

[`video.other`](https://ogp.me/#type_video.other)

A video that doesn't belong in any other category. The metadata is identical to [video.movie](https://ogp.me/#type_video.movie).

### No Vertical
These are globally defined objects that simply don't fit into a vertical but are broadly used and agreed upon.

`og:type` values:

[`article`](https://ogp.me/#type_article) - Namespace URI: [`https://ogp.me/ns/article#`](https://ogp.me/ns/article)

- `article:published_time` - [datetime](https://ogp.me/#datetime) - When the article was first published.
- `article:modified_time` - [datetime](https://ogp.me/#datetime) - When the article was last changed.
- `article:expiration_time` - [datetime](https://ogp.me/#datetime) - When the article is out of date after.
- `article:author` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Writers of the article.
- `article:section` - [string](https://ogp.me/#string) - A high-level section name, e.g., Technology.
- `article:tag` - [string](https://ogp.me/#string) [array](https://ogp.me/#array) - Tag words associated with this article.

[`book`](https://ogp.me/#type_book) - Namespace URI: [`https://ogp.me/ns/book#`](https://ogp.me/ns/book)

- `book:author` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Who wrote this book.
- `book:isbn` - [string](https://ogp.me/#string) - The [ISBN](https://en.wikipedia.org/wiki/International_Standard_Book_Number)
- `book:release_date` - [datetime](https://ogp.me/#datetime) - The date the book was released.
- `book:tag` - [string](https://ogp.me/#string) [array](https://ogp.me/#array) - Tag words associated with this book.

[`profile`](https://ogp.me/#type_profile) - Namespace URI: [`https://ogp.me/ns/profile#`](https://ogp.me/ns/profile)

- `profile:first_name` - [string](https://ogp.me/#string) - A name normally given to an individual by a parent or self-chosen.
- `profile:last_name` - [string](https://ogp.me/#string) - A name inherited from a family or marriage and by which the individual is commonly known.
- `profile:username` - [string](https://ogp.me/#string) - A short unique string to identify them.
- `profile:gender` - [enum](https://ogp.me/#enum)(male, female) - Their gender.

[`website`](https://ogp.me/#type_website) - Namespace URI: [`https://ogp.me/ns/website#`](https://ogp.me/ns/website)

No additional properties other than the basic ones. Any non-marked-up web page should be treated as `og:type` website.

## Types
The following types are used when defining attributes in the Open Graph protocol.

| **Type** | **Description** | **Literals** |
| -------- | --------------- | ------------ |
| Boolean | A Boolean represents a true or false value | true, false, 1, 0 |
| DateTime | A DateTime represents a temporal value composed of a date (year, month, day) and an optional time component (hours, minutes) | ISO 8601 |
| Enum | A type consisting of a bounded set of constant string values (enumeration members) | A string value that is a member of the enumeration |
| Float | A 64-bit signed floating point number | All literals that conform to the following formats: `1.234`, `-1.234`, `1.2e3`, `-1.2e3`, `7E-10` |
| Integer | A 32-bit signed integer | All literals that conform to the following formats: `1234`, `-123` |
| String | A sequence of Unicode characters | All literals composed of Unicode characters with no escape characters |
| URL | A sequence of Unicode characters that identify an Internet resource | All valid URLs that utilize the `http://` or `https://` protocols |

@ -12,8 +12,6 @@ Installation of Arch Linux is typically done manually following the [Wiki](https

curl -L matmoul.github.io/archfi | bash
```

You can create a (custom) ISO with [archiso](./archiso.md).

## Basic Install
```shell
# Set keyboard

@ -43,41 +43,3 @@ A typical Linux system has, among others, the following directories:

| `/var` | This directory contains files which may change in size, such as spool and [log](../dev/Log.md) files. |
| `/var/cache` | Data cached for programs. |
| `/var/log` | Miscellaneous [log](../dev/Log.md) files. |

## Kernel Commandline
The kernel, the programs running in the initrd, and the host system may be configured at boot via kernel command line arguments.

The current cmdline can be seen at `/proc/cmdline`.
To set the cmdline, use `/etc/kernel/cmdline` if you use UKIs.

**Common Kernel Cmdline Arguments:**

| Argument | Description |
| -------- | ----------- |
| `quiet` | Parameter understood by both the kernel and the system and service manager to control console log verbosity. |
| `splash` | Show a Plymouth splash screen while booting. |
| `init=` | Sets the initial command to be executed by the kernel. If this is not set, or cannot be found, the kernel will try `/sbin/init`, then `/etc/init`, then `/bin/init`, then `/bin/sh`, and panic if all of this fails. |
| `ro` and `rw` | The `ro` option tells the kernel to mount the root filesystem as read-only. The `rw` option tells the kernel to mount the root filesystem read/write; this is the default. |
| `resume=...` | Tells the kernel the location of the suspend-to-disk data that you want the machine to resume from after hibernation. Usually, it is the same as your swap partition or file. Example: `resume=/dev/hda2` |
| `panic=N` | By default, the kernel will not reboot after a panic, but this option will cause a kernel reboot after `N` seconds (if `N` is greater than zero). This panic timeout can also be set by `echo N > /proc/sys/kernel/panic` |
| `plymouth.enable=` | May be used to disable the Plymouth boot splash. For details, see Plymouth. |
| `vconsole.keymap=`, `vconsole.keymap_toggle=`, `vconsole.font=`, `vconsole.font_map=`, `vconsole.font_unimap=` | Parameters understood by the virtual console setup logic. For details, see `vconsole.conf` |
| `luks=`, `rd.luks=` | Defaults to "yes". If "no", disables the crypt mount generator entirely. `rd.luks=` is honored only in the initrd, while `luks=` is honored by both the main system and the initrd. |
| `luks.crypttab=`, `rd.luks.crypttab=` | Defaults to "yes". If "no", causes the generator to ignore any devices configured in `/etc/crypttab` (`luks.uuid=` will still work, however). `rd.luks.crypttab=` is honored only in the initrd, while `luks.crypttab=` is honored by both the main system and the initrd. |
| `luks.uuid=`, `rd.luks.uuid=` | Takes a LUKS superblock UUID as argument. This will activate the specified device as part of the boot process as if it was listed in `/etc/crypttab`. This option may be specified more than once in order to set up multiple devices. `rd.luks.uuid=` is honored only in the initrd, while `luks.uuid=` is honored by both the main system and the initrd. |
| `luks.name=`, `rd.luks.name=` | Takes a LUKS superblock UUID followed by an `=` and a name. This implies `rd.luks.uuid=` or `luks.uuid=` and will additionally make the LUKS device given by the UUID appear under the provided name. `rd.luks.name=` is honored only in the initrd, while `luks.name=` is honored by both the main system and the initrd. |
| `luks.options=`, `rd.luks.options=` | Takes a LUKS superblock UUID followed by an `=` and a string of options separated by commas as argument. This will override the options for the given UUID. If only a list of options, without a UUID, is specified, they apply to any UUIDs not specified elsewhere and without an entry in `/etc/crypttab`. `rd.luks.options=` is honored only by the initial RAM disk (initrd), while `luks.options=` is honored by both the main system and the initrd. |
| `fstab=`, `rd.fstab=` | Defaults to "yes". If "no", causes the generator to ignore any mounts or swap devices configured in `/etc/fstab`. `rd.fstab=` is honored only in the initrd, while `fstab=` is honored by both the main system and the initrd. |
| `root=` | Configures the operating system's root filesystem to mount when running in the initrd. This accepts a device node path (usually `/dev/disk/by-uuid/...` or similar), or the special values `gpt-auto`, `fstab`, and `tmpfs`. Use `gpt-auto` to explicitly request automatic root file system discovery via `systemd-gpt-auto-generator`. Use `fstab` to explicitly request automatic root file system discovery via the initrd `/etc/fstab` rather than via the kernel command line. Use `tmpfs` in order to mount a tmpfs file system as the root file system of the OS. This is useful in combination with `mount.usr=` in order to combine a volatile root file system with a separate, immutable `/usr/` file system. Also see `systemd.volatile=` below. |
| `rootfstype=` | Takes the root filesystem type that will be passed to the mount command. `rootfstype=` is honored by the initrd. |
| `mount.usr=` | Takes the `/usr/` filesystem to be mounted by the initrd. If `mount.usrfstype=` or `mount.usrflags=` is set, then `mount.usr=` will default to the value set in `root=`. Otherwise, this parameter defaults to the `/usr/` entry found in `/etc/fstab` on the root filesystem. |
| `mount.usrfstype=` | Takes the `/usr` filesystem type that will be passed to the mount command. |
| `systemd.volatile=` | Controls whether the system shall boot up in volatile mode. |
| `systemd.swap=` | Takes a boolean argument, or enables the option if specified without an argument. If disabled, causes the generator to ignore any swap devices configured in `/etc/fstab`. Defaults to enabled. |

## Misc
### Cause a kernel panic
To manually cause a kernel panic, run:

```sh
echo c > /proc/sysrq-trigger
```

@ -1,105 +0,0 @@

---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Plymouth
rev: 2024-12-20
---

# Plymouth
Plymouth is a project from Fedora providing a flicker-free graphical boot process. It relies on kernel mode setting (KMS) to set the native resolution of the display as early as possible, then provides an eye-candy splash screen leading all the way up to the login manager.

## Setup
By default, Plymouth logs the boot messages into `/var/log/boot.log` and does not show the graphical splash screen.
- If you want to see the splash screen, append `splash` to the kernel parameters.
- If you want silent boot, append `quiet` too.
- If you want to disable the logging, append `plymouth.boot-log=/dev/null`. Alternatively, add `plymouth.nolog`, which also disables console redirection.

To start Plymouth on early boot, you must configure your initramfs generator to create images that include Plymouth.

For mkinitcpio, add `plymouth` to the `HOOKS` array in `mkinitcpio.conf`:

```sh
# /etc/mkinitcpio.conf

HOOKS=(... plymouth ...)
```

If you are using the `systemd` hook, it must come before `plymouth`.

Furthermore, make sure you place `plymouth` before the `crypt` hook if your system is encrypted with dm-crypt.

## Configuration
Plymouth can be configured in the file `/etc/plymouth/plymouthd.conf`. You can see the default values in `/usr/share/plymouth/plymouthd.defaults`.

### Changing the theme
Plymouth comes with a selection of themes:
- BGRT: A variation of Spinner that keeps the OEM logo if available (BGRT stands for Boot Graphics Resource Table)
- Fade-in: "Simple theme that fades in and out with shimmering stars"
- Glow: "Corporate theme with pie chart boot progress followed by a glowing emerging logo"
- Script: "Script example plugin" (despite the description, this is a quite nice Arch logo theme)
- Solar: "Space theme with violent flaring blue star"
- Spinner: "Simple theme with a loading spinner"
- Spinfinity: "Simple theme that shows a rotating infinity sign in the center of the screen"
- Tribar: "Text mode theme with tricolor progress bar"
- (Text: "Text mode theme with tricolor progress bar")
- (Details: "Verbose fallback theme")

The theme can be changed by editing the configuration file:

```ini
# /etc/plymouth/plymouthd.conf

[Daemon]
Theme=theme
```

or by running:

```sh
plymouth-set-default-theme -R theme
```

Every time a theme is changed, the initrd must be rebuilt. The `-R` option ensures that it is rebuilt (otherwise, regenerate the initramfs manually).

### Install new themes
All currently installed themes can be listed with:

```sh
plymouth-set-default-theme -l
# or:
ls /usr/share/plymouth/themes
```

### Show delay
Plymouth has a configuration option to delay the splash screen:

```ini
# /etc/plymouth/plymouthd.conf

[Daemon]
ShowDelay=5
```

On systems that boot quickly, you may only see a flicker of your splash theme before your DM or login prompt is ready. You can set `ShowDelay` to an interval (in seconds) longer than your boot time to prevent this flicker and only show a blank screen. The default is 0 seconds, so you should not need to change it to see your splash earlier during boot.

### HiDPI
Edit the configuration file:

```ini
# /etc/plymouth/plymouthd.conf

[Daemon]
DeviceScale=an-integer-scaling-factor
```

and regenerate the initramfs.

## Misc
### Show boot messages
During boot, you can switch to the boot messages by pressing the `Esc` key.

### Disable with kernel parameters
If you experience problems during boot, you can temporarily disable Plymouth with the following kernel parameters:

```
plymouth.enable=0 disablehooks=plymouth
```

@ -1,74 +0,0 @@

---
obj: concept
arch-wiki: https://wiki.archlinux.org/title/XDG_user_directories
rev: 2025-01-08
---

# XDG Directories
The XDG user directories are a standardized way to define and access common user directories on Unix-like operating systems, primarily defined by the XDG Base Directory Specification from the FreeDesktop.org project.

These directories provide users and applications with predefined paths for storing specific types of files, such as documents, downloads, music, and more. By using these directories, applications integrate better with the operating system's file structure and provide a consistent experience for users.

## Creating default directories
Creating a full suite of localized default user directories within the `$HOME` directory can be done automatically by running:

```sh
xdg-user-dirs-update
```

> **Tip**: To force the creation of English-named directories, `LC_ALL=C.UTF-8 xdg-user-dirs-update --force` can be used.

When executed, it will also automatically:
- Create a local `~/.config/user-dirs.dirs` configuration file: used by applications to find and use home directories specific to an account.
- Create a local `~/.config/user-dirs.locale` configuration file: used to set the language according to the locale in use.

The user service `xdg-user-dirs-update.service` will also be installed and enabled by default, in order to keep your directories up to date by running this command at the beginning of each login session.

## Creating custom directories
Both the local `~/.config/user-dirs.dirs` and the global `/etc/xdg/user-dirs.defaults` configuration files use the following environment variable format to point to user directories: `XDG_DIRNAME_DIR="$HOME/directory_name"`. An example configuration file may look like this (these are all the template directories):

```sh
# ~/.config/user-dirs.dirs

XDG_DESKTOP_DIR="$HOME/Desktop"
XDG_DOCUMENTS_DIR="$HOME/Documents"
XDG_DOWNLOAD_DIR="$HOME/Downloads"
XDG_MUSIC_DIR="$HOME/Music"
XDG_PICTURES_DIR="$HOME/Pictures"
XDG_PUBLICSHARE_DIR="$HOME/Public"
XDG_TEMPLATES_DIR="$HOME/Templates"
XDG_VIDEOS_DIR="$HOME/Videos"
```

As xdg-user-dirs sources the local configuration file to point to the appropriate user directories, it is possible to specify custom folders. For example, if the `XDG_DOWNLOAD_DIR` variable is set to `$HOME/Internet` in `~/.config/user-dirs.dirs`, any application that uses this variable will use this directory.

> **Note**: As with many configuration files, local settings override global settings. It will also be necessary to create any new custom directories yourself.

Alternatively, it is also possible to specify custom folders on the command line. For example, the following command produces the same result as the above configuration file edit:

```sh
xdg-user-dirs-update --set DOWNLOAD ~/Internet
```

## Querying configured directories
Once set, any user directory can be viewed with xdg-user-dirs. For example, the following command will show the location of the Templates directory, which corresponds to the `XDG_TEMPLATES_DIR` variable in the local configuration file:

```sh
xdg-user-dir TEMPLATES
```

## Specification
Please read the full specification. This section attempts to break down the essence of what it tries to achieve.

Only `XDG_RUNTIME_DIR` is set by default, through `pam_systemd`. It is up to the user to explicitly define the other variables according to the specification.

### User directories
- `XDG_CONFIG_HOME`: Where user-specific configurations should be written (analogous to `/etc`). Should default to `$HOME/.config`.
- `XDG_CACHE_HOME`: Where user-specific non-essential (cached) data should be written (analogous to `/var/cache`). Should default to `$HOME/.cache`.
- `XDG_DATA_HOME`: Where user-specific data files should be written (analogous to `/usr/share`). Should default to `$HOME/.local/share`.
- `XDG_STATE_HOME`: Where user-specific state files should be written (analogous to `/var/lib`). Should default to `$HOME/.local/state`.
- `XDG_RUNTIME_DIR`: Used for non-essential, user-specific data files such as sockets, named pipes, etc. Not required to have a default value; warnings should be issued if it is not set and no equivalent is provided. Must be owned by the user with an access mode of `0700`. The filesystem must be fully featured by the standards of the OS and must be local. May be subject to periodic cleanup; modify files every 6 hours or set the sticky bit if persistence is desired. Can only exist for the duration of the user's login. Should not store large files, as it may be mounted as a tmpfs. `pam_systemd` sets this to `/run/user/$UID`.

### System directories
- `XDG_DATA_DIRS`: List of directories separated by `:` (analogous to `PATH`). Should default to `/usr/local/share:/usr/share`.
- `XDG_CONFIG_DIRS`: List of directories separated by `:` (analogous to `PATH`). Should default to `/etc/xdg`.
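The defaulting rules above can be sketched in a few lines; a minimal illustration assuming only the spec's fallbacks, not any distro-specific additions:

```python
import os

def xdg_dir(var, default_rel):
    """Return the base directory named by `var`, falling back to the
    spec default (relative to $HOME) when the variable is unset or empty."""
    value = os.environ.get(var, "")
    return value if value else os.path.join(os.path.expanduser("~"), default_rel)

config_home = xdg_dir("XDG_CONFIG_HOME", ".config")
cache_home  = xdg_dir("XDG_CACHE_HOME", ".cache")
data_home   = xdg_dir("XDG_DATA_HOME", os.path.join(".local", "share"))
state_home  = xdg_dir("XDG_STATE_HOME", os.path.join(".local", "state"))
```

Note that the spec treats an empty variable the same as an unset one, which is why the empty string also triggers the fallback here.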

@ -1,202 +0,0 @@

---
|
||||
obj: concept
|
||||
arch-wiki: https://wiki.archlinux.org/title/Zram
|
||||
source: https://docs.kernel.org/admin-guide/blockdev/zram.html
|
||||
wiki: https://en.wikipedia.org/wiki/Zram
|
||||
rev: 2024-12-20
|
||||
---

# Zram
zram, formerly called compcache, is a Linux kernel module for creating a compressed block device in RAM, i.e. a RAM disk with on-the-fly disk compression. The block device created with zram can then be used for swap or as a general-purpose RAM disk. The two most common uses for zram are the storage of temporary files (`/tmp`) and use as a swap device. Initially, zram had only the latter function, hence the original name "compcache" ("compressed cache").

## Usage as swap
Initially, the created zram block device does not reserve or use any RAM. Only as pages are swapped out are they compressed and moved into the zram block device, which then grows and shrinks dynamically as required.

Even assuming that zstd achieves only a conservative 1:2 compression ratio (real-world data commonly shows about 1:3), zram offers the advantage of keeping considerably more content in RAM than is possible without memory compression.
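
The ratio actually achieved can be read from `/sys/block/zram0/mm_stat`, whose first and third fields are the original data size and the total memory used (both in bytes, per the kernel zram documentation). A small sketch — the sample line below stands in for a live reading:

```shell
# mm_stat fields: orig_data_size compr_data_size mem_used_total ...
# On a live system, replace the sample with: stat=$(cat /sys/block/zram0/mm_stat)
stat="2039480320 334102528 445644800 0 445644800 1024 0 0 0"
echo "$stat" | awk '{ printf "ratio %.2f:1\n", $1 / $3 }'
# → ratio 4.58:1  (original size vs. RAM actually consumed)
```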

### Manually
To set up one zstd-compressed zram device with half the system memory capacity and a higher-than-normal priority (only for the current session):

```sh
modprobe zram
zramctl /dev/zram0 --algorithm zstd --size "$(($(grep -Po 'MemTotal:\s*\K\d+' /proc/meminfo)/2))KiB"
mkswap -U clear /dev/zram0
swapon --discard --priority 100 /dev/zram0
```

To disable it again, either reboot or run:

```sh
swapoff /dev/zram0
modprobe -r zram
# re-enable zswap, which is usually disabled while zram is in use
echo 1 > /sys/module/zswap/parameters/enabled
```

For a permanent solution, use a method from one of the following sections.

### Using a udev rule
The example below describes how to set up swap on zram automatically at boot with a single udev rule. No extra package is needed to make this work.

Explicitly load the module at boot:

```ini
# /etc/modules-load.d/zram.conf
zram
```

Create the following udev rule, adjusting the `disksize` attribute as necessary:

```
# /etc/udev/rules.d/99-zram.rules
ACTION=="add", KERNEL=="zram0", ATTR{comp_algorithm}="zstd", ATTR{disksize}="4G", RUN="/usr/bin/mkswap -U clear /dev/%k", TAG+="systemd"
```

Add `/dev/zram0` to your fstab with a higher-than-default priority:

```
# /etc/fstab
/dev/zram0 none swap defaults,discard,pri=100 0 0
```

### Using zram-generator
`zram-generator` provides `systemd-zram-setup@zramN.service` units to automatically initialize zram devices, without users needing to enable/start the template or its instances.

To use it, install `zram-generator` and create `/etc/systemd/zram-generator.conf` with the following:

```ini
# /etc/systemd/zram-generator.conf

[zram0]
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
```

`zram-size` is the size (in MiB) of the zram device; you can use `ram` to represent the total memory.

`compression-algorithm` specifies the algorithm used to compress data in the zram device.
`cat /sys/block/zram0/comp_algorithm` lists the available compression algorithms (with the currently selected one in brackets).
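
The bracketed entry can be extracted with standard text tools; a sketch, with the sample string standing in for a live read of `comp_algorithm`:

```shell
# On a live system: algos=$(cat /sys/block/zram0/comp_algorithm)
algos="lzo lzo-rle lz4 lz4hc 842 [zstd]"
current=$(printf '%s\n' "$algos" | grep -o '\[[^]]*\]' | tr -d '[]')
echo "$current"
# → zstd
```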

Then run `systemctl daemon-reload` and start your configured `systemd-zram-setup@zramN.service` instance (`N` matching the numerical instance ID; in the example it is `systemd-zram-setup@zram0.service`).

You can check the swap status of your configured `/dev/zramN` device(s) by reading the unit status of your `systemd-zram-setup@zramN.service` instance(s), by using `zramctl`, or by using `swapon`.

## zramctl
zramctl is used to quickly set up zram device parameters, to reset zram devices, and to query the status of used zram devices.

Usage:
```sh
# Get info:
# If no option is given, all non-zero size zram devices are shown.
zramctl [options]

# Reset zram:
zramctl -r zramdev...

# Print name of first unused zram device:
zramctl -f

# Set up a zram device:
zramctl [-f | zramdev] [-s size] [-t number] [-a algorithm]
```

### Options

| Option | Description |
| ------------------------------------------------ | ----------- |
| `-a, --algorithm lzo/lz4/lz4hc/deflate/842/zstd` | Set the compression algorithm to be used for compressing data in the zram device. The list of supported algorithms could be inaccurate as it depends on the current kernel configuration. A basic overview can be obtained with `cat /sys/block/zram0/comp_algorithm`. |
| `-f, --find` | Find the first unused zram device. If a `--size` argument is present, then initialize the device. |
| `-n, --noheadings` | Do not print a header line in status output. |
| `-o, --output list` | Define the status output columns to be used. If no output arrangement is specified, then a default set is used. See below for a list of all supported columns. |
| `--output-all` | Output all available columns. |
| `--raw` | Use the raw format for status output. |
| `-r, --reset` | Reset the options of the specified zram device(s). Zram device settings can be changed only after a reset. |
| `-s, --size size` | Create a zram device of the specified size. Zram devices are aligned to memory pages; when the requested size is not a multiple of the page size, it will be rounded up to the next multiple. When not otherwise specified, the unit of the size parameter is bytes. |
| `-t, --streams number` | Set the maximum number of compression streams that can be used for the device. The default is to use all CPUs; on kernels older than 4.6 the default is one stream. |

### Output Columns

| Output | Description |
| ------------ | ------------------------------------------------------------------ |
| `NAME` | zram device name |
| `DISKSIZE` | limit on the uncompressed amount of data |
| `DATA` | uncompressed size of stored data |
| `COMPR` | compressed size of stored data |
| `ALGORITHM` | the selected compression algorithm |
| `STREAMS` | number of concurrent compress operations |
| `ZERO-PAGES` | empty pages with no allocated memory |
| `TOTAL` | all memory including allocator fragmentation and metadata overhead |
| `MEM-LIMIT` | memory limit used to store compressed data |
| `MEM-USED` | memory zram has consumed to store compressed data |
| `MIGRATED` | number of objects migrated by compaction |
| `COMP-RATIO` | compression ratio: DATA/TOTAL |
| `MOUNTPOINT` | where the device is mounted |

## Misc
### Checking zram statistics
Use zramctl. Example:

```
$ zramctl

NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 zstd 32G 1.9G 318.6M 424.9M 16 [SWAP]

DISKSIZE = 32G: this zram device will store up to 32 GiB of uncompressed data.
DATA = 1.9G: currently, 1.9 GiB (uncompressed) of data is being stored in this zram device.
COMPR = 318.6M: the 1.9 GiB of uncompressed data was compressed to 318.6 MiB.
TOTAL = 424.9M: including metadata, the 1.9 GiB of uncompressed data is using up 424.9 MiB of physical RAM.
```
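
As a sanity check, the `COMP-RATIO` column (defined above as DATA/TOTAL) can be derived from the sample figures by hand:

```shell
# DATA / TOTAL from the sample above, with DATA converted from GiB to MiB first
awk 'BEGIN { data = 1.9 * 1024; total = 424.9; printf "%.1f:1\n", data / total }'
# → 4.6:1
```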

### Multiple zram devices
By default, loading the zram module creates a single `/dev/zram0` device.

If you need more than one `/dev/zram` device, specify the amount using the `num_devices` kernel module parameter or add them as needed afterwards.

### Optimizing swap on zram
Since zram behaves differently from disk swap, we can configure the system's swap to take full advantage of zram:

```ini
# /etc/sysctl.d/99-vm-zram-parameters.conf

vm.swappiness = 180
vm.watermark_boost_factor = 0
vm.watermark_scale_factor = 125
vm.page-cluster = 0
```

### Enabling a backing device for a zram block
zram can be configured to push incompressible pages to a specified block device when under memory pressure.

To add a backing device manually:

```sh
echo /dev/sdX > /sys/block/zram0/backing_dev
```

To add a backing device to your zram block device using `zram-generator`, update `/etc/systemd/zram-generator.conf` with the following under the `[zramX]` device you want the backing device added to:

```ini
# /etc/systemd/zram-generator.conf

writeback-device=/dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
```

### Using zram for non-swap purposes
zram can also be used as a generic RAM-backed block device, e.g. a `/dev/ram` with less physical memory usage, but slightly lower performance. However, there are some caveats:
- There is no partition table support (no automatic creation of `/dev/zramXpY`).
- The block size is fixed to 4 KiB.

The obvious way around this is to stack a loop device on top of the zram device, using [losetup](../applications/cli/system/losetup.md), specifying the desired block size with the `-b` option and the `-P` option to process partition tables and automatically create the partition loop devices.

```sh
zramctl -f -s <SIZE>G
```

Copy the disk image to the new `/dev/zramx`, then create a loop device. If the disk image has a partition table, the block size of the loop device must match the block size used by the partition table, which is typically 512 or 4096 bytes.

```sh
losetup -f -b 512 -P /dev/zramx

mount /dev/loop0p1 /mnt/boot
mount /dev/loop0p2 /mnt/root
```
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Archiso
repo: https://gitlab.archlinux.org/archlinux/archiso
rev: 2024-12-17
---

# archiso
Archiso is a highly customizable tool for building Arch Linux live CD/USB ISO images; the official images are built with archiso. It can be used as the basis for rescue systems, Linux installers or other systems. This article explains how to install archiso and how to configure it to control aspects of the resulting ISO image, such as included packages and files. Technical requirements and build steps can be found in the official project documentation. Archiso is implemented as a number of bash scripts. The core component of archiso is the `mkarchiso` command. Its options are documented in `mkarchiso -h` and not covered here.

## Prepare a custom profile
Archiso comes with two profiles, `releng` and `baseline`.
- `releng` is used to create the official monthly installation ISO. It can be used as a starting point for creating a customized ISO image.
- `baseline` is a minimal configuration that includes only the bare minimum packages required to boot the live environment from the medium.

If you wish to adapt or customize one of archiso's shipped profiles, copy it from `/usr/share/archiso/configs/profile-name/` to a writable directory with a name of your choice. For example:

```sh
cp -r /usr/share/archiso/configs/releng/ archlive
```

## Profile structure
An archiso profile contains configuration that defines the resulting ISO image. The profile structure is documented in `/usr/share/doc/archiso/README.profile.rst`.

An archiso profile consists of several configuration files and a directory for files to be added to the resulting image.

```
profile/
├── airootfs/
├── efiboot/
├── syslinux/
├── grub/
├── bootstrap_packages.arch
├── packages.arch
├── pacman.conf
└── profiledef.sh
```

The required files and directories are explained in the following sections.

### profiledef.sh
This file describes several attributes of the resulting image and is a place for customization of the general behavior of the image.

The image file name is constructed from some of the variables in `profiledef.sh`: `<iso_name>-<iso_version>-<arch>.iso` (e.g. `archlinux-202010-x86_64.iso`).

* `iso_name`: The first part of the name of the resulting image (defaults to `mkarchiso`)
* `iso_label`: The ISO's volume label (defaults to `MKARCHISO`)
* `iso_publisher`: A free-form string that states the publisher of the resulting image (defaults to `mkarchiso`)
* `iso_application`: A free-form string that states the application (i.e. its use-case) of the resulting image (defaults to `mkarchiso iso`)
* `iso_version`: A string that states the version of the resulting image (defaults to `""`)
* `install_dir`: A string (maximum eight characters long, which **must** consist of `[a-z0-9]`) that states the directory on the resulting image into which all files will be installed (defaults to `mkarchiso`)
* `buildmodes`: An optional list of strings that state the build modes the profile uses. Only the following are understood:
  - `bootstrap`: Build a compressed file containing a minimal system to bootstrap from
  - `iso`: Build a bootable ISO image (implicit default, if no `buildmodes` are set)
  - `netboot`: Build artifacts required for netboot using iPXE
* `bootmodes`: A list of strings that state the supported boot modes of the resulting image. Only the following are understood:
  - `bios.syslinux.mbr`: Syslinux for x86 BIOS booting from a disk
  - `bios.syslinux.eltorito`: Syslinux for x86 BIOS booting from an optical disc
  - `uefi-ia32.grub.esp`: GRUB for IA32 UEFI booting from a disk
  - `uefi-ia32.grub.eltorito`: GRUB for IA32 UEFI booting from an optical disc
  - `uefi-x64.grub.esp`: GRUB for x64 UEFI booting from a disk
  - `uefi-x64.grub.eltorito`: GRUB for x64 UEFI booting from an optical disc
  - `uefi-ia32.systemd-boot.esp`: systemd-boot for IA32 UEFI booting from a disk
  - `uefi-ia32.systemd-boot.eltorito`: systemd-boot for IA32 UEFI booting from an optical disc
  - `uefi-x64.systemd-boot.esp`: systemd-boot for x64 UEFI booting from a disk
  - `uefi-x64.systemd-boot.eltorito`: systemd-boot for x64 UEFI booting from an optical disc

  Note that the BIOS El Torito boot mode must always be listed before the UEFI El Torito boot mode.
* `arch`: The architecture (e.g. `x86_64`) to build the image for. This is also used to resolve the name of the packages file (e.g. `packages.x86_64`)
* `pacman_conf`: The `pacman.conf` to use to install packages to the work directory when creating the image (defaults to the host's `/etc/pacman.conf`)
* `airootfs_image_type`: The image type to create. The following options are understood (defaults to `squashfs`):
  - `squashfs`: Create a squashfs image directly from the airootfs work directory
  - `ext4+squashfs`: Create an ext4 partition, copy the airootfs work directory to it and create a squashfs image from it
  - `erofs`: Create an EROFS image for the airootfs work directory
* `airootfs_image_tool_options`: An array of options to pass to the tool used to create the airootfs image. `mksquashfs` and `mkfs.erofs` are supported. See `mksquashfs --help` or `mkfs.erofs --help` for all possible options
* `bootstrap_tarball_compression`: An array containing the compression program and arguments passed to it for compressing the bootstrap tarball (defaults to `cat`). For example: `bootstrap_tarball_compression=(zstd -c -T0 --long -19)`.
* `file_permissions`: An associative array that lists files and/or directories which need specific ownership or permissions. The array's keys contain the path and the value is a colon-separated list of owner UID, owner GID and access mode. E.g. `file_permissions=(["/etc/shadow"]="0:0:400")`. When directories are listed with a trailing slash (`/`), **all** files and directories contained within the listed directory will have the same owner UID, owner GID, and access mode applied recursively.
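
The `uid:gid:mode` value format splits cleanly on `:`; a POSIX shell sketch of the parsing (illustration only — not mkarchiso's actual implementation):

```shell
# One entry from a hypothetical file_permissions array: path and its uid:gid:mode value
perm="/etc/shadow=0:0:400"
path=${perm%%=*}
value=${perm#*=}
uid=${value%%:*}; rest=${value#*:}
gid=${rest%%:*};  mode=${rest#*:}
echo "$path -> owner=$uid group=$gid mode=$mode"
# → /etc/shadow -> owner=0 group=0 mode=400
```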

### bootstrap_packages.arch
All packages to be installed into the environment of a bootstrap image have to be listed in an architecture-specific file (e.g. `bootstrap_packages.x86_64`), which resides top-level in the profile.

Packages have to be listed one per line. Lines starting with a `#` and blank lines are ignored.

This file is required when generating bootstrap images using the `bootstrap` build mode.

### packages.arch
All packages to be installed into the environment of an ISO image have to be listed in an architecture-specific file (e.g. `packages.x86_64`), which resides top-level in the profile.

Packages have to be listed one per line. Lines starting with a `#` and blank lines are ignored.

This file is required when generating ISO images using the `iso` or `netboot` build modes.
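
Since comments and blank lines are ignored, the effective package list of a profile can be previewed with a simple filter. A sketch, with a sample file written to a temporary path:

```shell
# Sample packages file; in a real profile this would be e.g. archlive/packages.x86_64
printf '%s\n' '# base system' 'base' 'linux' '' 'linux-firmware' > /tmp/packages.x86_64

# Print the effective package list (comments and blank lines removed)
grep -v '^#' /tmp/packages.x86_64 | grep -v '^$'
```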

### pacman.conf
A configuration for pacman is required per profile.

Some configuration options will not be used or will be modified:

* `CacheDir`: the profile's option is **only** used if it is not the default (i.e. `/var/cache/pacman/pkg`) and if it is not the same as the system's option. In all other cases the system's pacman cache is used.
* `HookDir`: it is **always** set to the `/etc/pacman.d/hooks` directory in the work directory's airootfs to allow modification via the profile and ensure interoperability with hosts using dracut
* `RootDir`: it is **always** removed, as setting it explicitly otherwise refers to the host's root filesystem (see `man 8 pacman` for further information on the `-r` option used by `pacstrap`)
* `LogFile`: it is **always** removed, as setting it explicitly otherwise refers to the host's pacman log file (see `man 8 pacman` for further information on the `-r` option used by `pacstrap`)
* `DBPath`: it is **always** removed, as setting it explicitly otherwise refers to the host's pacman database (see `man 8 pacman` for further information on the `-r` option used by `pacstrap`)

### airootfs
This optional directory may contain files and directories that will be copied to the work directory of the resulting image's root filesystem.
The files are copied before packages are installed to the work directory location.
Ownership and permissions of files and directories from the profile's `airootfs` directory are not preserved. The mode will be `644` for files and `755` for directories, and all of them will be owned by root. To set custom ownership and/or permissions, use `file_permissions` in `profiledef.sh`.

With this overlay structure it is possible to e.g. create users and set passwords for them, by providing `airootfs/etc/passwd`, `airootfs/etc/shadow`, `airootfs/etc/gshadow` (see `man 5 passwd`, `man 5 shadow` and `man 5 gshadow` respectively).
If user home directories exist in the profile's `airootfs`, their ownership and (top-level) permissions will be altered according to the information provided in the passwd file.

### Boot loader configuration
A profile may contain configuration for several boot loaders. These reside in specific top-level directories, which are explained in the following subsections.

The following *custom template identifiers* are understood and will be replaced according to the assignments of the respective variables in `profiledef.sh`:

* `%ARCHISO_LABEL%`: Set this using the `iso_label` variable in `profiledef.sh`.
* `%INSTALL_DIR%`: Set this using the `install_dir` variable in `profiledef.sh`.
* `%ARCH%`: Set this using the `arch` variable in `profiledef.sh`.

Additionally, there are *custom template identifiers* that have hardcoded values set by `mkarchiso` and cannot be overridden:

* `%ARCHISO_UUID%`: the ISO 9660 modification date in UTC, i.e. its "UUID"
* `%ARCHISO_SEARCH_FILENAME%`: file path on ISO 9660 that can be used by GRUB to find the ISO volume (**for GRUB `.cfg` files only**)
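
mkarchiso performs the substitution itself during the build; the sed sketch below only illustrates the effect on a boot loader config line (the label value `ARCH_202412` is an example, not a default):

```shell
# How a template identifier in a boot loader config expands (illustration only)
line='search --no-floppy --set=root --label %ARCHISO_LABEL%'
echo "$line" | sed 's/%ARCHISO_LABEL%/ARCH_202412/'
# → search --no-floppy --set=root --label ARCH_202412
```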

### efiboot
This directory is mandatory when the `uefi-x64.systemd-boot.esp` or `uefi-x64.systemd-boot.eltorito` bootmodes are selected in `profiledef.sh`. It contains configuration for `systemd-boot`.

> **Note:** The directory is a top-level representation of the systemd-boot configuration directories and files found in the root of an EFI system partition.

The *custom template identifiers* are **only** understood in the boot loader entry `.conf` files (i.e. **not** in `loader.conf`).

### syslinux
This directory is mandatory when the `bios.syslinux.mbr` or the `bios.syslinux.eltorito` bootmodes are selected in `profiledef.sh`.
It contains configuration files for `syslinux`, `isolinux` or `pxelinux` used in the resulting image.

The *custom template identifiers* are understood in all `.cfg` files in this directory.

### grub
This directory is mandatory when any of the following bootmodes is used in `profiledef.sh`:

- `uefi-ia32.grub.esp` or
- `uefi-ia32.grub.eltorito` or
- `uefi-x64.grub.esp` or
- `uefi-x64.grub.eltorito`

It contains configuration files for `GRUB` used in the resulting image.

## Customization
### Selecting packages
Edit `packages.x86_64` to select which packages are to be installed on the live system image, listing packages line by line.

### Custom local repository
To add packages not located in standard Arch repositories (e.g. packages from the AUR or customized with the ABS), set up a custom local repository and add your custom packages to it. Then add your repository to `pacman.conf` as follows:

```ini
[customrepo]
SigLevel = Optional TrustAll
Server = file:///path/to/customrepo
```

> **Note**: The ordering within `pacman.conf` matters. To give top priority to your custom repository, place it above the other repository entries.
> This `pacman.conf` is only used for building the image. It will not be used in the live environment.
> Ensure that the repository is located in a directory accessible by the chrooted mkarchiso process, such as `/tmp`, so that the repository is read correctly during the image building process.

### Packages from multilib
To install packages from the multilib repository, simply uncomment that repository in `pacman.conf`.

### Adding files to image
The `airootfs` directory is used as the starting point for the root directory (`/`) of the live system on the image. All its contents will be copied over to the working directory before packages are installed.

Place any custom files and/or directories in the desired location under `airootfs/`. For example, if you have a set of iptables scripts on your current system that you want to be used on your live image, copy them over as such:

```sh
cp -r /etc/iptables archlive/airootfs/etc
```

Similarly, some care is required for special configuration files that reside somewhere down the hierarchy. Missing parts of the directory structure can simply be created with `mkdir`.

> **Tip**: To add a file to the install user's home directory, place it in `archlive/airootfs/root/`. To add a file to all other users' home directories, place it in `archlive/airootfs/etc/skel/`.

> **Note**: Custom files that conflict with those provided by packages will be overwritten unless a package specifies them as backup files.

By default, permissions will be 644 for files and 755 for directories. All of them will be owned by the root user. To set different permissions or ownership for specific files and/or folders, use the `file_permissions` associative array in `profiledef.sh`.

### Adding repositories to the image
To add a repository that can be used in the live environment, create a suitably modified `pacman.conf` and place it in `archlive/airootfs/etc/`.

If the repository also uses a key, place the key in `archlive/airootfs/usr/share/pacman/keyrings/`. The key file name must end with `.gpg`. Additionally, the key must be trusted. This can be accomplished by creating a GnuPG exported trust file in the same directory. The file name must end with `-trusted`. The first field is the key fingerprint, and the second is the trust. You can reference `/usr/share/pacman/keyrings/archlinux-trusted` for an example.

#### archzfs example
The files in this example are:

```
airootfs
├── etc
│   ├── pacman.conf
│   └── pacman.d
│       └── archzfs_mirrorlist
└── usr
    └── share
        └── pacman
            └── keyrings
                ├── archzfs.gpg
                └── archzfs-trusted
```

`airootfs/etc/pacman.conf`:

```ini
[archzfs]
Include = /etc/pacman.d/archzfs_mirrorlist
```

`airootfs/etc/pacman.d/archzfs_mirrorlist`:

```
Server = https://archzfs.com/$repo/$arch
Server = https://mirror.sum7.eu/archlinux/archzfs/$repo/$arch
Server = https://mirror.biocrafting.net/archlinux/archzfs/$repo/$arch
Server = https://mirror.in.themindsmaze.com/archzfs/$repo/$arch
Server = https://zxcvfdsa.com/archzfs/$repo/$arch
```

`airootfs/usr/share/pacman/keyrings/archzfs-trusted`:

```
DDF7DB817396A49B2A2723F7403BD972F75D9D76:4:
```

`archzfs.gpg` itself can be obtained directly from the repository site at https://archzfs.com/archzfs.gpg.

### Kernel
Although both of archiso's included profiles only have linux, ISOs can be made to include other or even multiple kernels.

First, edit `packages.x86_64` to include the kernel package names that you want. When mkarchiso runs, it will include all `work_dir/airootfs/boot/vmlinuz-*` and `work_dir/boot/initramfs-*.img` files in the ISO (and additionally in the FAT image used for UEFI booting).

mkinitcpio presets by default will build fallback initramfs images. For an ISO, the main initramfs image would not typically include the autodetect hook, thus making an additional fallback image unnecessary. To prevent the creation of a fallback initramfs image, so that it does not take up space or slow down the build process, place a custom preset in `archlive/airootfs/etc/mkinitcpio.d/pkgbase.preset`. For example, for linux-lts:

`archlive/airootfs/etc/mkinitcpio.d/linux-lts.preset`:

```
PRESETS=('archiso')

ALL_kver='/boot/vmlinuz-linux-lts'
ALL_config='/etc/mkinitcpio.conf'

archiso_image="/boot/initramfs-linux-lts.img"
```

Finally, create boot loader configuration to allow booting the kernel(s).

### Boot loader
Archiso supports syslinux for BIOS booting and GRUB or systemd-boot for UEFI booting. Refer to the articles of the boot loaders for information on their configuration syntax.

mkarchiso expects GRUB configuration in the `grub` directory, systemd-boot configuration in the `efiboot` directory and syslinux configuration in the `syslinux` directory.

### UEFI Secure Boot
If you want to make your archiso bootable in a UEFI Secure Boot enabled environment, you must use a signed boot loader.

### systemd units
To enable systemd services/sockets/timers for the live environment, you need to manually create the symbolic links, just as `systemctl enable` does.

For example, to enable `gpm.service`, which contains `WantedBy=multi-user.target`, run:

```sh
mkdir -p archlive/airootfs/etc/systemd/system/multi-user.target.wants
ln -s /usr/lib/systemd/system/gpm.service archlive/airootfs/etc/systemd/system/multi-user.target.wants/
```

The required symlinks can be found out by reading the systemd unit, or, if you have the service installed, by enabling it and observing the systemctl output.
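
Reading the unit can be scripted: the `WantedBy=` line of the `[Install]` section names the `.wants` directory the symlink belongs in. A sketch, with the unit text inlined as a sample (normally you would read e.g. `/usr/lib/systemd/system/gpm.service`):

```shell
# Extract the WantedBy= target from a unit file to learn where the symlink goes
unit='[Unit]
Description=Virtual console mouse server

[Install]
WantedBy=multi-user.target'
printf '%s\n' "$unit" | sed -n 's/^WantedBy=//p'
# → multi-user.target
```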

### Login manager
Starting X at boot is done by enabling your login manager's systemd service. If you do not know which `.service` to enable, you can easily find out if you are using the same program on the system you build your ISO on. Just use:

```sh
ls -l /etc/systemd/system/display-manager.service
```

Now create the same symlink in `archlive/airootfs/etc/systemd/system/`.

### Changing automatic login
The configuration for getty's automatic login is located under `airootfs/etc/systemd/system/getty@tty1.service.d/autologin.conf`.

You can modify this file to change the autologin user:

```ini
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin username --noclear %I 38400 linux
```

Or remove `autologin.conf` altogether to disable autologin.

If you are using the serial console, create `airootfs/etc/systemd/system/serial-getty@ttyS0.service.d/autologin.conf` with the following content instead:

```ini
[Service]
ExecStart=
ExecStart=-/sbin/agetty -o '-p -- \\u' --noclear --autologin root --keep-baud 115200,57600,38400,9600 - $TERM
```

### Users and passwords
To create a user which will be available in the live environment, you must manually edit `archlive/airootfs/etc/passwd`, `archlive/airootfs/etc/shadow`, `archlive/airootfs/etc/group` and `archlive/airootfs/etc/gshadow`.

> **Note**: If these files exist, they must contain the root user and group.

For example, to add a user `archie`, add them to `archlive/airootfs/etc/passwd` following the passwd syntax:

```
root:x:0:0:root:/root:/usr/bin/zsh
archie:x:1000:1000::/home/archie:/usr/bin/zsh
```

> **Note**: The passwd file must end with a newline.

Add the user to `archlive/airootfs/etc/shadow` following the syntax of shadow. If you want to define a password for the user, generate a password hash with `openssl passwd -6` and add it to the file. For example:

```
root::14871::::::
archie:$6$randomsalt$cij4/pJREFQV/NgAgh9YyBIoCRRNq2jp5l8lbnE5aLggJnzIRmNVlogAg8N6hEEecLwXHtMQIl2NX2HlDqhCU1:14871::::::
```

Otherwise, you may keep the password field empty, meaning that the user can log in with no password.
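
A quick sketch of generating such a hash, assuming `openssl` is installed; a fixed salt is passed here only so the example is reproducible (normally let openssl pick a random one):

```shell
# Produce a SHA-512 crypt hash suitable for the second field of the shadow entry
openssl passwd -6 -salt examplesalt 'correct horse battery staple'
# prints a string of the form $6$examplesalt$...
```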
|
||||
|
||||
Add the user's group and the groups which they will part of to `archlive/airootfs/etc/group` according to group syntax. For example:
|
||||
|
||||
```
|
||||
root:x:0:root
|
||||
adm:x:4:archie
|
||||
wheel:x:10:archie
|
||||
uucp:x:14:archie
|
||||
archie:x:1000:
|
||||
```
|
||||
|
||||
Create the appropriate `archlive/airootfs/etc/gshadow` according to gshadow:
|
||||
|
||||
```
|
||||
root:!*::root
|
||||
archie:!*::
|
||||
```
|
||||
|
||||
Make sure `/etc/shadow` and `/etc/gshadow` have the correct permissions:
|
||||
|
||||
`archlive/profiledef.sh`:
|
||||
|
||||
```
|
||||
file_permissions=(
|
||||
...
|
||||
["/etc/shadow"]="0:0:0400"
|
||||
["/etc/gshadow"]="0:0:0400"
|
||||
)
|
||||
```
After package installation, mkarchiso will create all specified home directories for users listed in `archlive/airootfs/etc/passwd` and copy `work_directory/x86_64/airootfs/etc/skel/*` to them. The copied files will have proper user and group ownership.

### Changing the distribution name used in the ISO
Start by copying the file `/etc/os-release` into the `etc/` folder in the rootfs. Then, edit the file accordingly. You can also change the name inside of GRUB and syslinux.

### Adjusting the size of the root file system
When installing packages in the live environment, for example on hardware requiring DKMS modules, the default size of the root file system might not allow the download and installation of such packages.

To adjust the size on the fly:

```sh
mount -o remount,size=SIZE /run/archiso/cowspace
```

To adjust the size at the bootloader stage (as a kernel cmdline, by pressing `e` or `Tab`), use the boot option:

```sh
cow_spacesize=SIZE
```

To adjust the size while building an image, add the boot option to:
- `efiboot/loader/entries/*.cfg`
- `grub/*.cfg`
- `syslinux/*.cfg`

## Build the ISO
Build an ISO which you can then burn to a CD or USB by running:

```sh
mkarchiso -v -w /path/to/work_dir -o /path/to/out_dir /path/to/profile/
```

Replace `/path/to/profile/` with the path to your custom profile, or with `/usr/share/archiso/configs/releng/` if you are building an unmodified profile.

When run, the script will download and install the packages you specified to `work_directory/x86_64/airootfs`, create the kernel and init images, apply your customizations and finally build the ISO into the output directory.

> **Tip**: If memory allows, it is preferred to place the working directory on `tmpfs`.

```sh
mkdir ./work
mount -t tmpfs -o size=1G tmpfs ./work
mkarchiso -v -w ./work -o /path/to/out_dir /path/to/profile/
umount -r ./work
```

### Removal of work directory

> **Warning**: If mkarchiso is interrupted, run `findmnt` to make sure there are no bind mounts inside the work directory before deleting it - otherwise, you may lose data (e.g. an external device mounted at `/run/media/user/label` gets bound within `work/x86_64/airootfs/run/media/user/label` during the build process).

The temporary files are copied into the work directory. After successfully building the ISO, the work directory and its contents can be deleted, e.g.:

```sh
rm -rf /path/to/work_dir
```
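The `findmnt` check from the warning above can be wrapped in a small guard function (a sketch; `remove_workdir` is a hypothetical helper name, and `findmnt` from util-linux is assumed to be available):

```sh
# Delete a mkarchiso work directory only if nothing is mounted beneath it.
remove_workdir() {
    workdir=${1%/}
    # findmnt -rn -o TARGET prints every mount target, one per line.
    if findmnt -rn -o TARGET | grep -q "^${workdir}/"; then
        echo "mounts found under ${workdir} - not deleting" >&2
        return 1
    fi
    rm -rf "$workdir"
}

# Example: remove_workdir /path/to/work_dir
```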

@ -14,8 +14,6 @@ obj: meta/collection
- [MergerFS](MergerFS.md)
- [LVM](./LVM.md)
- [LUKS](./LUKS.md)
- [tmpFS](./tmpFS.md)
- [overlayfs](./overlayfs.md)

## Network
- [SSHFS](SSHFS.md)

@ -1,60 +0,0 @@
---
obj: filesystem
arch-wiki: https://wiki.archlinux.org/title/Overlay_filesystem
source: https://docs.kernel.org/filesystems/overlayfs.html
wiki: https://en.wikipedia.org/wiki/OverlayFS
rev: 2024-12-19
---

# OverlayFS
Overlayfs allows one, usually read-write, directory tree to be overlaid onto another, read-only directory tree. All modifications go to the upper, writable layer. This type of mechanism is most often used for live CDs, but there is a wide variety of other uses.

The implementation differs from other "union filesystem" implementations in that after a file is opened, all operations go directly to the underlying lower or upper filesystem. This simplifies the implementation and allows native performance in these cases.

## Usage
To mount an overlay, use the following mount options:

```sh
mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged
```

> **Note**:
> - The working directory (`workdir`) needs to be an empty directory on the same filesystem as the upper directory.
> - The lower directory can be read-only or could be an overlay itself.
> - The upper directory is normally writable.
> - The workdir is used to prepare files as they are switched between the layers.

The lower directory can actually be a list of directories separated by `:`; all changes in the merged directory are still reflected in the upper directory.

### Read-only overlay
Sometimes it is only desired to create a read-only view of the combination of two or more directories. In that case, it can be created more easily, as the `upper` and `work` directories are not required:

```sh
mount -t overlay overlay -o lowerdir=/lower1:/lower2 /merged
```

When `upperdir` is not specified, the overlay is automatically mounted as read-only.
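A read-only overlay can also be made persistent via `/etc/fstab` (a sketch; the paths are placeholders):

```
# /etc/fstab
overlay /merged overlay noauto,x-systemd.automount,lowerdir=/lower1:/lower2 0 0
```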

## Example

```sh
mount -t overlay overlay -o lowerdir=/lower1:/lower2:/lower3,upperdir=/upper,workdir=/work /merged
```

> **Note**: The order of lower directories is such that the rightmost is the lowest; thus the upper directory is on top of the first directory in the left-to-right list of lower directories, NOT on top of the last directory in the list, as the order might seem to suggest.

The above example will have the order:

- /upper
- /lower1
- /lower2
- /lower3

To add an overlayfs entry to `/etc/fstab`, use the following format:

```
# /etc/fstab
overlay /merged overlay noauto,x-systemd.automount,lowerdir=/lower,upperdir=/upper,workdir=/work 0 0
```

The `noauto` and `x-systemd.automount` mount options are necessary to prevent systemd from hanging on boot because it failed to mount the overlay. The overlay is now mounted whenever it is first accessed, and requests are buffered until it is ready.

@ -1,30 +0,0 @@
---
obj: filesystem
wiki: https://en.wikipedia.org/wiki/Tmpfs
arch-wiki: https://wiki.archlinux.org/title/Tmpfs
---

# tmpFS
tmpfs is a temporary filesystem that resides in memory and/or swap partition(s). Mounting directories as tmpfs can be an effective way of speeding up accesses to their files, or to ensure that their contents are automatically cleared upon reboot.

## Usage

**Create a tmpfs**:
`mount -t tmpfs -o [OPTIONS] tmpfs [MOUNT_POINT]`

**Resize a tmpfs**:
`mount -t tmpfs -o remount,size=<NEW_SIZE> tmpfs [MOUNT_POINT]`

### Options

| **Option** | **Description** |
| ------------------ | --------------- |
| `size=bytes` | Specify an upper limit on the size of the filesystem. Size is given in bytes, rounded up to entire pages. A `k`, `m`, or `g` suffix can be used for Ki, Mi, or Gi. Use `%` to specify a percentage of physical RAM. Default: 50%. Set to `0` to remove the limit. |
| `nr_blocks=blocks` | Similar to `size`, but in blocks of `PAGE_CACHE_SIZE`. Accepts `k`, `m`, or `g` suffixes, but not `%`. |
| `nr_inodes=inodes` | Sets the maximum number of inodes. Default is half the number of physical RAM pages or the number of lowmem RAM pages (whichever is smaller). Use `k`, `m`, or `g` suffixes, but `%` is not supported. Set to `0` to remove the limit. |
| `noswap` | Disables swap. Remounts must respect the original settings. By default, swap is enabled. |
| `mode=mode` | Sets the initial permissions of the root directory. |
| `gid=gid` | Sets the initial group ID of the root directory. |
| `uid=uid` | Sets the initial user ID of the root directory. |
| `huge=huge_option` | Sets the huge table memory allocation policy for all files (if `CONFIG_TRANSPARENT_HUGEPAGE` is enabled). Options: `never` (default), `always`, `within_size`, `advise`, `deny`, or `force`. |
| `mpol=mpol_option` | Sets NUMA memory allocation policy (if `CONFIG_NUMA` is enabled). Options: `default`, `prefer:node`, `bind:nodelist`, `interleave`, `interleave:nodelist`, or `local`. Example: `mpol=bind:0-3,5,7,9-15`. |
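
These options are commonly combined in an `/etc/fstab` entry; e.g. a size-limited, world-writable `/tmp` (a sketch; the size is illustrative):

```
# /etc/fstab
tmpfs /tmp tmpfs rw,nosuid,nodev,size=2G,mode=1777 0 0
```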

@ -1,7 +1,5 @@
---
obj: concept
arch-wiki: https://wiki.archlinux.org/title/Mkinitcpio
rev: 2024-12-16
---

# mkinitcpio

@ -10,11 +8,20 @@ The initial ramdisk is in essence a very small environment (early userspace) whi
## Configuration
The primary configuration file for _mkinitcpio_ is `/etc/mkinitcpio.conf`. Additionally, preset definitions are provided by kernel packages in the `/etc/mkinitcpio.d` directory (e.g. `/etc/mkinitcpio.d/linux.preset`).

- `MODULES` : Kernel modules to be loaded before any boot hooks are run.
- `BINARIES` : Additional binaries to be included in the initramfs image.
- `FILES` : Additional files to be included in the initramfs image.
- `HOOKS` : Hooks are scripts that execute in the initial ramdisk.
- `COMPRESSION` : Used to compress the initramfs image.
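
Put together, an `/etc/mkinitcpio.conf` might look like this (a sketch; the module and hook names are illustrative, not a recommendation):

```sh
# /etc/mkinitcpio.conf (excerpt)
MODULES=(btrfs)
BINARIES=()
FILES=()
HOOKS=(base udev autodetect modconf block filesystems keyboard fsck)
```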

### MODULES
The `MODULES` array is used to specify modules to load before anything else is done.

@ -54,28 +61,3 @@ The default `HOOKS` setting should be sufficient for most simple, single disk se
| **lvm2** | Adds the device mapper kernel module and the `lvm` tool to the image. |
| **fsck** | Adds the fsck binary and file system-specific helpers to allow running fsck against your root device (and `/usr` if separate) prior to mounting. If added after the **autodetect** hook, only the helper specific to your root file system will be added. Usage of this hook is **strongly** recommended, and it is required with a separate `/usr` partition. It is highly recommended that if you include this hook that you also include any necessary modules to ensure your keyboard will work in early userspace. |
| **filesystems** | This includes necessary file system modules into your image. This hook is **required** unless you specify your file system modules in `MODULES`. |

### UKI
A Unified Kernel Image (UKI) is a single executable file that can be directly booted by UEFI firmware or automatically sourced by boot loaders.

In essence, a UKI combines all the necessary components for the operating system to start up, including:
- EFI stub loader
- Kernel command line
- Microcode updates
- Initramfs image (initial RAM file system)
- Kernel image itself
- Splash screen

To enable the UKI, edit `/etc/mkinitcpio.d/linux.preset`:

```sh
default_uki="/boot/EFI/Linux/arch-linux.efi"

fallback_uki="/boot/EFI/Linux/arch-linux-fallback.efi"
```

Build the Unified Kernel Image:

```sh
mkinitcpio --allpresets
```

@ -1,57 +0,0 @@
---
obj: application
repo: https://github.com/Foxboron/sbctl
rev: 2024-12-16
---

# sbctl (Secure Boot Manager)
sbctl intends to be a user-friendly Secure Boot key manager capable of setting up Secure Boot, offering key management capabilities, and keeping track of files that need to be signed in the boot chain.

## Usage
Install the necessary packages:
```sh
pacman -S sbctl sbsigntools
```

Check that Secure Boot "Setup Mode" is "Enabled" in UEFI:
```sh
sbctl status
```

Create your own signing keys:
```sh
sbctl create-keys
```

Sign the systemd bootloader:
```sh
sbctl sign -s \
  -o /usr/lib/systemd/boot/efi/systemd-bootx64.efi.signed \
  /usr/lib/systemd/boot/efi/systemd-bootx64.efi
```

Enroll your custom keys:
```sh
sbctl enroll-keys

# Enroll and include Microsoft keys
sbctl enroll-keys --microsoft
```

Sign EFI files:
```sh
sbctl sign -s /boot/EFI/Linux/arch-linux.efi
sbctl sign -s /boot/EFI/Linux/arch-linux-fallback.efi
sbctl sign -s /efi/EFI/systemd/systemd-bootx64.efi
sbctl sign -s /efi/EFI/Boot/bootx64.efi
```

Verify the signatures of EFI files:
```sh
sbctl verify
```

Re-sign everything:
```sh
sbctl sign-all
```

@ -12,7 +12,6 @@ systemd is a suite of basic building blocks for a [Linux](../Linux.md) system. I
See also:
- [Systemd-Timers](Systemd-Timers.md)
- [systemd-boot](systemd-boot.md)
- [systemd-cryptenroll](systemd-cryptenroll.md)

## Using Units
Units commonly include, but are not limited to, services (_.service_), mount points (_.mount_), devices (_.device_) and sockets (_.socket_).


@ -1,7 +1,6 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Systemd-boot
rev: 2024-12-17
---

# Systemd Boot

@ -21,8 +20,7 @@ bootctl update
```

## Configuration
The loader configuration is stored in the file `_esp_/loader/loader.conf`. Example:

```
default arch.conf

@ -32,7 +30,7 @@ editor no
```

### Adding loaders
_systemd-boot_ will search for boot menu items in `_esp_/loader/entries/*.conf`.

Values:
- `title` : Name

@ -59,18 +57,4 @@ systemctl reboot --boot-loader-entry=arch-custom.conf
Firmware Setup:
```shell
systemctl reboot --firmware-setup
```

## Keybindings
While the menu is shown, the following keys are active:

| Key | Description |
| ------------- | ----------------------------------------------------------------------------------- |
| `Up` / `Down` | Select menu entry |
| `Enter` | Boot the selected entry |
| `d` | Select the default entry to boot (stored in a non-volatile EFI variable) |
| `t` / `T` | Adjust the timeout (stored in a non-volatile EFI variable) |
| `e` | Edit the option line (kernel command line) for this bootup to pass to the EFI image |
| `Q` | Quit |
| `v` | Show the systemd-boot and UEFI version |
| `P` | Print the current configuration to the console |

@ -1,130 +0,0 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Systemd-cryptenroll
rev: 2024-12-16
---

# systemd-cryptenroll
systemd-cryptenroll allows enrolling smartcards, FIDO2 tokens and Trusted Platform Module security chips into LUKS devices, as well as regular passphrases. These devices are later unlocked by `systemd-cryptsetup@.service`, using the enrolled tokens.

## Usage

### List keyslots
systemd-cryptenroll can list the keyslots in a LUKS device, similar to `cryptsetup luksDump`, but in a more user-friendly format.

```sh
$ systemd-cryptenroll /dev/disk

SLOT TYPE
   0 password
   1 tpm2
```

### Erasing keyslots

```sh
systemd-cryptenroll /dev/disk --wipe-slot=SLOT
```

Where `SLOT` can be:
- A single keyslot index
- A type of keyslot, which will erase all keyslots of that type. Valid types are `empty`, `password`, `recovery`, `pkcs11`, `fido2`, `tpm2`
- A combination of all of the above, separated by commas
- The string `all`, which erases all keyslots on the device. This option can only be used when enrolling another device or passphrase at the same time.

The `--wipe-slot` operation can be used in combination with all enrollment options, which is useful to update existing device enrollments:

```sh
systemd-cryptenroll /dev/disk --wipe-slot=fido2 --fido2-device=auto
```

### Enrolling passphrases
#### Regular password
This is equivalent to `cryptsetup luksAddKey`.

```sh
systemd-cryptenroll /dev/disk --password
```

#### Recovery key
Recovery keys are mostly identical to passphrases, but are computer-generated instead of being chosen by a human, and thus have a guaranteed high entropy. The key uses a character set that is easy to type in, and may be scanned off screen via a QR code.

A recovery key is designed to be used as a fallback if the hardware tokens are unavailable, and can be used in place of regular passphrases whenever they are required.

```sh
systemd-cryptenroll /dev/disk --recovery-key
```

### Enrolling hardware devices
The `--type-device` options must point to a valid device path of their respective type. A list of available devices can be obtained by passing the `list` argument to this option. Alternatively, if you only have a single device of the desired type connected, the `auto` option can be used to automatically select it.

#### PKCS#11 tokens or smartcards
The token or smartcard must contain an RSA key pair, which will be used to encrypt the generated key that will be used to unlock the volume.

```sh
systemd-cryptenroll /dev/disk --pkcs11-token-uri=device
```

#### FIDO2 tokens
Any FIDO2 token that supports the "hmac-secret" extension can be used with systemd-cryptenroll. The following example would enroll a FIDO2 token to an encrypted LUKS2 block device, requiring only user presence as authentication.

```sh
systemd-cryptenroll /dev/disk --fido2-device=device --fido2-with-client-pin=no
```

In addition, systemd-cryptenroll supports using the token's built-in user verification methods:
- `--fido2-with-user-presence` defines whether to verify user presence (i.e. by tapping the token) before unlocking; defaults to `yes`
- `--fido2-with-user-verification` defines whether to require user verification before unlocking; defaults to `no`

By default, the cryptographic algorithm used when generating a FIDO2 credential is `es256`, which denotes the Elliptic Curve Digital Signature Algorithm (ECDSA) over NIST P-256 with SHA-256. If desired and provided by the FIDO2 token, a different cryptographic algorithm can be specified during enrollment.

Suppose that a previous FIDO2 token has already been enrolled and the user wishes to enroll another; the following generates an `eddsa` credential, which denotes EdDSA over Curve25519 with SHA-512, and authenticates the device with a previously enrolled token instead of a password.

```sh
systemd-cryptenroll /dev/disk --fido2-device=device --fido2-credential-algorithm=eddsa --unlock-fido2-device=auto
```

#### Trusted Platform Module
systemd-cryptenroll has native support for enrolling LUKS keys in TPMs. It requires the following:
- `tpm2-tss` must be installed,
- A LUKS2 device (currently the default type used by cryptsetup),
- If you intend to use this method on your root partition, some tweaks need to be made to the initramfs

To begin, run the following command to list your installed TPMs and the driver in use:

```sh
systemd-cryptenroll --tpm2-device=list
```

> **Tip**: If your computer has multiple TPMs installed, specify the one you wish to use with `--tpm2-device=/path/to/tpm2_device` in the following steps.

A key may be enrolled in both the TPM and the LUKS volume using only one command. The following example generates a new random key, adds it to the volume so it can be used to unlock it in addition to the existing keys, and binds this new key to PCR 7 (Secure Boot state):

```sh
systemd-cryptenroll --tpm2-device=auto /dev/sdX
```

where `/dev/sdX` is the full path to the encrypted LUKS volume. Use `--unlock-key-file=/path/to/keyfile` if the LUKS volume is unlocked by a keyfile instead of a passphrase.

> **Note**: It is possible to require a PIN to be entered in addition to the TPM state being correct. Simply add the option `--tpm2-with-pin=yes` to the command above and enter the PIN when prompted.

To check that the new key was enrolled, dump the LUKS configuration and look for a `systemd-tpm2` token entry, as well as an additional entry in the Keyslots section:

```sh
cryptsetup luksDump /dev/sdX
```

To test that the key works, run the following command while the LUKS volume is closed:

```sh
systemd-cryptsetup attach mapping_name /dev/sdX none tpm2-device=auto
```

where `mapping_name` is your chosen name for the volume once opened.

##### Modules
If your TPM requires a kernel module, edit `/etc/mkinitcpio.conf` and edit the `MODULES` line to add the module used by your TPM. For instance:

```sh
MODULES=(tpm_tis)
```