Compare commits


44 commits
godot ... main

Author SHA1 Message Date
28933b6af8
update
Some checks failed
ci/woodpecker/push/validate_schema Pipeline failed
2025-01-28 10:31:34 +01:00
323d59d281
add bwrap + age 2025-01-09 10:52:16 +01:00
c1d6c28dff
add xdg-user-dirs 2025-01-08 13:11:28 +01:00
95e4663463
update pacman 2025-01-08 11:48:29 +01:00
012c4a1cde
update ufw
All checks were successful
ci/woodpecker/push/validate_schema Pipeline was successful
2024-12-23 09:17:58 +01:00
5c4b3e14bf
add zram 2024-12-20 12:46:21 +01:00
adc93877f4
update linux 2024-12-20 12:45:40 +01:00
cd03683f24
add plymouth 2024-12-20 09:52:58 +01:00
59623955d4
add overlayfs 2024-12-20 08:59:41 +01:00
ad237ca6d2
update arch pkg 2024-12-20 08:42:03 +01:00
f1ac09f57f
update rsync 2024-12-20 08:09:59 +01:00
40e711e9d0
update sddm 2024-12-18 15:49:31 +01:00
f7374157b3
add archiso
All checks were successful
ci/woodpecker/push/validate_schema Pipeline was successful
2024-12-17 14:25:09 +01:00
e0ff5de746
add tmpfs 2024-12-17 14:24:37 +01:00
d34710f673
add sddm
All checks were successful
ci/woodpecker/push/validate_schema Pipeline was successful
2024-12-17 10:57:31 +01:00
e3a4a1a7d7
update systemd 2024-12-17 10:56:02 +01:00
c85814db1a
add sbctl + systemd-cryptenroll
All checks were successful
ci/woodpecker/push/validate_schema Pipeline was successful
2024-12-16 16:20:32 +01:00
064dc6c5d3
add url api
All checks were successful
ci/woodpecker/push/validate_schema Pipeline was successful
2024-12-16 10:30:51 +01:00
619913dec3
add ogp
All checks were successful
ci/woodpecker/push/validate_schema Pipeline was successful
2024-12-16 09:45:52 +01:00
b0c4d4e19c
update tmux
All checks were successful
ci/woodpecker/push/validate_schema Pipeline was successful
2024-12-16 08:59:39 +01:00
67b61cff70
add usql
All checks were successful
ci/woodpecker/push/validate_schema Pipeline was successful
2024-12-10 10:25:15 +01:00
0b59b7e44c
update git
All checks were successful
ci/woodpecker/push/validate_schema Pipeline was successful
2024-12-04 14:07:33 +01:00
ae741d1ced
update 2024-12-03 21:20:35 +01:00
686440f307
woodpecker ci 2024-12-03 10:38:47 +01:00
c063dcd650
update 2024-12-03 10:38:21 +01:00
b0b8cf4428
add woodpecker ci 2024-12-03 10:31:42 +01:00
c465fd16f5
add json lines 2024-12-02 10:45:29 +01:00
4183941c78
add serie 2024-10-25 09:04:52 +02:00
c71c2d4da2
add owntracks + dawarich 2024-10-23 08:26:40 +02:00
4a2da573a2
add oha 2024-10-22 14:33:49 +02:00
a191343dad
add lychee 2024-10-22 08:19:29 +02:00
0d987a882c
add stew 2024-10-21 10:55:56 +02:00
dd36f51615
add nfs 2024-10-21 08:12:19 +02:00
8bc618e9ea
merge postgres 2024-09-30 13:40:22 +02:00
9dc6932a7c
add timescaledb 2024-09-30 13:37:38 +02:00
5c64b7e686
add pgvector 2024-09-30 11:02:04 +02:00
ef007be94c
add postgis 2024-09-30 10:54:21 +02:00
2b5dd3a5ee
add postgres 2024-09-30 10:22:19 +02:00
6d3a74a82d
fix 2024-09-30 07:45:10 +02:00
c091d75bc1
add authentik 2024-09-26 21:56:39 +02:00
cfb23e66e2
update vscode 2024-09-19 15:31:04 +02:00
464169cec5
add ocrs 2024-09-19 09:13:50 +02:00
3d1627074f
wip 2024-09-03 11:50:43 +02:00
4f93665533
wip 2024-09-03 11:48:51 +02:00
50 changed files with 6244 additions and 98 deletions

@@ -1,20 +0,0 @@
name: Validate Schema
on:
push:
branches:
- main
jobs:
validate:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Validation
uses: docker://git.hydrar.de/mdtools/mdtools:latest
with:
entrypoint: /bin/bash
args: scripts/validate_schema.sh

@@ -0,0 +1,9 @@
when:
- event: push
branch: main
steps:
- name: "Validate Schema"
image: git.hydrar.de/mdtools/mdtools:latest
commands:
- /bin/bash scripts/validate_schema.sh

@@ -1,6 +1,6 @@
---
obj: meta/collection
rev: 2024-07-14
rev: 2025-01-09
---
# Applications
@@ -38,6 +38,7 @@ rev: 2024-07-14
## Desktop
- [KDE Plasma](./desktops/KDE%20Plasma.md)
- [SDDM](./desktops/SDDM.md)
- [dwm](./desktops/dwm.md)
- [picom](./desktops/picom.md)
- [Hyprland](./desktops/hyprland.md)
@@ -53,10 +54,12 @@ rev: 2024-07-14
- [HTTPie](./development/HTTPie.md)
- [MongoDB Compass](./development/MongoDB%20Compass.md)
- [MongoDB](./development/MongoDB.md)
- [Postgres](./development/Postgres.md)
- [Podman Desktop](./development/Podman%20Desktop.md)
- [Visual Studio Code](./development/Visual%20Studio%20Code.md)
- [continue](./development/continue.md)
- [psequel](development/psequel.md)
- [PostgreSQL](development/Postgres.md)
## Documents
- [Tachiyomi](./documents/Tachiyomi.md)
@@ -101,7 +104,7 @@ rev: 2024-07-14
- [LocalSend](./network/LocalSend.md)
- [SnapDrop](./network/SnapDrop.md)
- [OnionShare](./network/OnionShare.md)
- [qBittorrent](./network/qBittorent.md)
- [qBittorrent](./network/qBittorrent.md)
## Utilities
- [Bottles](./utilities/Bottles.md)
@@ -128,13 +131,16 @@ rev: 2024-07-14
- [Google Maps](./mobile/Google%20Maps.md)
- [Google Calendar](./office/Google%20Calendar.md)
- [Google Contacts](./office/Google%20Contacts.md)
- [OwnTracks](./mobile/OwnTracks.md)
# Web
- [Authelia](./web/Authelia.md)
- [Authentik](./web/Authentik.md)
- [Bitwarden](./web/Bitwarden.md)
- [AdGuard](./web/AdGuard.md)
- [Gitea](./web/Gitea.md)
- [Forgejo](./web/Forgejo.md)
- [Woodpecker CI](./web/WoodpeckerCI.md)
- [SearXNG](./web/Searxng.md)
- [Grocy](./web/Grocy.md)
- [Guacamole](./web/Guacamole.md)
@@ -160,6 +166,7 @@ rev: 2024-07-14
- [traefik](./web/traefik.md)
- [Caddy](./web/Caddy.md)
- [zigbee2MQTT](./web/zigbee2mqtt.md)
- [dawarich](./web/dawarich.md)
# CLI
## Terminal
@@ -226,9 +233,12 @@ rev: 2024-07-14
- [yazi](./cli/yazi.md)
- [GPG](../cryptography/GPG.md)
- [OpenSSL](../cryptography/OpenSSL.md)
- [age](../cryptography/age.md)
- [tomb](./cli/tomb.md)
- [dysk](./cli/dysk.md)
- [pass](./cli/pass.md)
- [ocrs](./cli/ocrs.md)
- [stew](./cli/stew.md)
## System
- [Core Utils](./cli/system/Core%20Utils.md)
@@ -241,6 +251,9 @@ rev: 2024-07-14
- [mergerfs](../linux/filesystems/MergerFS.md)
- [sshfs](../linux/filesystems/SSHFS.md)
- [wine](../windows/Wine.md)
- [sbctl](../linux/sbctl.md)
- [systemd-cryptenroll](../linux/systemd/systemd-cryptenroll.md)
- [bubblewrap](./utilities/bubblewrap.md)
## Development
- [act](./development/act.md)
@@ -253,6 +266,8 @@ rev: 2024-07-14
- [Ansible](../tools/Ansible/Ansible.md)
- [Docker](../tools/Docker.md)
- [Podman](../tools/Podman.md)
- [serie](./cli/serie.md)
- [usql](./cli/usql.md)
## Media
- [yt-dlp](./media/yt-dlp.md)
@@ -282,6 +297,8 @@ rev: 2024-07-14
- [pop](./cli/pop.md)
- [intermodal](./cli/intermodal.md)
- [socat](./cli/network/socat.md)
- [lychee](./cli/network/lychee.md)
- [oha](./cli/network/oha.md)
## Backup
- [borg](./backup/borg.md)

@@ -1,38 +1,71 @@
---
obj: application
repo: https://github.com/casey/intermodal
website: imdl.io
rev: 2025-01-28
---
# Intermodal
[Repo](https://github.com/casey/intermodal)
Intermodal is a user-friendly and featureful command-line [BitTorrent](../../internet/BitTorrent.md) metainfo utility. The binary is called `imdl` and runs on [Linux](../../linux/Linux.md), [Windows](../../windows/Windows.md), and [macOS](../../macos/macOS.md).
## Usage
### Create a torrent file
```shell
imdl torrent create file
imdl torrent create [OPTIONS] <FILES>
```
Flags:
```shell
-N, --name <TEXT> Set name of torrent
-i, --input <INPUT> Torrent Files
-c, --comment <TEXT> Torrent Comment
-a, --announce <URL> Torrent Tracker
```
| Option | Description |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------- |
| `-F, --follow-symlinks` | Follow symlinks in torrent input (default: no) |
| `-f, --force` | Overwrite destination `.torrent` file if it exists |
| `--ignore` | Skip files listed in `.gitignore`, `.ignore`, `.git/info/exclude`, and `git config --get core.excludesFile` |
| `-h, --include-hidden` | Include hidden files that would otherwise be skipped |
| `-j, --include-junk` | Include junk files that would otherwise be skipped |
| `-M, --md5`                      | Include MD5 checksum of each file in the torrent (warning: MD5 is broken)                                    |
| `--no-created-by` | Do not populate `created by` key with imdl version information |
| `--no-creation-date` | Do not populate `creation date` key with current time |
| `-O, --open` | Open `.torrent` file after creation (uses platform-specific opener) |
| `--link` | Print created torrent `magnet:` URL to standard output |
| `-P, --private` | Set private flag, restricting peer discovery |
| `-S, --show` | Display information about the created torrent file |
| `-V, --version` | Print version number |
| `-A, --allow <LINT>` | Allow specific lint (e.g., `small-piece-length`, `private-trackerless`) |
| `-a, --announce <URL>` | Use primary tracker announce URL for the torrent |
| `-t, --announce-tier <URL-LIST>` | Add a tier of tracker announce URLs to the torrent metadata; separate the URLs within a tier with commas     |
| `-c, --comment <TEXT>` | Set comment text in the generated `.torrent` file |
| `--node <NODE>` | Add DHT bootstrap node to the torrent for peer discovery |
| `-g, --glob <GLOB>` | Include or exclude files matching specific glob patterns |
| `-i, --input <INPUT>` | Read contents from input source (file, dir, or standard input) |
| `-N, --name <TEXT>`              | Set the name of the torrent to specific text                                                                 |
| `-o, --output <TARGET>` | Save `.torrent` file to specified target or print to output |
| `--peer <PEER>` | Add peer specification to the generated magnet link |
| `-p, --piece-length <BYTES>` | Set piece length for encoding torrent metadata |
| `--sort-by <SPEC>` | Determine order of files within the encoded torrent (path, size, or both) |
| `-s, --source <TEXT>` | Set source field in encoded torrent metadata to specific text |
| `--update-url <URL>` | Set URL where revised version of metainfo can be downloaded |
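For example, a private torrent with a tracker and a comment might be created like this (the input directory and tracker URL are placeholders):
```shell
imdl torrent create --private \
  --announce https://tracker.example.org/announce \
  --comment "backup 2025" \
  --output backup.torrent \
  ./backup
```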
### Show torrent information
```shell
imdl torrent show <torrent>
```
You can output the information as JSON using `--json`.
### Verify torrent
```shell
imdl torrent verify <torrent>
imdl torrent verify --input torr.torrent --content file
```
### Generate magnet link
### Magnet Links
```shell
imdl torrent link <torrent>
# Get magnet link from torrent file
imdl torrent link [-s, --select-only <INDICES>...] <torrent>
# Select files to download. Values are indices into the `info.files` list, e.g. `--select-only 1,2,3`.
# Get torrent file from magnet link
imdl torrent from-link [-o, --output <OUT>] <INPUT>
# Announce a torrent
imdl torrent announce <INPUT>
```

@@ -0,0 +1,295 @@
---
obj: application
website: https://lychee.cli.rs
repo: https://github.com/lycheeverse/lychee
rev: 2024-10-22
---
# lychee
A fast, async link checker
Finds broken URLs and mail addresses inside Markdown, HTML, `reStructuredText`, websites and more!
## Usage
Usage: `lychee [OPTIONS] <inputs>...`
The inputs (where to get links to check from). These can be: files (e.g. `README.md`), file globs (e.g. `"~/git/*/README.md"`), remote URLs (e.g. `https://example.com/README.md`) or standard input (`-`). NOTE: Use `--` to separate inputs from options that allow multiple arguments
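For instance, since options like `--exclude` accept multiple values, `--` keeps them from swallowing the inputs (file names are placeholders):
```sh
lychee --exclude 'example\.com' -- README.md docs/intro.md
```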
### Options
| Option | Description |
| ----------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-c, --config <CONFIG_FILE>` | Configuration file to use [default: `lychee.toml`] |
| `-v, --verbose...` | Set verbosity level; more output per occurrence (e.g. `-v` or `-vv`) |
| `-q, --quiet...` | Less output per occurrence (e.g. `-q` or `-qq`) |
| `-n, --no-progress` | Do not show progress bar. This is recommended for non-interactive shells (e.g. for continuous integration) |
| `--cache` | Use request cache stored on disk at `.lycheecache` |
| `--max-cache-age <MAX_CACHE_AGE>` | Discard all cached requests older than this duration [default: 1d] |
| `--cache-exclude-status <CACHE_EXCLUDE_STATUS>` | A list of status codes that will be ignored from the cache |
| `--dump` | Don't perform any link checking. Instead, dump all the links extracted from inputs that would be checked |
| `--dump-inputs` | Don't perform any link extraction and checking. Instead, dump all input sources from which links would be collected |
| `--archive <ARCHIVE>` | Specify the use of a specific web archive. Can be used in combination with `--suggest` [possible values: wayback] |
| `--suggest` | Suggest link replacements for broken links, using a web archive. The web archive can be specified with `--archive` |
| `-m, --max-redirects <MAX_REDIRECTS>` | Maximum number of allowed redirects [default: 5] |
| `--max-retries <MAX_RETRIES>` | Maximum number of retries per request [default: 3] |
| `--max-concurrency <MAX_CONCURRENCY>` | Maximum number of concurrent network requests [default: 128] |
| `-T, --threads <THREADS>` | Number of threads to utilize. Defaults to number of cores available to the system |
| `-u, --user-agent <USER_AGENT>` | User agent [default: `lychee/0.16.1`] |
| `-i, --insecure` | Proceed for server connections considered insecure (invalid TLS) |
| `-s, --scheme <SCHEME>` | Only test links with the given schemes (e.g. https). Omit to check links with any other scheme. At the moment, we support http, https, file, and mailto |
| `--offline` | Only check local files and block network requests |
| `--include <INCLUDE>` | URLs to check (supports regex). Has preference over all excludes |
| `--exclude <EXCLUDE>` | Exclude URLs and mail addresses from checking (supports regex) |
| `--exclude-file <EXCLUDE_FILE>` | Deprecated; use `--exclude-path` instead |
| `--exclude-path <EXCLUDE_PATH>` | Exclude file path from getting checked |
| `-E, --exclude-all-private` | Exclude all private IPs from checking. Equivalent to `--exclude-private --exclude-link-local --exclude-loopback` |
| `--exclude-private` | Exclude private IP address ranges from checking |
| `--exclude-link-local` | Exclude link-local IP address range from checking |
| `--exclude-loopback` | Exclude loopback IP address range and localhost from checking |
| `--exclude-mail` | Exclude all mail addresses from checking (deprecated; excluded by default) |
| `--include-mail` | Also check email addresses |
| `--remap <REMAP>` | Remap URI matching pattern to different URI |
| `--header <HEADER>` | Custom request header |
| `-a, --accept <ACCEPT>` | A List of accepted status codes for valid links |
| `--include-fragments` | Enable the checking of fragments in links |
| `-t, --timeout <TIMEOUT>` | Website timeout in seconds from connect to response finished [default: 20] |
| `-r, --retry-wait-time <RETRY_WAIT_TIME>` | Minimum wait time in seconds between retries of failed requests [default: 1] |
| `-X, --method <METHOD>` | Request method [default: get] |
| `-b, --base <BASE>` | Base URL or website root directory to check relative URLs e.g. <https://example.com> or `/path/to/public` |
| `--basic-auth <BASIC_AUTH>` | Basic authentication support. E.g. `http://example.com username:password` |
| `--github-token <GITHUB_TOKEN>` | GitHub API token to use when checking github.com links, to avoid rate limiting [env: `$GITHUB_TOKEN`] |
| `--skip-missing` | Skip missing input files (default is to error if they don't exist) |
| `--no-ignore` | Do not skip files that would otherwise be ignored by '.gitignore', '.ignore', or the global ignore file |
| `--hidden` | Do not skip hidden directories and files |
| `--include-verbatim` | Find links in verbatim sections like `pre`- and `code` blocks |
| `--glob-ignore-case` | Ignore case when expanding filesystem path glob inputs |
| `-o, --output <OUTPUT>` | Output file of status report |
| `--mode <MODE>` | Set the output display mode. Determines how results are presented in the terminal [default: color] [possible values: plain, color, emoji] |
| `-f, --format <FORMAT>` | Output format of final status report [default: compact] [possible values: compact, detailed, json, markdown, raw] |
| `--require-https` | When HTTPS is available, treat HTTP links as errors |
| `--cookie-jar <COOKIE_JAR>` | Tell lychee to read cookies from the given file. Cookies will be stored in the cookie jar and sent with requests. New cookies will be stored in the cookie jar and existing cookies will be updated |
## Configuration
The configuration file is a TOML file that can be used to specify the options that are also available on the command line. It comes in handy when you want to specify a lot of options, or when you want to configure lychee for continuous integration as part of a repository (configuration as code).
`./lychee.toml` (in the current working directory) is used if no other configuration file is specified. Here is an example configuration file; the latest version can be found on GitHub.
```toml
############################# Display #############################
# Verbose program output
# Accepts log level: "error", "warn", "info", "debug", "trace"
verbose = "info"
# Don't show interactive progress bar while checking links.
no_progress = false
# Path to summary output file.
output = ".config.dummy.report.md"
############################# Cache ###############################
# Enable link caching. This can be helpful to avoid checking the same links on
# multiple runs.
cache = true
# Discard all cached requests older than this duration.
max_cache_age = "2d"
############################# Runtime #############################
# Number of threads to utilize.
# Defaults to number of cores available to the system if omitted.
threads = 2
# Maximum number of allowed redirects.
max_redirects = 10
# Maximum number of allowed retries before a link is declared dead.
max_retries = 2
# Maximum number of concurrent link checks.
max_concurrency = 14
############################# Requests ############################
# User agent to send with each request.
user_agent = "curl/7.83.1"
# Website timeout from connect to response finished.
timeout = 20
# Minimum wait time in seconds between retries of failed requests.
retry_wait_time = 2
# Comma-separated list of accepted status codes for valid links.
# Supported values are:
#
# accept = ["200..=204", "429"]
# accept = "200..=204, 429"
# accept = ["200", "429"]
# accept = "200, 429"
accept = ["200", "429"]
# Proceed for server connections considered insecure (invalid TLS).
insecure = false
# Only test links with the given schemes (e.g. https).
# Omit to check links with any other scheme.
# At the moment, we support http, https, file, and mailto.
scheme = ["https"]
# When links are available using HTTPS, treat HTTP links as errors.
require_https = false
# Request method
method = "get"
# Custom request headers
headers = []
# Remap URI matching pattern to different URI.
remap = ["https://example.com http://example.invalid"]
# Base URL or website root directory to check relative URLs.
base = "https://example.com"
# HTTP basic auth support. This will be the username and password passed to the
# authorization HTTP header. See
# <https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Authorization>
basic_auth = ["example.com user:pwd"]
############################# Exclusions ##########################
# Skip missing input files (default is to error if they don't exist).
skip_missing = false
# Check links inside `<code>` and `<pre>` blocks as well as Markdown code
# blocks.
include_verbatim = false
# Ignore case of paths when matching glob patterns.
glob_ignore_case = false
# Exclude URLs and mail addresses from checking (supports regex).
exclude = ['^https://www\.linkedin\.com', '^https://web\.archive\.org/web/']
# Exclude these filesystem paths from getting checked.
exclude_path = ["file/path/to/Ignore", "./other/file/path/to/Ignore"]
# URLs to check (supports regex). Has preference over all excludes.
include = ['gist\.github\.com.*']
# Exclude all private IPs from checking.
# Equivalent to setting `exclude_private`, `exclude_link_local`, and
# `exclude_loopback` to true.
exclude_all_private = false
# Exclude private IP address ranges from checking.
exclude_private = false
# Exclude link-local IP address range from checking.
exclude_link_local = false
# Exclude loopback IP address range and localhost from checking.
exclude_loopback = false
# Check mail addresses
include_mail = true
```
## GitHub Action
lychee is also available as a [GitHub Action](https://github.com/lycheeverse/lychee-action/). This way you can set up a job which regularly checks all links in your repository. If you like, it can open an issue when lychee finds problems with your links.
Here is a full example of a GitHub workflow file:
It will check all repository links once per day and create an issue in case of errors. Save this under `.github/workflows/links.yml`:
```yml
name: Links
on:
repository_dispatch:
workflow_dispatch:
schedule:
- cron: "00 18 * * *"
jobs:
linkChecker:
runs-on: ubuntu-latest
permissions:
issues: write # required for peter-evans/create-issue-from-file
steps:
- uses: actions/checkout@v4
- name: Link Checker
id: lychee
uses: lycheeverse/lychee-action@v2
- name: Create Issue From File
if: env.exit_code != 0
uses: peter-evans/create-issue-from-file@v5
with:
title: Link Checker Report
content-filepath: ./lychee/out.md
labels: report, automated issue
```
Here is how to pass the arguments.
```yml
- name: Link Checker
uses: lycheeverse/lychee-action@v2
with:
# Check all markdown, html and reStructuredText files in repo (default)
args: --base . --verbose --no-progress './**/*.md' './**/*.html' './**/*.rst'
# Use json as output format (instead of markdown)
format: json
# Use different output file path
output: /tmp/foo.txt
# Use a custom GitHub token
token: ${{ secrets.CUSTOM_TOKEN }}
# Don't fail action on broken links
fail: false
```
## Examples
**Check All Links In Current Directory**:
The following command recursively checks all links in all supported files inside the current directory.
```sh
lychee .
```
**Check All Links On A Website**:
```sh
lychee https://example.com
```
**Check Only Specific Files**:
```sh
lychee README.md
lychee test.html info.txt
lychee test.html info.txt https://example.com
```
**Check Links In Directories, But Block All Network Requests**:
```sh
lychee --offline path/to/directory
```
**Check Links In A Remote File**:
```sh
lychee https://raw.githubusercontent.com/lycheeverse/lychee/master/README.md
```
**Check links from stdin**:
```sh
cat test.md | lychee -
echo 'https://example.com' | lychee -
```

@@ -0,0 +1,54 @@
---
obj: application
repo: https://github.com/hatoo/oha
rev: 2024-10-22
---
# Ohayou (おはよう)
Ohayou (おはよう) is an HTTP load generator, inspired by rakyll/hey, with TUI animation.
## Usage
Usage: `oha [FLAGS] [OPTIONS] <url>`
### Options
| Option | Description |
| -------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-n <N_REQUESTS>` | Number of requests to run. [default: 200] |
| `-c <N_CONNECTIONS>` | Number of connections to run concurrently. You may need to increase the open-file limit for larger `-c`. [default: 50] |
| `-p <N_HTTP2_PARALLEL>` | Number of parallel requests to send on HTTP/2. `oha` will run `c * p` concurrent workers in total. [default: 1] |
| `-z <DURATION>` | Duration over which to send requests. If a duration is specified, `-n` is ignored. On HTTP/1, when the duration is reached, ongoing requests are aborted and counted as "aborted due to deadline"; you can change this behavior with the `-w` option. On HTTP/2, ongoing requests are currently waited for, and the `-w` option is ignored. |
| `-w, --wait-ongoing-requests-after-deadline` | When the duration is reached, ongoing requests are waited |
| `-q <QUERY_PER_SECOND>` | Rate limit for all, in queries per second (QPS) |
| `--burst-delay <BURST_DURATION>` | Introduce delay between a predefined number of requests. Note: If qps is specified, burst will be ignored |
| `--burst-rate <BURST_REQUESTS>` | Rates of requests for burst. Default is 1. Note: If qps is specified, burst will be ignored |
| `--rand-regex-url` | Generate URLs with the `rand_regex` crate, with dot disabled for each query, e.g. `http://127.0.0.1/[a-z][a-z][0-9]`. Currently, dynamic scheme, host, and port do not work well with keep-alive. |
| `--max-repeat <MAX_REPEAT>` | A parameter for `--rand-regex-url`. Sets the maximum number of extra repeats the `x*`, `x+`, and `x{n,}` operators will expand to. [default: 4] |
| `--dump-urls <DUMP_URLS>` | Dump target Urls `<DUMP_URLS>` times to debug `--rand-regex-url` |
| `--latency-correction` | Correct latency to avoid coordinated omission problem. It's ignored if `-q` is not set. |
| `--no-tui` | No realtime tui |
| `-j, --json` | Print results as JSON |
| `--fps <FPS>` | Frame per second for tui. [default: 16] |
| `-m, --method <METHOD>` | HTTP method [default: `GET`] |
| `-H <HEADERS>` | Custom HTTP header. Examples: `-H "foo: bar"` |
| `-t <TIMEOUT>` | Timeout for each request. Default to infinite. |
| `-A <ACCEPT_HEADER>` | HTTP Accept Header. |
| `-d <BODY_STRING>` | HTTP request body. |
| `-D <BODY_PATH>` | HTTP request body from file. |
| `-T <CONTENT_TYPE>` | Content-Type |
| `-a <BASIC_AUTH>` | Basic authentication, `username:password` |
| `--http-version <HTTP_VERSION>` | HTTP version |
| `--http2` | Use HTTP/2. Shorthand for `--http-version=2` |
| `--host <HOST>` | HTTP Host header |
| `--disable-compression` | Disable compression. |
| `-r, --redirect <REDIRECT>` | Limit for number of Redirect. Set 0 for no redirection. Redirection isn't supported for HTTP/2. [default: 10] |
| `--disable-keepalive` | Disable keep-alive, prevents re-use of TCP connections between different HTTP requests. This isn't supported for HTTP/2. |
| `--no-pre-lookup` | Do *not* perform a DNS lookup at the beginning to cache it |
| `--ipv6` | Lookup only ipv6. |
| `--ipv4` | Lookup only ipv4. |
| `--insecure` | Accept invalid certs. |
| `--connect-to <CONNECT_TO>` | Override DNS resolution and default port numbers with strings like 'example.org:443:localhost:8443' |
| `--disable-color` | Disable the color scheme. |
| `--unix-socket <UNIX_SOCKET>` | Connect to a unix socket instead of the domain in the URL. Only for non-HTTPS URLs. |
| `--stats-success-breakdown` | Include a response status code successful or not successful breakdown for the time histogram and distribution statistics |
| `--db-url <DB_URL>` | Write successful requests to an SQLite database URL, e.g. `test.db` |
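A typical run against a local test server might look like this (the target URL is a placeholder):
```sh
# 1000 requests over 50 concurrent connections, JSON summary instead of the TUI
oha -n 1000 -c 50 --no-tui -j http://127.0.0.1:8080/
```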

@@ -0,0 +1,29 @@
---
obj: application
repo: https://github.com/robertknight/ocrs
rev: 2024-09-19
---
# ocrs
ocrs is a Rust library and CLI tool for extracting text from images, also known as OCR (Optical Character Recognition).
The goal is to create a modern OCR engine that:
- Works well on a wide variety of images (scanned documents, photos containing text, screenshots, etc.) with little or no preprocessing effort compared to earlier engines like Tesseract. This is achieved by using machine learning more extensively in the pipeline.
- Is easy to compile and run across a variety of platforms, including WebAssembly
- Is trained on open and liberally licensed datasets
- Has a codebase that is easy to understand and modify
## Usage
ocrs can be used as a binary or embedded as a [rust](../../dev/programming/languages/Rust.md) crate.
Usage: `ocrs [OPTIONS] <image>`
### Options
| Option | Description |
| ----------------------- | -------------------------------------------------- |
| `--detect-model <path>` | Use a custom text detection model |
| `--rec-model <path>` | Use a custom text recognition model |
| `-j, --json` | Output text and structure in JSON format |
| `-o, --output <path>` | Output file path (defaults to stdout) |
| `-p, --png` | Output annotated copy of input image in PNG format |
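For example, extracting text from a screenshot into a JSON file might look like this (file names are placeholders):
```sh
ocrs screenshot.png --json -o out.json
```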

@@ -1,6 +1,7 @@
---
obj: application
website: https://rsync.samba.org/
website: https://rsync.samba.org
arch-wiki: https://wiki.archlinux.org/title/Rsync
repo: https://github.com/WayneD/rsync
---
@@ -44,4 +45,3 @@ Either `source` or `destination` can be a local folder or a remote path (`user@h
| --log-file=FILE | log what we're doing to the specified FILE |
| --partial | keep partially transferred files |
| -P | same as --partial --progress |

@@ -0,0 +1,18 @@
---
obj: application
repo: https://github.com/lusingander/serie
rev: 2024-10-25
---
# serie
A rich git commit graph in your terminal
## Keybinds
- `?` - Open help
- `Enter` - Show commit details
- `Tab` - Open refs list
- `/` - Start search
- `ESC` - Cancel
- `n/N` - Go to next/previous search match
- `c/C` - Copy commit short/full hash

@@ -0,0 +1,104 @@
---
obj: application
repo: https://github.com/marwanhawari/stew
rev: 2024-10-21
---
# stew
🥘 An independent package manager for compiled binaries.
## Features
* Install binaries from GitHub releases or directly from URLs.
* Easily distribute binaries across teams and private repositories.
* Get the latest releases ahead of other package managers.
* Rapidly browse, install, and experiment with different projects.
* Configure where to install binaries.
* No need for `sudo`.
* Just a single binary with 0 dependencies.
* Portable `Stewfile` with optional pinned versioning.
* Headless batch installs from a `Stewfile.lock.json` file.
## Usage
### Install
```sh
# Install from GitHub releases
stew install junegunn/fzf # Install the latest release
stew install junegunn/fzf@0.27.1 # Install a specific, tagged version
# Install directly from a URL
stew install https://github.com/cli/cli/releases/download/v2.4.0/gh_2.4.0_macOS_amd64.tar.gz
# Install from a Stewfile
stew install Stewfile
# Install headlessly from a Stewfile.lock.json
stew install Stewfile.lock.json
```
### Search
```sh
# Search for a GitHub repo and browse its contents with a terminal UI
stew search ripgrep
```
### Browse
```sh
# Browse a specific GitHub repo's releases and assets with a terminal UI
stew browse sharkdp/hyperfine
```
### Upgrade
```sh
# Upgrade a binary to its latest version. Not for binaries installed from a URL.
stew upgrade rg # Upgrade using the name of the binary directly
stew upgrade --all # Upgrade all binaries
```
### Uninstall
```sh
# Uninstall a binary
stew uninstall rg # Uninstall using the name of the binary directly
stew uninstall --all # Uninstall all binaries
```
### Rename
```sh
# Rename an installed binary using an interactive UI
stew rename rg # Rename using the name of the binary directly
```
### List
```sh
# List installed binaries
stew list # Print to console
stew list > Stewfile # Create a Stewfile without pinned tags
stew list --tags > Stewfile # Pin tags
```
### Config
```sh
# Configure the stew file paths using an interactive UI
stew config # Automatically updates the stew.config.json
```
## Configuration
`stew` can be configured with a `stew.config.json` file. The location of this file will depend on your OS:
| Linux/macOS | Windows |
| ----------- | ------- |
| `$XDG_CONFIG_HOME/stew` or `~/.config/stew` | `~/AppData/Local/stew/Config` |
You can configure two aspects of `stew`:
1. The `stewPath`: this is where `stew` data is stored.
2. The `stewBinPath`: this is where `stew` installs binaries.
The default locations for these are:
| | Linux/macOS | Windows |
| ------------ | ------------ | ---------- |
| `stewPath` | `$XDG_DATA_HOME/stew` or `~/.local/share/stew` | `~/AppData/Local/stew` |
| `stewBinPath` | `~/.local/bin` | `~/AppData/Local/stew/bin` |
There are multiple ways to configure these:
* When you first run `stew`, it will look for a `stew.config.json` file. If it cannot find one, then you will be prompted to set the configuration values.
* After `stew` is installed, you can use the `stew config` command to set the configuration values.
* At any time, you can manually create or edit the `stew.config.json` file. It should have values for `stewPath` and `stewBinPath`.
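A minimal `stew.config.json` might look like this (the paths are examples for a Linux setup):
```json
{
  "stewPath": "/home/user/.local/share/stew",
  "stewBinPath": "/home/user/.local/bin"
}
```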

@@ -3,7 +3,7 @@ obj: application
repo: https://github.com/tmux/tmux
arch-wiki: https://wiki.archlinux.org/title/tmux
wiki: https://en.wikipedia.org/wiki/Tmux
rev: 2024-01-15
rev: 2024-12-16
---
# tmux
@@ -12,7 +12,9 @@ tmux is a terminal multiplexer: it enables a number of terminals to be created,
# Usage
**New tmux session:**
```shell
tmux
tmux new -s name
tmux new -s mysession -n mywindow
```
**List existing sessions:**
@@ -23,6 +25,7 @@ tmux ls
**Attach to a named session:**
```shell
tmux attach -t name
tmux a -t name
```
**Kill a session:**
@@ -31,14 +34,30 @@ tmux kill-session -t name
```
# Keybinds
- Vertical Split: `Ctrl-b %`
- Horizontal Split: `Ctrl-b "`
- Select Pane: `Ctrl-b q [num]`
- Change Pane Size: `Ctrl-b Ctrl [Down/Up/Left/Right]`
- Switch sessions: `Ctrl-b s`
- Show the time: `Ctrl-b + t`
## Sessions
- Rename current session: `Ctrl-b + $`
- Detach from a running session: `Ctrl-b + d`
- Create a new window inside session: `Ctrl-b c`
- Go to next window: `Ctrl-b n`
- Switch sessions and windows: `Ctrl-B w`
- Go to window: `Ctrl-b [0-9]`
- Kill a window: `Ctrl-b x`
- Sessions and windows overview: `Ctrl-b + w`
- Move to previous session: `Ctrl-b + (`
- Move to next session: `Ctrl-b + )`
- Switch sessions: `Ctrl-b + s`
## Windows
- Create a new window: `Ctrl-b + c`
- Rename current window: `Ctrl-b + ,`
- Go to previous window: `Ctrl-b + p`
- Go to next window: `Ctrl-b + n`
- Go to window: `Ctrl-b + [0-9]`
## Panes
- Vertical Split: `Ctrl-b + %`
- Horizontal Split: `Ctrl-b + "`
- Select Pane: `Ctrl-b + q + [num]`
- Change Pane Size: `Ctrl-b + Ctrl + [Down/Up/Left/Right]`
- Move current pane left: `Ctrl-b + {`
- Move current pane right: `Ctrl-b + }`
- Close current pane: `Ctrl-b + x`
- Switch to the next pane: `Ctrl-b + o`
- Convert pane into a window: `Ctrl-b + !`

@@ -0,0 +1,229 @@
---
obj: application
repo: https://github.com/xo/usql
rev: 2024-12-10
---
# usql
usql is a universal command-line interface for PostgreSQL, MySQL, Oracle Database, SQLite3, Microsoft SQL Server, and many other databases including NoSQL and non-relational databases!
usql provides a simple way to work with SQL and NoSQL databases via a command line inspired by PostgreSQL's psql. usql supports most of the core psql features, such as variables, backticks, and backslash commands, and has additional features that psql does not, such as multiple database support, copying between databases, syntax highlighting, context-based completion, and terminal graphics.
## Usage
```sh
usql [options]... [DSN]
```
DSN can be any database connection string like `sqlite:///path/to/my/file` or `postgres://user:pass@host:port/db`.
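For example (file paths and credentials are placeholders):
```sh
# open an interactive session against a local SQLite file
usql sqlite:///path/to/mydb.sqlite3

# run a single query against PostgreSQL and exit
usql -c 'select version();' postgres://user:pass@localhost/mydb
```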
### Options
| Option | Description |
| ----------------------------------------- | -------------------------------------------------------------------------------------- |
| `-c, --command COMMAND` | run only single command (SQL or internal) and exit |
| `-f, --file FILE` | execute commands from file and exit |
| `-w, --no-password` | never prompt for password |
| `-X, --no-init` | do not execute initialization scripts (aliases: `--no-rc` `--no-psqlrc` `--no-usqlrc`) |
| `-o, --out FILE` | output file |
| `-W, --password` | force password prompt (should happen automatically) |
| `-1, --single-transaction` | execute as a single transaction (if non-interactive) |
| `-v, --set NAME=VALUE` | set variable NAME to VALUE (see \set command, aliases: --var --variable) |
| `-N, --cset NAME=DSN` | set named connection NAME to DSN (see \cset command) |
| `-P, --pset VAR=ARG` | set printing option VAR to ARG (see \pset command) |
| `-F, --field-separator FIELD-SEPARATOR` | field separator for unaligned and CSV output |
| `-R, --record-separator RECORD-SEPARATOR` | record separator for unaligned and CSV output (default \n) |
| `-T, --table-attr TABLE-ATTR` | set HTML table tag attributes (e.g., width, border) |
| `-A, --no-align` | unaligned table output mode |
| `-H, --html` | HTML table output mode |
| `-t, --tuples-only` | print rows only |
| `-x, --expanded` | turn on expanded table output |
| `-z, --field-separator-zero` | set field separator for unaligned and CSV output to zero byte |
| `-0, --record-separator-zero` | set record separator for unaligned and CSV output to zero byte |
| `-J, --json` | JSON output mode |
| `-C, --csv` | CSV output mode |
| `-G, --vertical` | vertical output mode |
| `-q, --quiet` | run quietly (no messages, only query output) |
| `--config string` | config file |
## Commands
| Command | Description |
| ---------------------------------- | ----------------------------------------------------------------------------- |
| **General:** | |
| `\q` | quit usql |
| `\quit` | alias for `\q` |
| `\drivers` | show database drivers available to usql |
| **Connection:** | |
| `\c DSN` | connect to database url |
| `\c DRIVER PARAMS...` | connect to database with driver and parameters |
| `\cset [NAME [DSN]]` | set named connection, or list all if no parameters |
| `\cset NAME DRIVER PARAMS...` | define named connection for database driver |
| `\Z` | close database connection |
| `\password [USERNAME]` | change the password for a user |
| `\conninfo` | display information about the current database connection |
| **Operating System:** | |
| `\cd [DIR]` | change the current working directory |
| `\getenv VARNAME ENVVAR` | fetch environment variable |
| `\setenv NAME [VALUE]` | set or unset environment variable |
| `\! [COMMAND]` | execute command in shell or start interactive shell |
| `\timing [on/off]` | toggle timing of commands |
| **Variables:** | |
| `\prompt [-TYPE] VAR [PROMPT]` | prompt user to set variable |
| `\set [NAME [VALUE]]` | set internal variable, or list all if no parameters |
| `\unset NAME` | unset (delete) internal variable |
| **Query Execute:** | |
| `\g [(OPTIONS)] [FILE] or ;` | execute query (and send results to file or pipe) |
| `\G [(OPTIONS)] [FILE]` | as \g, but forces vertical output mode |
| `\gx [(OPTIONS)] [FILE]` | as \g, but forces expanded output mode |
| `\gexec` | execute query and execute each value of the result |
| `\gset [PREFIX]` | execute query and store results in usql variables |
| **Query Buffer:** | |
| `\e [FILE] [LINE]` | edit the query buffer (or file) with external editor |
| `\p` | show the contents of the query buffer |
| `\raw` | show the raw (non-interpolated) contents of the query buffer |
| `\r` | reset (clear) the query buffer |
| **Input/Output:** | |
| `\copy SRC DST QUERY TABLE` | copy query from source url to table on destination url |
| `\copy SRC DST QUERY TABLE(A,...)` | copy query from source url to columns of table on destination url |
| `\echo [-n] [STRING]` | write string to standard output (-n for no newline) |
| `\qecho [-n] [STRING]` | write string to \o output stream (-n for no newline) |
| `\warn [-n] [STRING]` | write string to standard error (-n for no newline) |
| `\o [FILE]` | send all query results to file or pipe |
| **Informational:** | |
| `\d[S+] [NAME]` | list tables, views, and sequences or describe table, view, sequence, or index |
| `\da[S+] [PATTERN]` | list aggregates |
| `\df[S+] [PATTERN]` | list functions |
| `\di[S+] [PATTERN]` | list indexes |
| `\dm[S+] [PATTERN]` | list materialized views |
| `\dn[S+] [PATTERN]` | list schemas |
| `\dp[S] [PATTERN]` | list table, view, and sequence access privileges |
| `\ds[S+] [PATTERN]` | list sequences |
| `\dt[S+] [PATTERN]` | list tables |
| `\dv[S+] [PATTERN]` | list views |
| `\l[+]` | list databases |
| `\ss[+] [TABLE/QUERY] [k]` | show stats for a table or a query |
| **Formatting:** | |
| `\pset [NAME [VALUE]]` | Set table output option |
| `\a` | Toggle between unaligned and aligned output mode |
| `\C [STRING]` | Set table title, or unset if none |
| `\f [STRING]` | Show or set field separator for unaligned query output |
| `\H` | Toggle HTML output mode |
| `\T [STRING]` | Set HTML `<table>` tag attributes, or unset if none |
| `\t [on/off]` | Show only rows |
| `\x [on/off/auto]` | Toggle expanded output |
| **Transaction:** | |
| `\begin` | Begin a transaction |
| `\begin [-read-only] [ISOLATION]` | Begin a transaction with isolation level |
| `\commit` | Commit current transaction |
| `\rollback` | Rollback (abort) current transaction |
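For instance, following the `\copy SRC DST QUERY TABLE` form above, query results can be moved straight between two databases (DSNs and table names are placeholders):
```
\copy postgres://user:pass@localhost/mydb sqlite:local.db 'select id, name from users' users
```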
## Configuration
During its initialization phase, usql reads a standard YAML configuration file `config.yaml`. On Windows this is `%AppData%/usql/config.yaml`, on macOS this is `$HOME/Library/Application Support/usql/config.yaml`, and on Linux and other Unix systems this is normally `$HOME/.config/usql/config.yaml`.
```yml
# named connections
# name can be used instead of database url
connections:
my_couchbase_conn: couchbase://Administrator:P4ssw0rd@localhost
my_clickhouse_conn: clickhouse://clickhouse:P4ssw0rd@localhost
css: cassandra://cassandra:cassandra@localhost
fsl: flightsql://flight_username:P4ssw0rd@localhost
gdr:
protocol: godror
username: system
password: P4ssw0rd
hostname: localhost
port: 1521
database: free
ign: ignite://ignite:ignite@localhost
mss: sqlserver://sa:Adm1nP@ssw0rd@localhost
mym: mysql://root:P4ssw0rd@localhost
myz: mymysql://root:P4ssw0rd@localhost
ora: oracle://system:P4ssw0rd@localhost/free
ore: oracle://system:P4ssw0rd@localhost:1522/db1
pgs: postgres://postgres:P4ssw0rd@localhost
pgx: pgx://postgres:P4ssw0rd@localhost
vrt:
proto: vertica
user: vertica
pass: vertica
host: localhost
sll:
file: /path/to/mydb.sqlite3
mdc: modernsqlite:test.db
dkd: test.duckdb
zzz: ["databricks", "token:dapi*****@adb-*************.azuredatabricks.net:443/sql/protocolv1/o/*********/*******"]
zz2:
proto: mysql
user: "my username"
pass: "my password!"
host: localhost
opts:
opt1: "😀"
# init script
init: |
\echo welcome to the jungle `date`
\set SYNTAX_HL_STYLE paraiso-dark
\set PROMPT1 '\033[32m%S%M%/%R%#\033[0m '
\set bar test
\set foo test
-- \set SHOW_HOST_INFORMATION false
-- \set SYNTAX_HL false
\set 型示師 '本門台初埼本門台初埼'
# charts path
charts_path: charts
# defined queries
queries:
q1:
```
### Time Formatting
Some databases support time/date columns that support formatting. By default, usql formats time/date columns as RFC3339Nano, and can be set using `\pset time FORMAT`:
```
$ usql pg://
Connected with driver postgres (PostgreSQL 13.2 (Debian 13.2-1.pgdg100+1))
Type "help" for help.
pg:postgres@=> \pset
time RFC3339Nano
pg:postgres@=> select now();
now
-----------------------------
2021-05-01T22:21:44.710385Z
(1 row)
pg:postgres@=> \pset time Kitchen
Time display is "Kitchen" ("3:04PM").
pg:postgres@=> select now();
now
---------
10:22PM
(1 row)
```
usql's time format can be any Go-supported time format string, or any standard Go time constant name, such as Kitchen above. See below for an overview of the available time constants.
#### Time Constants
The following are the time constant names available in `usql`, corresponding time format value, and example display output:
| Constant | Format | Display |
| ----------- | ------------------------------------: | ----------------------------------: |
| ANSIC | `Mon Jan _2 15:04:05 2006` | `Wed Aug 3 20:12:48 2022` |
| UnixDate | `Mon Jan _2 15:04:05 MST 2006` | `Wed Aug 3 20:12:48 UTC 2022` |
| RubyDate | `Mon Jan 02 15:04:05 -0700 2006` | `Wed Aug 03 20:12:48 +0000 2022` |
| RFC822 | `02 Jan 06 15:04 MST` | `03 Aug 22 20:12 UTC` |
| RFC822Z | `02 Jan 06 15:04 -0700` | `03 Aug 22 20:12 +0000` |
| RFC850 | `Monday, 02-Jan-06 15:04:05 MST` | `Wednesday, 03-Aug-22 20:12:48 UTC` |
| RFC1123 | `Mon, 02 Jan 2006 15:04:05 MST` | `Wed, 03 Aug 2022 20:12:48 UTC` |
| RFC1123Z | `Mon, 02 Jan 2006 15:04:05 -0700` | `Wed, 03 Aug 2022 20:12:48 +0000` |
| RFC3339 | `2006-01-02T15:04:05Z07:00` | `2022-08-03T20:12:48Z` |
| RFC3339Nano | `2006-01-02T15:04:05.999999999Z07:00` | `2022-08-03T20:12:48.693257Z` |
| Kitchen | `3:04PM` | `8:12PM` |
| Stamp | `Jan _2 15:04:05` | `Aug 3 20:12:48` |
| StampMilli | `Jan _2 15:04:05.000` | `Aug 3 20:12:48.693` |
| StampMicro | `Jan _2 15:04:05.000000` | `Aug 3 20:12:48.693257` |
| StampNano | `Jan _2 15:04:05.000000000` | `Aug 3 20:12:48.693257000` |

@@ -0,0 +1,80 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/SDDM
wiki: https://en.wikipedia.org/wiki/Simple_Desktop_Display_Manager
repo: https://github.com/sddm/sddm
rev: 2024-12-18
---
# SDDM
The Simple Desktop Display Manager (SDDM) is a display manager. It is the recommended display manager for the KDE Plasma and LXQt desktop environments.
## Configuration
The default configuration file for SDDM can be found at `/usr/lib/sddm/sddm.conf.d/default.conf`. For any changes, create configuration file(s) in `/etc/sddm.conf.d/`.
Everything should work out of the box, since Arch Linux uses systemd and SDDM defaults to using `systemd-logind` for session management.
### Autologin
SDDM supports automatic login through its configuration file, for example (`/etc/sddm.conf.d/autologin.conf`):
```ini
[Autologin]
User=john
Session=plasma
# Optionally always relogin the user on logout
Relogin=true
```
This configuration causes a KDE Plasma session to be started for user `john` when the system is booted. Available session types can be found in `/usr/share/xsessions/` for X and in `/usr/share/wayland-sessions/` for Wayland.
To autologin into KDE Plasma while simultaneously locking the session (e.g. to allow autostarted apps to warm up), create a systemd user unit drop-in that passes `--lockscreen` in `plasma-ksmserver.service` (`~/.config/systemd/user/plasma-ksmserver.service.d/override.conf`):
```ini
[Service]
ExecStart=
ExecStart=/usr/bin/ksmserver --lockscreen
```
### Theme settings
Theme settings can be changed in the `[Theme]` section. If you use Plasma's system settings, themes may show previews.
Set to `breeze` for the default Plasma theme.
#### Current theme
Set the current theme through the Current value, e.g. `Current=archlinux-simplyblack`.
#### Editing themes
The default SDDM theme directory is `/usr/share/sddm/themes/`. You can add your custom made themes to that directory under a separate subdirectory. Note that SDDM requires these subdirectory names to be the same as the theme names. Study the files installed to modify or create your own theme.
#### Customizing a theme
To override settings in the `theme.conf` configuration file, create a custom `theme.conf.user` file in the same directory. For example, to change the theme's background (`/usr/share/sddm/themes/name/theme.conf.user`):
```ini
[General]
background=/path/to/background.png
```
#### Testing (previewing) a theme
You can preview an SDDM theme if needed. This is especially helpful if you are not sure how the theme would look if selected or just edited a theme and want to see how it would look without logging out. You can run something like this:
```sh
sddm-greeter-qt6 --test-mode --theme /usr/share/sddm/themes/breeze
```
This should open a new window for every monitor you have connected and show a preview of the theme.
#### Mouse cursor
To set the mouse cursor theme, set `CursorTheme` to your preferred cursor theme.
Valid Plasma mouse cursor theme names are `breeze_cursors`, `Breeze_Snow` and `breeze-dark`.
### Keyboard Layout
To set the keyboard layout with SDDM, edit `/usr/share/sddm/scripts/Xsetup`:
```sh
#!/bin/sh
# Xsetup - run as root before the login dialog appears
setxkbmap de,us
```

@@ -0,0 +1,103 @@
---
obj: application
wiki: https://en.wikipedia.org/wiki/PostGIS
repo: https://git.osgeo.org/gitea/postgis/postgis
website: https://postgis.net
rev: 2024-09-30
---
# PostGIS
PostGIS is a spatial database extender for PostgreSQL. It adds support for geographic objects, allowing it to be used as a spatial database for geographic information systems (GIS). With PostGIS, PostgreSQL becomes a powerful database for managing spatial data and performing complex geographic operations.
PostGIS offers the following key features:
- **Geometry and Geography Types**: PostGIS supports two primary types of spatial objects: `Geometry` (for Cartesian coordinates) and `Geography` (for geodetic coordinates).
- **Spatial Indexing**: Support for R-tree-based spatial indexing using GiST (Generalized Search Tree) indexes.
- **Spatial Relationships and Measurements**: Functions to perform spatial analysis, including distance calculations, intersections, unions, and more.
- **3D and 4D Coordinates**: Support for 3D geometries (with Z values) and 4D (with M values for measures).
- **Raster and Vector Data**: PostGIS allows for the handling of both raster (pixel-based) and vector (coordinate-based) spatial data.
- **WKT, WKB, GeoJSON Support**: PostGIS supports common geographic data formats like Well-Known Text (WKT), Well-Known Binary (WKB), and GeoJSON.
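As a quick illustration of the GiST-based spatial indexing mentioned above, a spatial index on a geometry column is a one-liner (the table and column names are illustrative):
```sql
-- R-tree-style spatial index implemented on top of GiST
CREATE INDEX my_table_geom_idx ON my_table USING GIST (geom);
```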
## Enable PostGIS in a PostgreSQL Database
After installation, to enable PostGIS on a specific database, run the following SQL commands:
```sql
CREATE EXTENSION postgis;
CREATE EXTENSION postgis_topology;
```
## Spatial Data Types
PostGIS introduces several spatial data types. The two most commonly used types are:
### 1. `Geometry`
Represents geometric shapes in a Cartesian (planar) coordinate system.
```sql
CREATE TABLE my_table (
id SERIAL PRIMARY KEY,
geom GEOMETRY(Point, 4326)
);
```
### 2. `Geography`
Represents geographic shapes in a spherical coordinate system (uses latitudes and longitudes).
```sql
CREATE TABLE my_geo_table (
id SERIAL PRIMARY KEY,
geom GEOGRAPHY(POINT, 4326)
);
```
PostGIS also supports other geometry types, such as:
- `POINT`
- `LINESTRING`
- `POLYGON`
- `MULTIPOINT`
- `MULTILINESTRING`
- `MULTIPOLYGON`
Each of these types can be used in both `GEOMETRY` and `GEOGRAPHY` contexts.
## Spatial Functions
PostGIS provides a vast library of spatial functions for querying and manipulating spatial data. Some important functions include:
### Distance
Calculates the distance between two geometries.
```sql
SELECT ST_Distance(
ST_GeomFromText('POINT(0 0)', 4326),
ST_GeomFromText('POINT(1 1)', 4326)
);
```
### Intersection
Returns the intersection of two geometries.
```sql
SELECT ST_Intersection(
ST_GeomFromText('LINESTRING(0 0, 2 2)', 4326),
ST_GeomFromText('LINESTRING(0 2, 2 0)', 4326)
);
```
### Contains
Checks if one geometry contains another.
```sql
SELECT ST_Contains(
ST_GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))', 4326),
ST_GeomFromText('POINT(1 1)', 4326)
);
```
### Area
Calculates the area of a polygon.
```sql
SELECT ST_Area(
ST_GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))', 4326)
);
```

@@ -0,0 +1,286 @@
---
obj: application
website: https://www.postgresql.org
repo: https://git.postgresql.org/gitweb/?p=postgresql.git
---
# Postgres
PostgreSQL is an advanced, open-source, object-relational database management system. It is renowned for its scalability, reliability, and compliance with the SQL standard. PostgreSQL supports both SQL (relational) and JSON (non-relational) querying, making it highly versatile.
## Extensions
PostgreSQL can be extended via extensions:
- [TimescaleDB](./TimescaleDB.md) - Time-series data
- [pgVector](./pgvector.md) - Vector database functions
- [PostGIS](./PostGIS.md) - Spatial data
## psql
**psql** is a terminal-based front end to PostgreSQL. It allows users to interact with PostgreSQL databases by executing SQL queries, managing database objects, and performing administrative tasks.
To start psql, open your terminal or command prompt and type:
```bash
psql
```
### Connecting to a Database
You can specify the database name, user, host, and port when launching psql:
```bash
psql -d database_name -U username -h hostname -p port
```
Alternatively, you can use environment variables:
```bash
export PGDATABASE=mydb
export PGUSER=myuser
export PGPASSWORD=mypassword
export PGHOST=localhost
export PGPORT=5432
psql
```
### Listing Databases and Tables
- **List Databases:**
```sql
\l
```
- **List Tables in the Current Database:**
```sql
\dt
```
- **List All Schemas:**
```sql
\dn
```
### Creating and Dropping Databases/Tables
- **Create a Database:**
```sql
CREATE DATABASE mydb;
```
- **Drop a Database:**
```sql
DROP DATABASE mydb;
```
- **Create a Table:**
```sql
CREATE TABLE users (
id SERIAL PRIMARY KEY,
username VARCHAR(50) NOT NULL,
email VARCHAR(100) NOT NULL
);
```
- **Drop a Table:**
```sql
DROP TABLE users;
```
### Running SQL Queries
Execute standard SQL commands to interact with your data.
**Example: Inserting Data**
```sql
INSERT INTO users (username, email) VALUES ('john_doe', 'john@example.com');
```
**Example: Querying Data**
```sql
SELECT * FROM users;
```
**Example: Updating Data**
```sql
UPDATE users SET email = 'john.doe@example.com' WHERE username = 'john_doe';
```
**Example: Deleting Data**
```sql
DELETE FROM users WHERE username = 'john_doe';
```
## Meta-Commands
psql provides a set of meta-commands (prefixed with `\`) that facilitate various tasks.
### Common Meta-Commands
- **Help on Meta-Commands:**
```sql
\?
```
- **Help on SQL Commands:**
```sql
\h
```
- **Describe a Table:**
```sql
\d table_name
```
- **List All Tables, Views, and Sequences:**
```sql
\dt
```
- **List All Indexes:**
```sql
\di
```
- **Exit psql:**
```sql
\q
```
## Data Types
### 1. **Numeric Types**
- **Small Integer Types**
- `SMALLINT` (2 bytes): Range from -32,768 to +32,767
- **Integer Types**
- `INTEGER` or `INT` (4 bytes): Range from -2,147,483,648 to +2,147,483,647
- **Big Integer Types**
- `BIGINT` (8 bytes): Range from -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
- **Decimal/Exact Types**
- `DECIMAL` or `NUMERIC` (variable size): User-defined precision and scale
- **Floating-Point Types**
- `REAL` (4 bytes): Single precision floating-point number
- `DOUBLE PRECISION` (8 bytes): Double precision floating-point number
- **Serial Types (Auto-Incrementing)**
- `SERIAL` (4 bytes): Auto-incrementing integer (small range)
- `BIGSERIAL` (8 bytes): Auto-incrementing integer (large range)
- `SMALLSERIAL` (2 bytes): Auto-incrementing integer (smaller range)
### 2. **Monetary Type**
- `MONEY`: Stores currency amounts with a fixed fractional precision
### 3. **Character Types**
- **Fixed-Length Strings**
- `CHAR(n)` or `CHARACTER(n)`: Fixed length (padded with spaces)
- **Variable-Length Strings**
- `VARCHAR(n)` or `CHARACTER VARYING(n)`: Variable length with a limit
- **Text**
- `TEXT`: Variable length with no specific limit
### 4. **Binary Data Types**
- **Binary Large Object**
- `BYTEA`: Stores binary strings (byte arrays)
### 5. **Date/Time Types**
- **Date and Time**
- `DATE`: Calendar date (year, month, day)
- `TIME` (no time zone): Time of day (without time zone)
- `TIMETZ` (with time zone): Time of day (with time zone)
- `TIMESTAMP` (no time zone): Date and time without time zone
- `TIMESTAMPTZ` (with time zone): Date and time with time zone
- **Intervals**
- `INTERVAL`: Time span (e.g., days, months, hours)
### 6. **Boolean Type**
- `BOOLEAN`: Stores `TRUE`, `FALSE`, or `NULL`
### 7. **UUID Type**
- `UUID`: Stores Universally Unique Identifiers (128-bit values)
### 8. **Enumerated Types**
- `ENUM`: User-defined enumerated type (a static set of values)
### 9. **Geometric Types**
- `POINT`: Stores a geometric point (x, y)
- `LINE`: Infinite line
- `LSEG`: Line segment
- `BOX`: Rectangular box
- `PATH`: Geometric path (multiple points)
- `POLYGON`: Closed geometric figure
- `CIRCLE`: Circle
### 10. **Network Address Types**
- `CIDR`: IPv4 or IPv6 network block
- `INET`: IPv4 or IPv6 address
- `MACADDR`: MAC address
- `MACADDR8`: MAC address (EUI-64 format)
### 11. **Bit String Types**
- **Fixed-Length Bit Strings**
- `BIT(n)`: Fixed-length bit string
- **Variable-Length Bit Strings**
- `BIT VARYING(n)`: Variable-length bit string
### 12. **Text Search Types**
- `TSVECTOR`: Text search document
- `TSQUERY`: Text search query
### 13. **JSON Types**
- `JSON`: Textual JSON data
- `JSONB`: Binary JSON data (more efficient for indexing)
### 14. **Array Types**
- `ARRAY`: Allows any data type to be stored as an array (e.g., `INTEGER[]`, `TEXT[]`)
### 15. **Range Types**
- `INT4RANGE`: Range of `INTEGER`
- `INT8RANGE`: Range of `BIGINT`
- `NUMRANGE`: Range of `NUMERIC`
- `TSRANGE`: Range of `TIMESTAMP WITHOUT TIME ZONE`
- `TSTZRANGE`: Range of `TIMESTAMP WITH TIME ZONE`
- `DATERANGE`: Range of `DATE`
### 16. **Composite Types**
- User-defined types that consist of multiple fields of various types
### 17. **Object Identifier Types (OID)**
- `OID`: Object identifier (used internally by PostgreSQL)
- `REGCLASS`, `REGPROC`, `REGTYPE`: Special types for referencing classes, procedures, and types by OID or name
### 18. **Pseudo-Types**
- `ANY`: Accepts any data type
- `ANYARRAY`: Accepts any array data type
- `ANYELEMENT`: Represents any type of element
- `ANYENUM`: Accepts any `ENUM` type
- `ANYNONARRAY`: Any non-array type
- `VOID`: No data (used with functions that return no value)
- `TRIGGER`: Used in triggers
- `LANGUAGE_HANDLER`: Used internally for language support
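To tie several of the categories above together, a hypothetical table definition:
```sql
CREATE TABLE sensor_events (
    id         BIGSERIAL PRIMARY KEY,
    device     UUID NOT NULL,
    source_ip  INET,
    payload    JSONB,
    reading    NUMERIC(8,3),
    tags       TEXT[],
    active     BOOLEAN DEFAULT TRUE,
    created_at TIMESTAMPTZ DEFAULT now()
);
```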
## Docker-Compose
```yml
services:
postgres:
image: postgres:17
container_name: postgres
environment:
POSTGRES_USER: myuser
POSTGRES_PASSWORD: mypassword
POSTGRES_DB: mydb
ports:
- "5432:5432"
volumes:
- ./postgres:/var/lib/postgresql/data
restart: always
```
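With the container running, you can connect with `psql` using the credentials defined above (assuming `psql` is installed on the host):
```sh
psql -h localhost -p 5432 -U myuser -d mydb
```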

View file

@ -0,0 +1,121 @@
---
obj: application
repo: https://github.com/timescale/timescaledb
website: https://www.timescale.com
rev: 2024-09-30
---
# TimescaleDB
TimescaleDB is an open-source time-series database built on [PostgreSQL](./Postgres.md), designed to handle large volumes of time-series data efficiently. It provides powerful data management features, making it suitable for applications in various domains such as IoT, finance, and analytics.
Features:
- Hypertables: The backbone of TimescaleDB, hypertables facilitate automatic partitioning of data across time, streamlining the management of vast datasets.
- Continuous Aggregates: This feature enables the pre-computation and storage of aggregate data, significantly speeding up query times for common analytical operations.
- Data Compression: TimescaleDB employs sophisticated compression techniques to reduce storage footprint without compromising query performance.
- Optimized Indexing: With its advanced indexing strategies, including multi-dimensional and time-based indexing, TimescaleDB ensures rapid query responses, making it highly efficient for time-series data.
## Installation
**Create the extension in your database**:
```sql
CREATE EXTENSION IF NOT EXISTS timescaledb;
```
## Hypertables
Hypertables are PostgreSQL tables that automatically partition your data by time. You interact with hypertables in the same way as regular PostgreSQL tables, but with extra features that make managing your time-series data much easier.
In Timescale, hypertables exist alongside regular PostgreSQL tables. Use hypertables to store time-series data. This gives you improved insert and query performance, and access to useful time-series features. Use regular PostgreSQL tables for other relational data.
With hypertables, Timescale makes it easy to improve insert and query performance by partitioning time-series data on its time parameter. Behind the scenes, the database performs the work of setting up and maintaining the hypertable's partitions. Meanwhile, you insert and query your data as if it all lives in a single, regular PostgreSQL table.
**Create a hypertable:**
- Create a standard PostgreSQL table:
```sql
CREATE TABLE conditions (
time TIMESTAMPTZ NOT NULL,
location TEXT NOT NULL,
device TEXT NOT NULL,
temperature DOUBLE PRECISION NULL,
humidity DOUBLE PRECISION NULL
);
```
- Convert the table to a hypertable. Specify the name of the table you want to convert, and the column that holds its time values.
```sql
SELECT create_hypertable('conditions', by_range('time'));
```
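Once converted, the hypertable is filled and queried like any regular table. As a sketch, inserting a reading and pre-computing daily averages with a continuous aggregate (syntax per the TimescaleDB docs; names taken from the example above):
```sql
INSERT INTO conditions VALUES (now(), 'office', 'dev-1', 21.5, 48.0);

-- Pre-compute daily averages per location
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 day', time) AS day,
       location,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY day, location;
```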
## Hyperfunctions
Hyperfunctions allow you to query and aggregate your time-series data.
### delta
The `delta` function computes the change in a value over time. It helps in understanding how a metric (e.g., temperature, stock price, etc.) changes between readings.
Example: Calculate Temperature Change Over a Day
```sql
SELECT
delta(temperature) AS temp_change
FROM temperature_readings
WHERE time BETWEEN '2023-09-01' AND '2023-09-02';
```
### derivative
The `derivative` function calculates the rate of change (derivative) of a series over time.
Example: Calculate the Rate of Temperature Change Per Hour
```sql
SELECT
derivative(avg(temperature), '1 hour') AS temp_rate_change
FROM temperature_readings
GROUP BY time_bucket('1 hour', time);
```
### first & last
The `first` and `last` hyperfunctions return the first and last recorded values within a specified period.
```sql
SELECT
time_bucket('1 day', time) AS day,
first(stock_price, time) AS opening_price,
last(stock_price, time) AS closing_price
FROM stock_prices
GROUP BY day
ORDER BY day;
```
### locf
The `locf` (Last Observation Carried Forward) function fills missing data by carrying the last known observation forward to the missing timestamps.
```sql
SELECT
time_bucket('1 hour', time) AS hour,
locf(last(temperature, time)) AS filled_temperature
FROM temperature_readings
GROUP BY hour
ORDER BY hour;
```
### interpolated_avg
The `interpolated_avg` hyperfunction computes the average of a series with values interpolated at regular time intervals.
```sql
SELECT
time_bucket('1 hour', time) AS hour,
interpolated_avg('linear', time, power_usage) AS interpolated_power
FROM power_data
WHERE time BETWEEN '2023-09-01' AND '2023-09-07'
GROUP BY hour;
```
### time_bucket
The `time_bucket` hyperfunction is essential when you want to analyze or summarize data over time-based intervals, such as calculating daily averages, hourly sums, or other time-bound statistics.
```sql
SELECT
time_bucket('1 hour', time) AS bucketed_time,
avg(cpu_usage) AS avg_cpu_usage
FROM server_metrics
WHERE time BETWEEN '2023-09-01' AND '2023-09-30'
GROUP BY bucketed_time
ORDER BY bucketed_time;
```

View file

@ -2,6 +2,7 @@
website: ["https://code.visualstudio.com/", "https://vscodium.com/"]
obj: application
flatpak-id: com.vscodium.codium
rev: 2024-09-19
---
# Visual Studio Code
@ -37,3 +38,9 @@ VSCode provides built-in debugging support for multiple languages and environmen
- [Foam](https://open-vsx.org/extension/foam/foam-vscode): Foam is a note-taking tool that lives within VS Code, which means you can pair it with your favorite extensions for a great editing experience.
- [JSON Schema Store Catalog](https://open-vsx.org/extension/remcohaszing/schemastore): This extension provides all JSON schemas from the [JSON Schema Store](https://www.schemastore.org) catalog.
- [night blossom](https://open-vsx.org/extension/RustedTurnip/night-blossom): VSCode Theme.
## Keyboard Shortcuts
- `Ctrl + /`: Comment / Uncomment line
- `Ctrl + P`: Quick Open / Commands
- `Alt + ↓ / ↑`: Move line down/up
- `Ctrl + Shift + K`: Delete line

View file

@ -0,0 +1,99 @@
---
obj: application
repo: https://github.com/pgvector/pgvector
rev: 2024-09-30
---
# pgVector
**pgvector** is a [PostgreSQL](./Postgres.md) extension designed to support vector similarity search. With the rise of machine learning models like those in natural language processing (NLP), computer vision, and recommendation systems, the need to efficiently store and query high-dimensional vectors (embeddings) has grown significantly. pgvector provides a solution by enabling PostgreSQL to handle these vector operations, making it possible to search for similar items using vector distance metrics directly in SQL.
## Installation
1. Install pgvector using `git` and `make`:
```bash
git clone https://github.com/pgvector/pgvector.git
cd pgvector
make && make install
```
2. Add the extension to your PostgreSQL database:
```sql
CREATE EXTENSION IF NOT EXISTS vector;
```
## Data Types
pgvector introduces a new data type called `vector`. It is used to store fixed-length vectors, and the size must be specified during table creation.
```sql
CREATE TABLE items (
id serial PRIMARY KEY,
embedding vector(3) -- a 3-dimensional vector
);
```
## Functions and Operators
pgvector provides several functions and operators for vector similarity and distance calculation.
### Distance Metrics
- **Euclidean Distance** (`<->`): Measures the straight-line distance between two vectors.
```sql
SELECT * FROM items ORDER BY embedding <-> '[1, 0, 0]' LIMIT 5;
```
- **Cosine Similarity** (`<=>`): The operator returns the cosine distance (1 minus the cosine similarity) between two vectors.
```sql
SELECT * FROM items ORDER BY embedding <=> '[1, 0, 0]' LIMIT 5;
```
- **Inner Product** (`<#>`): The operator returns the negative inner product (dot product) of two vectors.
```sql
SELECT * FROM items ORDER BY embedding <#> '[1, 0, 0]' LIMIT 5;
```
### Basic Operations
- **Set a Vector Value**:
```sql
INSERT INTO items (embedding) VALUES ('[1, 0, 0]');
```
- **Retrieve All Vectors**:
```sql
SELECT * FROM items;
```
## Indexing
To enhance performance for similarity search, pgvector supports indexing. The recommended index types depend on the distance metric you plan to use:
- **Euclidean Distance** (L2):
```sql
CREATE INDEX ON items USING ivfflat (embedding vector_l2_ops) WITH (lists = 100);
```
- **Cosine Similarity**:
```sql
CREATE INDEX ON items USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);
```
- **Inner Product**:
```sql
CREATE INDEX ON items USING ivfflat (embedding vector_ip_ops) WITH (lists = 100);
```
### Index Parameters
- **Lists**: Defines the number of centroids used by the IVF (Inverted File) index. More lists means each query scans a smaller fraction of the table, which speeds up queries but can lower recall.
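Recall can then be traded against speed at query time via the `ivfflat.probes` setting (the number of lists probed per query):
```sql
SET ivfflat.probes = 10;
SELECT * FROM items ORDER BY embedding <-> '[1, 0, 0]' LIMIT 5;
```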
## Use Cases
1. **Recommendation Systems**: Store user and item embeddings and use similarity search to recommend items based on user preferences.
2. **Search Engines**: Search for semantically similar documents or images using vector embeddings.
3. **NLP Applications**: Store word, sentence, or document embeddings to perform similarity search or clustering of textual data.
4. **Image Recognition**: Query for similar images based on embeddings generated by deep learning models.

View file

@ -0,0 +1,11 @@
---
obj: application
website: https://owntracks.org
repo: https://github.com/owntracks/android
---
# OwnTracks
OwnTracks allows you to keep track of your own location.
You can build your private location diary or share it with your family and friends.
OwnTracks is open-source and uses open protocols for communication so you can be sure your data stays secure and private.
It can either publish your location using [MQTT](../../internet/MQTT.md) or over [HTTP](../../internet/HTTP.md) with something like [dawarich](../web/dawarich.md).

View file

@ -1,5 +1,7 @@
---
obj: application
repo: https://git.launchpad.net/ufw/
arch-wiki: https://wiki.archlinux.org/title/Uncomplicated_Firewall
---
# ufw
@ -17,19 +19,134 @@ The next line is only needed _once_ the first time you install the package:
ufw enable
```
**See status:**
```shell
ufw status
```
**Enable/Disable:**
```shell
ufw enable
ufw disable
```
**Allow/Deny:**
```shell
ufw allow <app|port>
ufw deny <app|port>
ufw allow from <CIDR>
ufw deny from <CIDR>
```
## Forward policy
Users needing to run a VPN such as OpenVPN or WireGuard can adjust the `DEFAULT_FORWARD_POLICY` variable in `/etc/default/ufw` from a value of `DROP` to `ACCEPT` to forward all packets regardless of the settings of the user interface. To forward for a specific interface like `wg0`, users can add the following lines in the filter block:
```sh
# /etc/ufw/before.rules
-A ufw-before-forward -i wg0 -j ACCEPT
-A ufw-before-forward -o wg0 -j ACCEPT
```
You may also need to uncomment
```sh
# /etc/ufw/sysctl.conf
net/ipv4/ip_forward=1
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1
```
## Adding other applications
The package comes with some default app profiles based on the default ports of many common daemons and programs. Inspect the options by looking in the `/etc/ufw/applications.d` directory or by listing them in the program itself:
```sh
ufw app list
```
If users are running any of the applications on a non-standard port, it is recommended to simply make `/etc/ufw/applications.d/custom` containing the needed data using the defaults as a guide.
> **Warning**: If users modify any of the package-provided rule sets, these will be overwritten the first time the ufw package is updated. This is why custom app definitions need to reside in a non-package file, as recommended above!
Example: Deluge with custom TCP ports ranging from 20202 to 20205:
```ini
[Deluge-my]
title=Deluge
description=Deluge BitTorrent client
ports=20202:20205/tcp
```
Should you need to define both TCP and UDP ports for the same application, simply separate them with a pipe. This app opens TCP ports 10000-10002 and UDP port 10003:
```ini
ports=10000:10002/tcp|10003/udp
```
One can also use a comma to define ports if a range is not desired. This example opens TCP ports 10000-10002 (inclusive) and UDP ports 10003 and 10009:
```ini
ports=10000:10002/tcp|10003,10009/udp
```
## Deleting applications
Drawing on the Deluge/Deluge-my example above, the following will remove the standard Deluge rules and replace them with the Deluge-my rules from the above example:
```sh
ufw delete allow Deluge
ufw allow Deluge-my
```
## Blacklisting IP addresses
It might be desirable to add IP addresses to a blacklist. This is easily achieved by editing `/etc/ufw/before.rules` and inserting an iptables `DROP` line at the bottom of the file, right above the "COMMIT" word.
```sh
# /etc/ufw/before.rules
...
## blacklist section
# block just 199.115.117.99
-A ufw-before-input -s 199.115.117.99 -j DROP
# block 184.105.*.*
-A ufw-before-input -s 184.105.0.0/16 -j DROP
# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT
```
## Rate limiting with ufw
ufw has the ability to deny connections from an IP address that has attempted to initiate 6 or more connections in the last 30 seconds. Users should consider using this option for services such as SSH.
Using the above basic configuration, to enable rate limiting we would simply replace the allow parameter with the limit parameter. The new rule will then replace the previous.
```sh
ufw limit SSH
```
## Disable remote ping
Change `ACCEPT` to `DROP` in the following lines:
```sh
# /etc/ufw/before.rules
# ok icmp codes
...
-A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT
```
If you use IPv6, related rules are in `/etc/ufw/before6.rules`.
## Disable UFW logging
Disabling logging may be useful to stop UFW filling up the kernel (dmesg) and message logs:
```sh
ufw logging off
```
## UFW and Docker
Docker in standard mode writes its own iptables rules and ignores ufw ones, which could lead to security issues. A solution can be found at https://github.com/chaifeng/ufw-docker.
## GUI frontends
If you are using KDE Plasma, you can just go to `Wi-Fi & Networking > Firewall` to access and adjust firewall configurations, provided `plasma-firewall` is installed.

View file

@ -1,17 +1,18 @@
---
arch-wiki: https://wiki.archlinux.org/title/PKGBUILD
obj: concept
rev: 2024-12-19
---
# PKGBUILD
A `PKGBUILD` is a shell script containing the build information required by [Arch Linux](../../../linux/Arch%20Linux.md) packages. [Arch Wiki](https://wiki.archlinux.org/title/PKGBUILD)
Packages in [Arch Linux](../../../linux/Arch%20Linux.md) are built using the [makepkg](makepkg.md) utility. When [makepkg](makepkg.md) is run, it searches for a `PKGBUILD` file in the current directory and follows the instructions therein to either compile or otherwise acquire the files to build a package archive (`pkgname.pkg.tar.zst`). The resulting package contains binary files and installation instructions, readily installable with [pacman](Pacman.md).
Mandatory variables are `pkgname`, `pkgver`, `pkgrel`, and `arch`. `license` is not strictly necessary to build a package, but is recommended for any `PKGBUILD` shared with others, as [makepkg](makepkg.md) will produce a warning if not present.
## Example
PKGBUILD:
```sh
# Maintainer: User <mail>
@ -49,3 +50,186 @@ package() {
install -Dm755 ./app "$pkgdir/usr/bin/app"
}
```
## Directives
The following is a list of standard options and directives available for use in a `PKGBUILD`. These are all understood and interpreted by `makepkg`, and most of them will be directly transferred to the built package.
If you need to create any custom variables for use in your build process, it is recommended to prefix their name with an `_` (underscore). This will prevent any possible name clashes with internal `makepkg` variables. For example, to store the base kernel version in a variable, use something similar to `$_basekernver`.
### Name and Version
#### `pkgname`
Either the name of the package or an array of names for split packages.
Valid characters for members of this array are alphanumerics, and any of the following characters: `@ . _ + -`. Additionally, names are not allowed to start with hyphens or dots.
#### `pkgver`
The version of the software as released from the author (e.g., `2.7.1`). The variable is not allowed to contain colons, forward slashes, hyphens or whitespace.
The pkgver variable can be automatically updated by providing a `pkgver()` function in the `PKGBUILD` that outputs the new package version. This is run after downloading and extracting the sources and running the `prepare()` function (if present), so it can use those files in determining the new `pkgver`. This is most useful when used with sources from version control systems.
#### `pkgrel`
This is the release number specific to the distribution. This allows package maintainers to make updates to the package's configure flags, for example. This is typically set to `1` for each new upstream software release and incremented for intermediate `PKGBUILD` updates. The variable is a positive integer, with an optional subrelease level specified by adding another positive integer separated by a period (i.e. in the form `x.y`).
#### `epoch`
Used to force the package to be seen as newer than any previous versions with a lower epoch, even if the version number would normally not trigger such an upgrade. This value is required to be a positive integer; the default value if left unspecified is 0. This is useful when the version numbering scheme of a package changes (or is alphanumeric), breaking normal version comparison logic.
### Generic
#### `pkgdesc`
This should be a brief description of the package and its functionality. Try to keep the description to one line of text and not to use the package's name.
#### `url`
This field contains a URL that is associated with the software being packaged. This is typically the project's website.
#### `license` (array)
This field specifies the license(s) that apply to the package. If multiple licenses are applicable, list all of them: `license=('GPL' 'FDL')`.
#### `arch` (array)
Defines on which architectures the given package is available (e.g., `arch=('i686' 'x86_64')`). Packages that contain no architecture specific files should use `arch=('any')`. Valid characters for members of this array are alphanumerics and `_`.
#### `groups` (array)
An array of symbolic names that represent groups of packages, allowing you to install multiple packages by requesting a single target. For example, one could install all KDE packages by installing the kde group.
### Dependencies
#### `depends` (array)
An array of packages this package depends on to run. Entries in this list should be surrounded with single quotes and contain at least the package name. Entries can also include a version requirement of the form `name<>version`, where `<>` is one of five comparisons: `>=` (greater than or equal to), `<=` (less than or equal to), `=` (equal to), `>` (greater than), or `<` (less than).
If the dependency name appears to be a library (ends with `.so`), `makepkg` will try to find a binary that depends on the library in the built package and append the version needed by the binary. Appending the version yourself disables automatic detection.
Additional architecture-specific depends can be added by appending an underscore and the architecture name e.g., `depends_x86_64=()`.
#### `makedepends` (array)
An array of packages this package depends on to build but are not needed at runtime. Packages in this list follow the same format as `depends`.
Additional architecture-specific `makedepends` can be added by appending an underscore and the architecture name e.g., `makedepends_x86_64=()`.
#### `checkdepends` (array)
An array of packages this package depends on to run its test suite but are not needed at runtime. Packages in this list follow the same format as depends. These dependencies are only considered when the `check()` function is present and is to be run by `makepkg`.
Additional architecture-specific checkdepends can be added by appending an underscore and the architecture name e.g., `checkdepends_x86_64=()`
#### `optdepends` (array)
An array of packages (and accompanying reasons) that are not essential for base functionality, but may be necessary to make full use of the contents of this package. optdepends are currently for informational purposes only and are not utilized by pacman during dependency resolution. Packages in this list follow the same format as depends, with an optional description appended. The format for specifying optdepends descriptions is:
```shell
optdepends=('python: for library bindings')
```
Additional architecture-specific optdepends can be added by appending an underscore and the architecture name e.g., `optdepends_x86_64=()`.
### Package Relations
#### `provides` (array)
An array of “virtual provisions” this package provides. This allows a package to provide dependencies other than its own package name. For example, the `dcron` package can provide `cron`, which allows packages to depend on `cron` rather than `dcron` OR `fcron`.
Versioned provisions are also possible, in the `name=version` format. For example, `dcron` can provide `cron=2.0` to satisfy the `cron>=2.0` dependency of other packages. Provisions involving the `>` and `<` operators are invalid as only specific versions of a package may be provided.
If the provision name appears to be a library (ends with `.so`), makepkg will try to find the library in the built package and append the correct version. Appending the version yourself disables automatic detection.
Additional architecture-specific provides can be added by appending an underscore and the architecture name e.g., `provides_x86_64=()`.
#### `conflicts` (array)
An array of packages that will conflict with this package (i.e. they cannot both be installed at the same time). This directive follows the same format as `depends`. Versioned conflicts are supported using the operators as described in `depends`.
Additional architecture-specific conflicts can be added by appending an underscore and the architecture name e.g., `conflicts_x86_64=()`.
#### `replaces` (array)
An array of packages this package should replace. This can be used to handle renamed/combined packages. For example, if the `j2re` package is renamed to `jre`, this directive allows future upgrades to continue as expected even though the package has moved. Versioned replaces are supported using the operators as described in `depends`.
Sysupgrade is currently the only pacman operation that utilizes this field. A normal sync or upgrade will not use its value.
Additional architecture-specific replaces can be added by appending an underscore and the architecture name e.g., `replaces_x86_64=()`.
### Other
#### `backup` (array)
An array of file names, without preceding slashes, that should be backed up if the package is removed or upgraded. This is commonly used for packages placing configuration files in `/etc`.
#### `options` (array)
This array allows you to override some of makepkg's default behavior when building packages. To set an option, just include the option name in the `options` array. To reverse the default behavior, place an `!` at the front of the option. Only specify the options you specifically want to override; the rest will be taken from `makepkg.conf`.
| Option | Description |
| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `strip` | Strip symbols from binaries and libraries. If you frequently use a debugger on programs or libraries, it may be helpful to disable this option. |
| `docs` | Save doc directories. If you wish to delete doc directories, specify `!docs` in the array. |
| `libtool` | Leave libtool (`.la`) files in packages. Specify `!libtool` to remove them. |
| `staticlibs` | Leave static library (`.a`) files in packages. Specify `!staticlibs` to remove them (if they have a shared counterpart). |
| `emptydirs` | Leave empty directories in packages. |
| `zipman` | Compress man and info pages with gzip. |
| `ccache` | Allow the use of ccache during `build()`. More useful in its negative form `!ccache` with select packages that have problems building with ccache. |
| `distcc` | Allow the use of distcc during `build()`. More useful in its negative form `!distcc` with select packages that have problems building with distcc. |
| `buildflags` | Allow the use of user-specific buildflags (`CPPFLAGS`, `CFLAGS`, `CXXFLAGS`, `LDFLAGS`) during `build()` as specified in `makepkg.conf`. More useful in its negative form `!buildflags` with select packages that have problems building with custom buildflags. |
| `makeflags` | Allow the use of user-specific makeflags during `build()` as specified in `makepkg.conf`. More useful in its negative form `!makeflags` with select packages that have problems building with custom makeflags such as `-j2`. |
| `debug` | Add the user-specified debug flags (`DEBUG_CFLAGS`, `DEBUG_CXXFLAGS`) to their counterpart buildflags as specified in `makepkg.conf`. When used in combination with the `strip` option, a separate package containing the debug symbols is created. |
| `lto` | Enable building packages using link time optimization. Adds `-flto` to both `CFLAGS` and `CXXFLAGS`. |
#### `install`
Specifies a special install script that is to be included in the package. This file should reside in the same directory as the `PKGBUILD` and will be copied into the package by `makepkg`. It does not need to be included in the source array (e.g., `install=$pkgname.install`).
Pacman has the ability to store and execute a package-specific script when it installs, removes, or upgrades a package. This allows a package to configure itself after installation and perform an opposite action upon removal.
The exact time the script is run varies with each operation, and should be self-explanatory. Note that during an upgrade operation, none of the install or remove functions will be called.
Scripts are passed either one or two “full version strings”, where a full version string is either `pkgver-pkgrel` or `epoch:pkgver-pkgrel`, if `epoch` is non-zero.
- `pre_install`: Run right before files are extracted. One argument is passed: new package full version string.
- `post_install`: Run right after files are extracted. One argument is passed: new package full version string.
- `pre_upgrade`: Run right before files are extracted. Two arguments are passed in this order: new package full version string, old package full version string.
- `post_upgrade`: Run after files are extracted. Two arguments are passed in this order: new package full version string, old package full version string.
- `pre_remove`: Run right before files are removed. One argument is passed: old package full version string.
- `post_remove`: Run right after files are removed. One argument is passed: old package full version string.
To use this feature, create a file such as `pkgname.install` and put it in the same directory as the `PKGBUILD` script. Then use the install directive: `install=pkgname.install`
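A minimal sketch of such an install file (the echoed hint is illustrative):
```sh
# pkgname.install
post_install() {
    # $1 = full version string of the new package
    echo ">> Run 'app --init' once to finish setup."
}

post_upgrade() {
    # $1 = new version, $2 = old version
    post_install "$1"
}
```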
#### `changelog`
Specifies a changelog file that is to be included in the package. The changelog file should end in a single newline. This file should reside in the same directory as the `PKGBUILD` and will be copied into the package by `makepkg`. It does not need to be included in the source array (e.g., `changelog=$pkgname.changelog`).
### Sources
#### `source` (array)
An array of source files required to build the package. Source files must either reside in the same directory as the `PKGBUILD`, or be a fully-qualified URL that `makepkg` can use to download the file. To simplify the maintenance of `PKGBUILDs`, use the `$pkgname` and `$pkgver` variables when specifying the download location, if possible. Compressed files will be extracted automatically unless found in the `noextract` array described below.
Additional architecture-specific sources can be added by appending an underscore and the architecture name e.g., `source_x86_64=()`. There must be a corresponding integrity array with checksums, e.g. `cksums_x86_64=()`.
It is also possible to change the name of the downloaded file, which is helpful with weird URLs and for handling multiple source files with the same name. The syntax is: `source=('filename::url')`.
Files in the source array with extensions `.sig`, `.sign`, or `.asc` are recognized by makepkg as PGP signatures and will be automatically used to verify the integrity of the corresponding source file.
#### `noextract` (array)
An array of file names corresponding to those from the source array. Files listed here will not be extracted with the rest of the source files. This is useful for packages that use compressed data directly.
#### `validpgpkeys` (array)
An array of PGP fingerprints. If this array is non-empty, `makepkg` will only accept signatures from the keys listed here and will ignore the trust values from the keyring. If the source file was signed with a subkey, `makepkg` will still use the primary key for comparison.
Only full fingerprints are accepted. They must be uppercase and must not contain whitespace characters.
### Integrity
#### `cksums` (array)
This array contains CRC checksums for every source file specified in the source array (in the same order). `makepkg` will use this to verify source file integrity during subsequent builds. If `SKIP` is put in the array in place of a normal hash, the integrity check for that source file will be skipped. To easily generate cksums, run `makepkg -g >> PKGBUILD`. If desired, move the cksums line to an appropriate location. Note that checksums generated by `makepkg -g` should be verified using checksum values provided by the software developer.
#### `md5sums`, `sha1sums`, `sha224sums`, `sha256sums`, `sha384sums`, `sha512sums`, `b2sums` (arrays)
Alternative integrity checks that `makepkg` supports; these all behave similarly to the cksums option described above. To enable use and generation of these checksums, be sure to set up the `INTEGRITY_CHECK` option in `makepkg.conf`.
## Packaging Functions
In addition to the above directives, `PKGBUILDs` require a set of functions that provide instructions to build and install the package. As a minimum, the `PKGBUILD` must contain a `package()` function which installs all the package's files into the packaging directory, with optional `prepare()`, `build()`, and `check()` functions being used to create those files from source.
This is directly sourced and executed by `makepkg`, so anything that Bash or the system has available is available for use here. Be sure any exotic commands used are covered by the `makedepends` array.
If you create any variables of your own in any of these functions, it is recommended to use the Bash `local` keyword to scope the variable to inside the function.
### `package()` Function
The `package()` function is used to install files into the directory that will become the root directory of the built package and is run after all the optional functions listed below. The packaging stage is run using `fakeroot` to ensure correct file permissions in the resulting package. All other functions will be run as the user calling `makepkg`. This function is run inside `$srcdir`.
### `verify()` Function
An optional `verify()` function can be specified to implement arbitrary source authentication. The function should return a non-zero exit code when verification fails. This function is run before sources are extracted. This function is run inside `$startdir`.
### `prepare()` Function
An optional `prepare()` function can be specified in which operations to prepare the sources for building, such as patching, are performed. This function is run after the source extraction and before the `build()` function. The `prepare()` function is skipped when source extraction is skipped. This function is run inside `$srcdir`.
### `build()` Function
The optional `build()` function is used to compile and/or adjust the source files in preparation to be installed by the `package()` function. This function is run inside `$srcdir`.
### `check()` Function
An optional `check()` function can be specified in which a package's test suite may be run. This function is run between the `build()` and `package()` functions. Be sure any exotic commands used are covered by the `checkdepends` array. This function is run inside `$srcdir`.

View file

@ -1,6 +1,9 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Pacman
rev: 2025-01-08
---
# Pacman
Pacman is the default [Arch Linux](../../../linux/Arch%20Linux.md) Package Manager
@ -45,6 +48,11 @@ List explicitly installed packages:
pacman -Qe
```
List of packages owning a file/dir:
```shell
pacman -Qo /path/to/file
```
List orphan packages (installed as dependencies and not required anymore):
```shell
pacman -Qdt
@ -56,6 +64,363 @@ pacman -Q
```
Empty the entire pacman cache:
```shell
pacman -Scc
```
Read changelog of package:
```shell
pacman -Qc pkgname
```
### File Conflicts
When pacman removes a package that has a configuration file, it normally creates a backup copy of that configuration file and appends `.pacsave` to the name of the file. Likewise, when pacman upgrades a package which includes a new configuration file created by the maintainer differing from the currently installed file, it saves a `.pacnew` file with the new configuration. pacman provides notice when these files are written.
## Configuration
Pacman, using libalpm, will attempt to read `pacman.conf` each time it is invoked. This configuration file is divided into sections or repositories. Each section defines a package repository that pacman can use when searching for packages in `--sync` mode. The exception to this is the `[options]` section, which defines global options.
```ini
# /etc/pacman.conf
[options]
# Set the default root directory for pacman to install to.
# This option is used if you want to install a package on a temporary mounted partition which is "owned" by another system, or for a chroot install.
# NOTE: If database path or log file are not specified on either the command line or in pacman.conf(5), their default location will be inside this root path.
RootDir = /path/to/root/dir
# Overrides the default location of the toplevel database directory.
# The default is /var/lib/pacman/.
# Most users will not need to set this option.
# NOTE: if specified, this is an absolute path and the root path is not automatically prepended.
DBPath = /path/to/db/dir
# Overrides the default location of the package cache directory.
# The default is /var/cache/pacman/pkg/.
# Multiple cache directories can be specified, and they are tried in the order they are listed in the config file.
# If a file is not found in any cache directory, it will be downloaded to the first cache directory with write access.
# NOTE: this is an absolute path, the root path is not automatically prepended.
CacheDir = /path/to/cache/dir
# Add directories to search for alpm hooks in addition to the system hook directory (/usr/share/libalpm/hooks/).
# The default is /etc/pacman.d/hooks.
# Multiple directories can be specified with hooks in later directories taking precedence over hooks in earlier directories.
# NOTE: this is an absolute path, the root path is not automatically prepended. For more information on the alpm hooks, see alpm-hooks(5).
HookDir = /path/to/hook/dir
# Overrides the default location of the directory containing configuration files for GnuPG.
# The default is /etc/pacman.d/gnupg/.
# This directory should contain two files: pubring.gpg and trustdb.gpg.
# pubring.gpg holds the public keys of all packagers. trustdb.gpg contains a so-called trust database, which specifies that the keys are authentic and trusted.
# NOTE: this is an absolute path, the root path is not automatically prepended.
GPGDir = /path/to/gpg/dir
# Overrides the default location of the pacman log file.
# The default is /var/log/pacman.log.
# This is an absolute path and the root directory is not prepended.
LogFile = /path/to/log/file
# If a user tries to --remove a package that's listed in HoldPkg, pacman will ask for confirmation before proceeding. Shell-style glob patterns are allowed.
HoldPkg = package ...
# Instructs pacman to ignore any upgrades for this package when performing a --sysupgrade. Shell-style glob patterns are allowed.
IgnorePkg = package ...
# Instructs pacman to ignore any upgrades for all packages in this group when performing a --sysupgrade. Shell-style glob patterns are allowed.
IgnoreGroup = group ...
# Include another configuration file.
# This file can include repositories or general configuration options.
# Wildcards in the specified paths will get expanded based on glob rules.
Include = /path/to/config/file
# If set, pacman will only allow installation of packages with the given architectures (e.g. i686, x86_64, etc).
# The special value auto will use the system architecture, provided via “uname -m”.
# If unset, no architecture checks are made.
# NOTE: Packages with the special architecture any can always be installed, as they are meant to be architecture independent.
Architecture = auto &| i686 &| x86_64 | ...
# If set, an external program will be used to download all remote files.
# All instances of %u will be replaced with the download URL.
# If present, instances of %o will be replaced with the local filename, plus a “.part” extension, which allows programs like wget to do file resumes properly.
XferCommand = /path/to/command %u [%o]
# All files listed with a NoUpgrade directive will never be touched during a package install/upgrade, and the new files will be installed with a .pacnew extension.
# These files refer to files in the package archive, so do not include the leading slash (the RootDir) when specifying them.
# Shell-style glob patterns are allowed. It is possible to invert matches by prepending a file with an exclamation mark.
# Inverted files will result in previously blacklisted files being whitelisted again. Subsequent matches will override previous ones.
# A leading literal exclamation mark or backslash needs to be escaped.
NoUpgrade = file ...
# All files listed with a NoExtract directive will never be extracted from a package into the filesystem.
# This can be useful when you don't want part of a package to be installed.
# For example, if your httpd root uses an index.php, then you would not want the index.html file to be extracted from the apache package.
# These files refer to files in the package archive, so do not include the leading slash (the RootDir) when specifying them.
# Shell-style glob patterns are allowed. It is possible to invert matches by prepending a file with an exclamation mark.
# Inverted files will result in previously blacklisted files being whitelisted again. Subsequent matches will override previous ones.
# A leading literal exclamation mark or backslash needs to be escaped.
NoExtract = file ...
# If set to KeepInstalled (the default), the -Sc operation will clean packages that are no longer installed (not present in the local database).
# If set to KeepCurrent, -Sc will clean outdated packages (not present in any sync database).
# The second behavior is useful when the package cache is shared among multiple machines, where the local databases are usually different, but the sync databases in use could be the same.
# If both values are specified, packages are only cleaned if not installed locally and not present in any known sync database.
CleanMethod = KeepInstalled &| KeepCurrent
# Set the default signature verification level. For more information, see Package and Database Signature Checking below.
SigLevel = ...
# Set the signature verification level for installing packages using the "-U" operation on a local file. Uses the value from SigLevel as the default.
LocalFileSigLevel = ...
# Set the signature verification level for installing packages using the "-U" operation on a remote file URL. Uses the value from SigLevel as the default.
RemoteFileSigLevel = ...
# Log action messages through syslog().
# This will insert log entries into /var/log/messages or equivalent.
UseSyslog
# Automatically enable colors only when pacman's output is on a tty.
Color
# Disables progress bars. This is useful for terminals which do not support escape characters.
NoProgressBar
# Performs an approximate check for adequate available disk space before installing packages.
CheckSpace
# Displays name, version and size of target packages formatted as a table for upgrade, sync and remove operations.
VerbosePkgLists
# Disable defaults for low speed limit and timeout on downloads.
# Use this if you have issues downloading files with proxy and/or security gateway.
DisableDownloadTimeout
# Specifies number of concurrent download streams.
# The value needs to be a positive integer.
# If this config option is not set then only one download stream is used (i.e. downloads happen sequentially).
ParallelDownloads = ...
# Specifies the user to switch to for downloading files.
# If this config option is not set then the downloads are done as the user running pacman.
DownloadUser = username
# Disable the default sandbox applied to the process downloading files on Linux systems.
# Useful if experiencing landlock related failures while downloading files when running a Linux kernel that does not support this feature.
DisableSandbox
```
### Repository Sections
Each repository section defines a section name and at least one location where the packages can be found. The section name is defined by the string within square brackets (e.g. `core` in the example below). Repository names must be unique, and the name `local` is reserved for the database of installed packages. Locations are defined with the `Server` directive and follow a URL naming structure. If you want to use a local directory, you can specify the full path with a `file://` prefix.
A common way to define DB locations utilizes the Include directive. For each repository defined in the configuration file, a single Include directive can contain a file that lists the servers for that repository.
```ini
[core]
# use this server first
Server = ftp://ftp.archlinux.org/$repo/os/$arch
# next use servers as defined in the mirrorlist below
Include = {sysconfdir}/pacman.d/mirrorlist
# Include another config file.
Include = path
# A full URL to a location where the packages, and signatures (if available) for this repository can be found.
# Cache servers will be tried before any non-cache servers, will not be removed from the server pool for 404 download errors, and will not be used for database files.
CacheServer = url
# A full URL to a location where the database, packages, and signatures (if available) for this repository can be found.
Server = url
# Set the signature verification level for this repository. For more information, see Package and Database Signature Checking below.
SigLevel = ...
# Set the usage level for this repository. This option takes a list of tokens which must be at least one of the following:
# Sync : Enables refreshes for this repository.
# Search : Enables searching for this repository.
# Install : Enables installation of packages from this repository during a --sync operation.
# Upgrade : Allows this repository to be a valid source of packages when performing a --sysupgrade.
# All : Enables all of the above features for the repository. This is the default if not specified.
# Note that an enabled repository can be operated on explicitly, regardless of the Usage level set.
Usage = ...
```
### Signature Checking
The `SigLevel` directive is valid in both the `[options]` and repository sections. If used in `[options]`, it sets a default value for any repository that does not provide the setting.
- If set to `Never`, no signature checking will take place.
- If set to `Optional`, signatures will be checked when present, but unsigned databases and packages will also be accepted.
- If set to `Required`, signatures will be required on all packages and databases.
### Hooks
libalpm provides the ability to specify hooks to run before or after transactions based on the packages and/or files being modified. Hooks consist of a single `[Action]` section describing the action to be run and one or more `[Trigger]` sections describing which transactions it should be run for.
Hooks are read from files located in the system hook directory `/usr/share/libalpm/hooks`, and additional custom directories specified in pacman.conf (the default is `/etc/pacman.d/hooks`). The file names are required to have the suffix `.hook`. Hooks are run in alphabetical order of their file name, where the ordering ignores the suffix.
Hooks may be overridden by placing a file with the same name in a higher priority hook directory. Hooks may be disabled by overriding them with a symlink to `/dev/null`.
Hooks must contain at least one `[Trigger]` section that determines which transactions will cause the hook to run. If multiple trigger sections are defined the hook will run if the transaction matches any of the triggers.
```ini
# Example: Force disks to sync to reduce the risk of data corruption
[Trigger]
# Select the type of operation to match targets against.
# May be specified multiple times.
# Installations are considered an upgrade if the package or file is already present on the system regardless of whether the new package version is actually greater than the currently installed version. For Path triggers, this is true even if the file changes ownership from one package to another.
# Operation = Install | Upgrade | Remove
Operation = Install
Operation = Upgrade
Operation = Remove
# Select whether targets are matched against transaction packages or files.
# Type = Path|Package
Type = Package
# The path or package name to match against the active transaction.
# Paths refer to the files in the package archive; the installation root should not be included in the path.
# Shell-style glob patterns are allowed. It is possible to invert matches by prepending a target with an exclamation mark. May be specified multiple times.
# Target = <path|package>
Target = *
[Action]
# An optional description that describes the action being taken by the hook for use in front-end output.
# Description = ...
# Packages that must be installed for the hook to run. May be specified multiple times.
# Depends = <package>
Depends = coreutils
# When to run the hook. Required.
# When = PreTransaction | PostTransaction
When = PostTransaction
# Command to run.
# Command arguments are split on whitespace. Values containing whitespace should be enclosed in quotes.
# Exec = <command>
Exec = /usr/bin/sync
# Causes the transaction to be aborted if the hook exits non-zero. Only applies to PreTransaction hooks.
# AbortOnFail
# Causes the list of matched trigger targets to be passed to the running hook on stdin.
# NeedsTargets
```
## Repositories
You can create your own package repository.
A repository essentially consists of:
- the packages (`.tar.zst`) and their signatures (`.tar.zst.sig`)
- a package index (`.db.tar.gz`)
### Adding a repo
To use a repo, add it to your `pacman.conf`:
```ini
# Local Repository
[myrepo]
SigLevel = Optional TrustAll
Server = file:///path/to/myrepo
# Remote Repository
[myrepo]
SigLevel = Optional
Server = http://yourserver.com/myrepo
```
### Package Database
To manage the package data (index) use the `repo-add` and `repo-remove` commands.
`repo-add` will update a package database by reading a built package file. Multiple packages to add can be specified on the command line.
If a matching `.sig` file is found alongside a package file, the signature will automatically be embedded into the database.
`repo-remove` will update a package database by removing the package name specified on the command line. Multiple packages to remove can be specified on the command line.
```sh
repo-add [options] <path-to-db> <package> [<package> ...]
repo-remove [options] <path-to-db> <packagename> [<packagename> ...]
```
| Option | Description |
| ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-q, --quiet` | Force this program to keep quiet and run silently except for warning and error messages. |
| `-s, --sign` | Generate a PGP signature file using GnuPG. This will execute `gpg --detach-sign` on the generated database to generate a detached signature file, using the GPG agent if it is available. |
| `-k, --key <key>` | Specify a key to use when signing packages. Can also be specified using the `GPGKEY` environment variable. If not specified in either location, the default key from the keyring will be used. |
| `-v, --verify` | Verify the PGP signature of the database before updating the database. If the signature is invalid, an error is produced and the update does not proceed. |
| `--nocolor` | Remove color from repo-add and repo-remove output. |
| **`repo-add` ONLY OPTIONS:** | - |
| `-n, --new` | Only add packages that are not already in the database. Warnings will be printed upon detection of existing packages, but they will not be re-added. |
| `-R, --remove` | Remove old package files from the disk when updating their entry in the database. |
| `--include-sigs` | Include package PGP signatures in the repository database (if available) |
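For example, a typical invocation that adds a freshly built package, removes its old file from disk, and signs the database (file names are illustrative):
```sh
repo-add --new --remove --sign myrepo.db.tar.gz mypkg-1.0-1-x86_64.pkg.tar.zst
```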
## Package Signing
To determine if packages are authentic, pacman uses OpenPGP keys in a web of trust model. Each user also has a unique OpenPGP key, which is generated when you configure `pacman-key`.
Examples of webs of trust:
- Custom packages: Packages made and signed with a local key.
- Unofficial packages: Packages made and signed by a developer. Then, a local key was used to sign the developer's key.
- Official packages: Packages made and signed by a developer. The developer's key was signed by the Arch Linux master keys. You used your key to sign the master keys, and you trust them to vouch for developers.
### Setup
The `SigLevel` option in `/etc/pacman.conf` determines the level of trust required to install a package with `pacman -S`. One can set signature checking globally, or per repository. If `SigLevel` is set globally in the `[options]` section, all packages installed with `pacman -S` will require signing. With the `LocalFileSigLevel` setting from the default `pacman.conf`, any packages you build, and install with `pacman -U`, will not need to be signed using `makepkg`.
For remote packages, the default configuration will only support the installation of packages signed by trusted keys:
```
# /etc/pacman.conf
SigLevel = Required DatabaseOptional TrustedOnly
```
To initialize the pacman keyring run:
```sh
pacman-key --init
```
### Keyring Management
#### Verifying the master keys
The initial setup of keys is achieved using:
```sh
pacman-key --populate
```
OpenPGP keys are too large (2048 bits or more) for humans to work with, so they are usually hashed to create a 40-hex-digit fingerprint which can be used to check by hand that two keys are the same. The last eight digits of the fingerprint serve as a name for the key known as the '(short) key ID' (the last sixteen digits of the fingerprint would be the 'long key ID').
#### Adding developer keys
The official Developers' and Package Maintainers' keys are signed by the master keys, so you do not need to use `pacman-key` to sign them yourself. Whenever pacman encounters a key it does not recognize, it will prompt you to download it from a keyserver configured in `/etc/pacman.d/gnupg/gpg.conf` (or by using the `--keyserver` option on the command line).
Once you have downloaded a developer key, you will not have to download it again, and it can be used to verify any other packages signed by that developer.
> **Note**: The `archlinux-keyring` package, which is a dependency of base, contains the latest keys. However keys can also be updated manually using `pacman-key --refresh-keys` (as root). While doing `--refresh-keys`, your local key will also be looked up on the remote keyserver, and you will receive a message about it not being found. This is nothing to be concerned about.
#### Adding unofficial keys
This method can be utilized to add a key to the pacman keyring, or to enable signed unofficial user repositories.
First, get the key ID (keyid) from its owner. Then add it to the keyring using one of the two methods:
If the key is found on a keyserver, import it with:
```sh
pacman-key --recv-keys keyid
```
If otherwise a link to a keyfile is provided, download it and then run:
```sh
pacman-key --add /path/to/downloaded/keyfile
```
It is recommended to verify the fingerprint, as with any master key or any other key you are going to sign:
```sh
pacman-key --finger keyid
```
Finally, you must locally sign the imported key:
```sh
pacman-key --lsign-key keyid
```
You now trust this key to sign packages.

View file

@ -1,11 +1,190 @@
---
arch-wiki: https://wiki.archlinux.org/title/Makepkg
obj: application
rev: 2024-12-19
---
# makepkg
makepkg is a tool for creating [pacman](Pacman.md) packages based on [PKGBUILD](PKGBUILD.md) files.
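Typical usage, run from the directory containing the `PKGBUILD`:
```sh
makepkg -si   # -s: install missing build dependencies via pacman, -i: install the built package
```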
## Configuration
The system configuration is available in `/etc/makepkg.conf`, but user-specific changes can be made in `$XDG_CONFIG_HOME/pacman/makepkg.conf` or `~/.makepkg.conf`. Also, system-wide changes can be made with a drop-in file `/etc/makepkg.conf.d/makepkg.conf`. It is recommended to review the configuration prior to building packages.
> **Tip**: devtools helper scripts for building packages in a clean chroot use the `/usr/share/devtools/makepkg.conf.d/arch.conf` configuration file instead.
```sh
#!/hint/bash
# shellcheck disable=2034
#
# /etc/makepkg.conf
#
#########################################################################
# SOURCE ACQUISITION
#########################################################################
#
#-- The download utilities that makepkg should use to acquire sources
# Format: 'protocol::agent'
DLAGENTS=('file::/usr/bin/curl -qgC - -o %o %u'
'ftp::/usr/bin/curl -qgfC - --ftp-pasv --retry 3 --retry-delay 3 -o %o %u'
'http::/usr/bin/curl -qgb "" -fLC - --retry 3 --retry-delay 3 -o %o %u'
'https::/usr/bin/curl -qgb "" -fLC - --retry 3 --retry-delay 3 -o %o %u'
'rsync::/usr/bin/rsync --no-motd -z %u %o'
'scp::/usr/bin/scp -C %u %o')
# Other common tools:
# /usr/bin/snarf
# /usr/bin/lftpget -c
# /usr/bin/wget
#-- The package required by makepkg to download VCS sources
# Format: 'protocol::package'
VCSCLIENTS=('bzr::breezy'
'fossil::fossil'
'git::git'
'hg::mercurial'
'svn::subversion')
#########################################################################
# ARCHITECTURE, COMPILE FLAGS
#########################################################################
#
CARCH="x86_64"
CHOST="x86_64-pc-linux-gnu"
#-- Compiler and Linker Flags
#CPPFLAGS=""
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \
-Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security \
-fstack-clash-protection -fcf-protection \
-fno-omit-frame-pointer -mno-omit-leaf-frame-pointer"
CXXFLAGS="$CFLAGS -Wp,-D_GLIBCXX_ASSERTIONS"
LDFLAGS="-Wl,-O1 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now \
-Wl,-z,pack-relative-relocs"
LTOFLAGS="-flto=auto"
#-- Make Flags: change this for DistCC/SMP systems
MAKEFLAGS="-j8"
#-- Debugging flags
DEBUG_CFLAGS="-g"
DEBUG_CXXFLAGS="$DEBUG_CFLAGS"
#########################################################################
# BUILD ENVIRONMENT
#########################################################################
#
# Makepkg defaults: BUILDENV=(!distcc !color !ccache check !sign)
# A negated environment option will do the opposite of the comments below.
#
#-- distcc: Use the Distributed C/C++/ObjC compiler
#-- color: Colorize output messages
#-- ccache: Use ccache to cache compilation
#-- check: Run the check() function if present in the PKGBUILD
#-- sign: Generate PGP signature file
#
BUILDENV=(!distcc color !ccache check !sign)
#
#-- If using DistCC, your MAKEFLAGS will also need modification. In addition,
#-- specify a space-delimited list of hosts running in the DistCC cluster.
#DISTCC_HOSTS=""
#-- Specify a directory for package building.
BUILDDIR=/tmp/makepkg
#########################################################################
# GLOBAL PACKAGE OPTIONS
# These are default values for the options=() settings
#########################################################################
#
# Makepkg defaults: OPTIONS=(!strip docs libtool staticlibs emptydirs !zipman !purge !debug !lto !autodeps)
# A negated option will do the opposite of the comments below.
#
#-- strip: Strip symbols from binaries/libraries
#-- docs: Save doc directories specified by DOC_DIRS
#-- libtool: Leave libtool (.la) files in packages
#-- staticlibs: Leave static library (.a) files in packages
#-- emptydirs: Leave empty directories in packages
#-- zipman: Compress manual (man and info) pages in MAN_DIRS with gzip
#-- purge: Remove files specified by PURGE_TARGETS
#-- debug: Add debugging flags as specified in DEBUG_* variables
#-- lto: Add compile flags for building with link time optimization
#-- autodeps: Automatically add depends/provides
#
OPTIONS=(strip docs !libtool !staticlibs emptydirs zipman purge !debug lto)
#-- File integrity checks to use. Valid: md5, sha1, sha224, sha256, sha384, sha512, b2
INTEGRITY_CHECK=(sha256)
#-- Options to be used when stripping binaries. See `man strip' for details.
STRIP_BINARIES="--strip-all"
#-- Options to be used when stripping shared libraries. See `man strip' for details.
STRIP_SHARED="--strip-unneeded"
#-- Options to be used when stripping static libraries. See `man strip' for details.
STRIP_STATIC="--strip-debug"
#-- Manual (man and info) directories to compress (if zipman is specified)
MAN_DIRS=({usr{,/local}{,/share},opt/*}/{man,info})
#-- Doc directories to remove (if !docs is specified)
DOC_DIRS=(usr/{,local/}{,share/}{doc,gtk-doc} opt/*/{doc,gtk-doc})
#-- Files to be removed from all packages (if purge is specified)
PURGE_TARGETS=(usr/{,share}/info/dir .packlist *.pod)
#-- Directory to store source code in for debug packages
DBGSRCDIR="/usr/src/debug"
#-- Prefix and directories for library autodeps
LIB_DIRS=('lib:usr/lib' 'lib32:usr/lib32')
#########################################################################
# PACKAGE OUTPUT
#########################################################################
#
# Default: put built package and cached source in build directory
#
#-- Destination: specify a fixed directory where all packages will be placed
PKGDEST=/home/packages
#-- Source cache: specify a fixed directory where source files will be cached
SRCDEST=/home/sources
#-- Source packages: specify a fixed directory where all src packages will be placed
SRCPKGDEST=/home/srcpackages
#-- Log files: specify a fixed directory where all log files will be placed
#LOGDEST=/home/makepkglogs
#-- Packager: name/email of the person or organization building packages
PACKAGER="John Doe <john@doe.com>"
#-- Specify a key to use for package signing
GPGKEY=""
#########################################################################
# COMPRESSION DEFAULTS
#########################################################################
#
COMPRESSGZ=(gzip -c -f -n)
COMPRESSBZ2=(bzip2 -c -f)
COMPRESSXZ=(xz -c -z -)
COMPRESSZST=(zstd -c -T0 -)
COMPRESSLRZ=(lrzip -q)
COMPRESSLZO=(lzop -q)
COMPRESSZ=(compress -c -f)
COMPRESSLZ4=(lz4 -q)
COMPRESSLZ=(lzip -c -f)
#########################################################################
# EXTENSION DEFAULTS
#########################################################################
#
PKGEXT='.pkg.tar.zst'
SRCEXT='.src.tar.gz'
#########################################################################
# OTHER
#########################################################################
#
#-- Command used to run pacman as root, instead of trying sudo and su
#PACMAN_AUTH=()
# vim: set ft=sh ts=2 sw=2 et:
```
## Usage
Make a package:
```shell
@ -39,7 +218,7 @@ makepkg --verifysource
## Options
| Option | Description |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-A, --ignorearch` | Ignore a missing or incomplete arch field in the build script |
| `-c, --clean` | Clean up leftover work files and directories after a successful build |
| `-d, --nodeps` | Do not perform any dependency checks. This will let you override and ignore any dependencies required. There is a good chance this option will break the build process if all of the dependencies are not installed |
@ -52,8 +231,88 @@ makepkg --verifysource
| `-r, --rmdeps` | Upon successful build, remove any dependencies installed by makepkg during dependency auto-resolution and installation |
| `-s, --syncdeps` | Install missing dependencies using [pacman](Pacman.md). When build-time or run-time dependencies are not found, [pacman](Pacman.md) will try to resolve them. If successful, the missing packages will be downloaded and installed |
| `-C, --cleanbuild` | Remove the $srcdir before building the package |
| `-f, --force` | Overwrite package if it already exists |
| `--noarchive` | Do not create the archive at the end of the build process. This can be useful to test the package() function or if your target distribution does not use [pacman](Pacman.md) |
| `--sign` | Sign the resulting package with [gpg](../../../cryptography/GPG.md) |
| `--nosign` | Do not create a signature for the built package |
| `--key <key>` | Specify a key to use when signing packages |
| `--noconfirm` | (Passed to [pacman](Pacman.md)) Prevent [pacman](Pacman.md) from waiting for user input before proceeding with operations |
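A common invocation combines several of these flags, e.g. syncing missing dependencies, installing the built package and cleaning up afterwards:
```sh
makepkg -sirc   # --syncdeps --install --rmdeps --clean
```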
## Misc
### Using mold linker
[mold](../../development/mold.md) is a drop-in replacement for ld/lld linkers, which claims to be significantly faster.
To use mold, append `-fuse-ld=mold` to `LDFLAGS`. For example:
```sh
# /etc/makepkg.conf
LDFLAGS="... -fuse-ld=mold"
```
To pass extra options to mold, additionally add those to `LDFLAGS`. For example:
```sh
# /etc/makepkg.conf
LDFLAGS="... -fuse-ld=mold -Wl,--separate-debug-file"
```
To use mold for Rust packages, append `-C link-arg=-fuse-ld=mold` to `RUSTFLAGS`. For example:
```sh
# /etc/makepkg.conf.d/rust.conf
RUSTFLAGS="... -C link-arg=-fuse-ld=mold"
```
### Parallel compilation
The make build system uses the `MAKEFLAGS` environment variable to specify additional options for make. The variable can also be set in the `makepkg.conf` file.
Users with multi-core/multi-processor systems can specify the number of jobs to run simultaneously. This can be accomplished with the use of `nproc` to determine the number of available processors, e.g.
```sh
MAKEFLAGS="--jobs=$(nproc)"
```
Some `PKGBUILDs` specifically override this with `-j1`, because of race conditions in the build or because parallel building is simply not supported.
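Conversely, a misbehaving build can be limited to a single job for one invocation without changing the configuration:
```sh
MAKEFLAGS="-j1" makepkg
```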
### Building from files in memory
As compiling requires many I/O operations and handling of small files, moving the working directory to a [tmpfs](../../../linux/filesystems/tmpFS.md) may bring improvements in build times.
The `BUILDDIR` variable can be temporarily exported to makepkg to set the build directory to an existing tmpfs. For example:
```sh
BUILDDIR=/tmp/makepkg makepkg
```
Persistent configuration can be done in `makepkg.conf` by uncommenting the `BUILDDIR` option, which is found at the end of the BUILD ENVIRONMENT section in the default `/etc/makepkg.conf` file. Setting its value to e.g. `BUILDDIR=/tmp/makepkg` will make use of Arch's default `/tmp` temporary file system.
> **Note:**
> - Avoid compiling larger packages in tmpfs to prevent running out of memory.
> - The tmpfs directory must be mounted without the `noexec` option, otherwise it will prevent built binaries from being executed.
> - Keep in mind that packages compiled in tmpfs will not persist across reboot. Consider setting the `PKGDEST` option appropriately to move the built package automatically to a persistent directory.
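For example, a one-off build in tmpfs with the result moved to a persistent directory (paths are illustrative; makepkg honors both settings as environment variables):
```sh
BUILDDIR=/tmp/makepkg PKGDEST="$HOME/packages" makepkg -s
```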
### Generate new checksums
Install `pacman-contrib` and run the following command in the same directory as the [PKGBUILD](./PKGBUILD.md) file to generate new checksums:
```sh
updpkgsums
```
`updpkgsums` uses `makepkg --geninteg` to generate the checksums.
The checksums can also be obtained with e.g. `sha256sum` and added to the `sha256sums` array by hand.
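Alternatively, without `pacman-contrib`, the output of `makepkg --geninteg` can be appended to the build script directly and merged by hand afterwards:
```sh
makepkg -g >> PKGBUILD
```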
### Build from local source files
If you want to make changes to the source code, you can download the source code without building the package by using the `-o, --nobuild` option (download and extract files only).
```sh
makepkg -o
```
You can now make changes to the sources and then build the package by using the `-e, --noextract` option (do not extract source files; use the existing `$srcdir/` directory). Use the `-f` option to overwrite already built and existing packages.
```sh
makepkg -ef
```

View file

@ -0,0 +1,103 @@
---
obj: application
repo: https://github.com/containers/bubblewrap
arch-wiki: https://wiki.archlinux.org/title/Bubblewrap
rev: 2025-01-09
---
# Bubblewrap
Bubblewrap is a lightweight sandbox application used by Flatpak and other container tools. It has a small installation footprint and minimal resource requirements. Notable features include support for cgroup/IPC/mount/network/PID/user/UTS namespaces and seccomp filtering. Note that bubblewrap drops all capabilities within a sandbox and that child tasks cannot gain greater privileges than their parent.
## Configuration
Bubblewrap can be called directly from the command-line and/or within shell scripts as part of a complex wrapper.
A no-op bubblewrap invocation is as follows:
```sh
bwrap --dev-bind / / bash
```
This will spawn a Bash process which should behave exactly as outside a sandbox in most cases. If a sandboxed program misbehaves, you may want to start from the above no-op invocation, and work your way towards a more secure configuration step-by-step.
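As a sketch of a tighter configuration (assuming a merged-`/usr` layout as on Arch): a read-only `/usr`, fresh `/proc`, `/dev` and `/tmp`, and all namespaces unshared:
```sh
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --symlink usr/lib64 /lib64 \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --unshare-all \
  bash
```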
### Desktop entries
Leverage Bubblewrap within desktop entries:
- Bind as read-write the entire host `/` directory to `/` in the sandbox
- Re-bind as read-only the `/var` and `/etc` directories in the sandbox
- Mount a new devtmpfs filesystem to `/dev` in the sandbox
- Create a tmpfs filesystem over the sandboxed `/run` directory
- Disable network access by creating new network namespace
```ini
[Desktop Entry]
Name=nano Editor
Exec=bwrap --bind / / --ro-bind /var /var --ro-bind /etc /etc --dev /dev --tmpfs /run --unshare-net st -e nano -o . %f
Type=Application
MimeType=text/plain;
```
> **Note**: `--dev /dev` is required to write to `/dev/pty`
## Options
Usage: `bwrap [options] [command]`
| Option | Description |
| ------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `--args FD` | Parse nul-separated arguments from the given file descriptor. This option can be used multiple times to parse options from multiple sources. |
| `--argv0 VALUE` | Set `argv[0]` to the value VALUE before running the program |
| `--unshare-user` | Create a new user namespace |
| `--unshare-user-try` | Create a new user namespace if possible else skip it |
| `--unshare-ipc` | Create a new ipc namespace |
| `--unshare-pid` | Create a new pid namespace |
| `--unshare-net` | Create a new network namespace |
| `--unshare-uts` | Create a new uts namespace |
| `--unshare-cgroup` | Create a new cgroup namespace |
| `--unshare-cgroup-try` | Create a new cgroup namespace if possible else skip it |
| `--unshare-all` | Unshare all possible namespaces. Currently equivalent with: `--unshare-user-try --unshare-ipc --unshare-pid --unshare-net --unshare-uts --unshare-cgroup-try` |
| `--share-net` | Retain the network namespace, overriding an earlier `--unshare-all` or `--unshare-net` |
| `--userns FD` | Use an existing user namespace instead of creating a new one. The namespace must fulfil the permission requirements for `setns()`, which generally means that it must be a descendant of the currently active user namespace, owned by the same user. |
| `--disable-userns` | Prevent the process in the sandbox from creating further user namespaces, so that it cannot rearrange the filesystem namespace or do other more complex namespace modification. |
| `--assert-userns-disabled` | Confirm that the process in the sandbox has been prevented from creating further user namespaces, but without taking any particular action to prevent that. For example, this can be combined with --userns to check that the given user namespace has already been set up to prevent the creation of further user namespaces. |
| `--pidns FD` | Use an existing pid namespace instead of creating one. This is often used with `--userns`, because the pid namespace must be owned by the same user namespace that bwrap uses. |
| `--uid UID` | Use a custom user id in the sandbox (requires `--unshare-user`) |
| `--gid GID` | Use a custom group id in the sandbox (requires `--unshare-user`) |
| `--hostname HOSTNAME` | Use a custom hostname in the sandbox (requires `--unshare-uts`) |
| `--chdir DIR` | Change directory to DIR |
| `--setenv VAR VALUE` | Set an environment variable |
| `--unsetenv VAR` | Unset an environment variable |
| `--clearenv` | Unset all environment variables, except for PWD and any that are subsequently set by `--setenv` |
| `--lock-file DEST` | Take a lock on DEST while the sandbox is running. This option can be used multiple times to take locks on multiple files. |
| `--sync-fd FD` | Keep this file descriptor open while the sandbox is running |
| `--perms OCTAL` | This option does nothing on its own, and must be followed by one of the options that it affects. It sets the permissions for the next operation to OCTAL. Subsequent operations are not affected: for example, `--perms 0700 --tmpfs /a --tmpfs /b` will mount `/a` with permissions `0700`, then return to the default permissions for `/b`. Note that `--perms` and `--size` can be combined: `--perms 0700 --size 10485760 --tmpfs /s` will apply permissions as well as a maximum size to the created tmpfs. |
| `--size BYTES` | This option does nothing on its own, and must be followed by `--tmpfs`. It sets the size in bytes for the next tmpfs. For example, `--size 10485760 --tmpfs /tmp` will create a tmpfs at `/tmp` of size 10MiB. Subsequent operations are not affected. |
| `--bind SRC DEST` | Bind mount the host path SRC on DEST |
| `--bind-try SRC DEST` | Equal to `--bind` but ignores non-existent SRC |
| `--dev-bind SRC DEST` | Bind mount the host path SRC on DEST, allowing device access |
| `--dev-bind-try SRC DEST` | Equal to `--dev-bind` but ignores non-existent SRC |
| `--ro-bind SRC DEST` | Bind mount the host path SRC readonly on DEST |
| `--ro-bind-try SRC DEST` | Equal to `--ro-bind` but ignores non-existent SRC |
| `--remount-ro DEST` | Remount the path DEST as readonly. It works only on the specified mount point, without changing any other mount point under the specified path |
| `--overlay-src SRC` | This option does nothing on its own, and must be followed by one of the other overlay options. It specifies a host path from which files should be read if they aren't present in a higher layer. |
| `--overlay RWSRC WORKDIR DEST`, `--tmp-overlay DEST`, `--ro-overlay DEST` | Use overlayfs to mount the host paths specified by `RWSRC` and all immediately preceding `--overlay-src` on `DEST`. `DEST` will contain the union of all the files in all the layers. With `--overlay` all writes will go to `RWSRC`. Reads will come preferentially from `RWSRC`, and then from any `--overlay-src` paths. `WORKDIR` must be an empty directory on the same filesystem as `RWSRC`, and is used internally by the kernel. With `--tmp-overlay` all writes will go to the tmpfs that hosts the sandbox root, in a location not accessible from either the host or the child process. Writes will therefore not be persisted across multiple runs. With `--ro-overlay` the filesystem will be mounted read-only. This option requires at least two `--overlay-src` to precede it. |
| `--proc DEST` | Mount procfs on DEST |
| `--dev DEST` | Mount new devtmpfs on DEST |
| `--tmpfs DEST` | Mount new tmpfs on DEST. If the previous option was `--perms`, it sets the mode of the tmpfs. Otherwise, the tmpfs has mode `0755`. If the previous option was `--size`, it sets the size in bytes of the tmpfs. Otherwise, the tmpfs has the default size. |
| `--mqueue DEST` | Mount new mqueue on DEST |
| `--dir DEST` | Create a directory at DEST. If the directory already exists, its permissions are unmodified, ignoring `--perms` (use `--chmod` if the permissions of an existing directory need to be changed). If the directory is newly created and the previous option was `--perms`, it sets the mode of the directory. Otherwise, newly-created directories have mode `0755`. |
| `--file FD DEST` | Copy from the file descriptor FD to DEST. If the previous option was `--perms`, it sets the mode of the new file. Otherwise, the file has mode `0666` (note that this is not the same as `--bind-data`). |
| `--bind-data FD DEST` | Copy from the file descriptor FD to a file which is bind-mounted on DEST. If the previous option was `--perms`, it sets the mode of the new file. Otherwise, the file has mode `0600` (note that this is not the same as `--file`). |
| `--ro-bind-data FD DEST` | Copy from the file descriptor FD to a file which is bind-mounted read-only on DEST. If the previous option was `--perms`, it sets the mode of the new file. Otherwise, the file has mode `0600` (note that this is not the same as `--file`). |
| `--symlink SRC DEST` | Create a symlink at DEST with target SRC. |
| `--chmod OCTAL PATH` | Set the permissions of PATH, which must already exist, to OCTAL. |
| `--seccomp FD` | Load and use seccomp rules from FD. The rules need to be in the form of a compiled cBPF program, as generated by seccomp_export_bpf. If this option is given more than once, only the last one is used. Use `--add-seccomp-fd` if multiple seccomp programs are needed. |
| `--add-seccomp-fd FD` | Load and use seccomp rules from FD. The rules need to be in the form of a compiled cBPF program, as generated by seccomp_export_bpf. This option can be repeated, in which case all the seccomp programs will be loaded in the order given (note that the kernel will evaluate them in reverse order, so the last program on the bwrap command-line is evaluated first). All of them, except possibly the last, must allow use of the PR_SET_SECCOMP prctl. This option cannot be combined with `--seccomp`. |
| `--exec-label LABEL` | Exec Label from the sandbox. On an SELinux system you can specify the SELinux context for the sandbox process(s). |
| `--file-label LABEL` | File label for temporary sandbox content. On an SELinux system you can specify the SELinux context for the sandbox content. |
| `--block-fd FD` | Block the sandbox on reading from FD until some data is available. |
| `--userns-block-fd FD` | Do not initialize the user namespace but wait on FD until it is ready. This allow external processes (like newuidmap/newgidmap) to setup the user namespace before it is used by the sandbox process. |
| `--info-fd FD` | Write information in JSON format about the sandbox to FD. |
| `--json-status-fd FD` | Multiple JSON documents are written to FD, one per line. |
| `--new-session` | Create a new terminal session for the sandbox (calls `setsid()`). This disconnects the sandbox from the controlling terminal which means the sandbox can't for instance inject input into the terminal. Note: In a general sandbox, if you don't use `--new-session`, it is recommended to use seccomp to disallow the `TIOCSTI` ioctl, otherwise the application can feed keyboard input to the terminal which can e.g. lead to out-of-sandbox command execution. |
| `--die-with-parent` | Ensures child process (COMMAND) dies when bwrap's parent dies. Kills (SIGKILL) all bwrap sandbox processes in sequence from parent to child including COMMAND process when bwrap or bwrap's parent dies. |
| `--as-pid-1` | Do not create a process with PID=1 in the sandbox to reap child processes. |
| `--cap-add CAP` | Add the specified capability CAP, e.g. `CAP_DAC_READ_SEARCH`, when running as privileged user. It accepts the special value `ALL` to add all the permitted caps. |
| `--cap-drop CAP` | Drop the specified capability when running as privileged user. It accepts the special value `ALL` to drop all the caps. By default no caps are left in the sandboxed process. The `--cap-add` and `--cap-drop` options are processed in the order they are specified on the command line, so pay attention to their order. |

View file

@ -5,4 +5,131 @@ website: https://goauthentik.io
---
# Authentik
#wip
Authentik is an open-source Identity Provider (IDP) that aims to unify your identity needs into a single platform. It can replace Okta, Active Directory, and Auth0, offering a comprehensive solution for managing user identities. Authentik Security Inc., a public-benefit company, develops the product on top of the open-source project.
## Features
- **Self-host anywhere**: Authentik allows you to self-host your identity provider, giving you complete control over your data and infrastructure.
- **Multi-Factor Authentication (MFA)**: This feature helps ensure the security of your user accounts by requiring multiple forms of identification before granting access.
- **Conditional Access**: Authentik enables you to set conditions for accessing specific resources based on factors such as location, device, or time.
- **Open-source and source available**: The project is fully open-source, with its source code available for anyone to inspect and contribute to.
- **Application Proxy**: This feature allows you to securely connect your applications to Authentik without the need to modify them.
- **Enterprise support**: Authentik offers dedicated enterprise-level support to ensure smooth deployment and operation of the product within your organization.
### Supported Protocols
- **SAML 2.0**
- **OAuth 2.0 and OIDC**
- **SCIM**
- **LDAP**
- **RADIUS**
## Docker-Compose
`docker-compose.yml`:
```yml
---
services:
postgresql:
image: docker.io/library/postgres:16-alpine
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
start_period: 20s
interval: 30s
retries: 5
timeout: 5s
volumes:
- ./db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: ${PG_PASS:?database password required}
POSTGRES_USER: ${PG_USER:-authentik}
POSTGRES_DB: ${PG_DB:-authentik}
env_file:
- .env
redis:
image: docker.io/library/redis:alpine
command: --save 60 1 --loglevel warning
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
start_period: 20s
interval: 30s
retries: 5
timeout: 3s
volumes:
- ./redis:/data
deploy:
resources:
limits:
memory: 512M
server:
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.8}
restart: unless-stopped
command: server
environment:
AUTHENTIK_REDIS__HOST: redis
AUTHENTIK_POSTGRESQL__HOST: postgresql
AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
volumes:
- ./media:/media
- ./custom-templates:/templates
env_file:
- .env
ports:
- "${COMPOSE_PORT_HTTP:-9000}:9000"
- "${COMPOSE_PORT_HTTPS:-9443}:9443"
depends_on:
- postgresql
- redis
worker:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2024.8}
restart: unless-stopped
command: worker
environment:
AUTHENTIK_REDIS__HOST: redis
AUTHENTIK_POSTGRESQL__HOST: postgresql
AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
# `user: root` and the docker socket volume are optional.
# See more for the docker socket integration here:
# https://goauthentik.io/docs/outposts/integrations/docker
# Removing `user: root` also prevents the worker from fixing the permissions
# on the mounted folders, so when removing this make sure the folders have the correct UID/GID
# (1000:1000 by default)
user: root
volumes:
# - /var/run/docker.sock:/var/run/docker.sock
- ./media:/media
- ./certs:/certs
- ./custom-templates:/templates
env_file:
- .env
depends_on:
- postgresql
- redis
```
`.env`:
```
PG_PASS=<PASSWORD>
AUTHENTIK_SECRET_KEY=<SECRET>
# SMTP Host Emails are sent to
AUTHENTIK_EMAIL__HOST=<HOST>
AUTHENTIK_EMAIL__PORT=465
# Optionally authenticate (don't add quotation marks to your password)
AUTHENTIK_EMAIL__USERNAME=<USER>
AUTHENTIK_EMAIL__PASSWORD=<PASS>
# Use StartTLS
AUTHENTIK_EMAIL__USE_TLS=false
# Use SSL
AUTHENTIK_EMAIL__USE_SSL=true
AUTHENTIK_EMAIL__TIMEOUT=10
# Email address authentik will send from, should have a correct @domain
AUTHENTIK_EMAIL__FROM=<FROM>
COMPOSE_PORT_HTTP=9020
COMPOSE_PORT_HTTPS=9021
```
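With both files in place, the stack can be started with Docker Compose; the initial setup flow should then be reachable on the HTTP port configured above (here `9020`):
```sh
docker compose pull
docker compose up -d
# then visit http://<host>:9020/if/flow/initial-setup/
```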

File diff suppressed because it is too large

View file

@ -0,0 +1,9 @@
---
obj: application
website: https://dawarich.app
repo: https://github.com/Freika/dawarich
---
# dawarich
Dawarich is a platform that allows you to track your location history and view maps with visualizations of your data.
You can set up your own location tracking system by combining it with Overland or [OwnTracks](../mobile/OwnTracks.md).

View file

@ -0,0 +1,123 @@
---
obj: application
repo: https://github.com/FiloSottile/age
source: https://age-encryption.org/v1
rev: 2025-01-09
---
# age
age is a simple, modern and secure file encryption tool, format, and Go library.
It features small explicit keys, no config options, and UNIX-style composability.
```sh
$ age-keygen -o key.txt
Public key: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
$ PUBLIC_KEY=$(age-keygen -y key.txt)
$ tar cvz ~/data | age -r $PUBLIC_KEY > data.tar.gz.age
$ age --decrypt -i key.txt data.tar.gz.age > data.tar.gz
```
## Usage
For the full documentation, read [the age(1) man page](https://filippo.io/age/age.1).
```
Usage:
age [--encrypt] (-r RECIPIENT | -R PATH)... [--armor] [-o OUTPUT] [INPUT]
age [--encrypt] --passphrase [--armor] [-o OUTPUT] [INPUT]
age --decrypt [-i PATH]... [-o OUTPUT] [INPUT]
Options:
-e, --encrypt Encrypt the input to the output. Default if omitted.
-d, --decrypt Decrypt the input to the output.
-o, --output OUTPUT Write the result to the file at path OUTPUT.
-a, --armor Encrypt to a PEM encoded format.
-p, --passphrase Encrypt with a passphrase.
-r, --recipient RECIPIENT Encrypt to the specified RECIPIENT. Can be repeated.
-R, --recipients-file PATH Encrypt to recipients listed at PATH. Can be repeated.
-i, --identity PATH Use the identity file at PATH. Can be repeated.
INPUT defaults to standard input, and OUTPUT defaults to standard output.
If OUTPUT exists, it will be overwritten.
RECIPIENT can be an age public key generated by age-keygen ("age1...")
or an SSH public key ("ssh-ed25519 AAAA...", "ssh-rsa AAAA...").
Recipient files contain one or more recipients, one per line. Empty lines
and lines starting with "#" are ignored as comments. "-" may be used to
read recipients from standard input.
Identity files contain one or more secret keys ("AGE-SECRET-KEY-1..."),
one per line, or an SSH key. Empty lines and lines starting with "#" are
ignored as comments. Passphrase encrypted age files can be used as
identity files. Multiple key files can be provided, and any unused ones
will be ignored. "-" may be used to read identities from standard input.
When --encrypt is specified explicitly, -i can also be used to encrypt to an
identity file symmetrically, instead or in addition to normal recipients.
```
### Multiple recipients
Files can be encrypted to multiple recipients by repeating `-r/--recipient`. Every recipient will be able to decrypt the file.
```
$ age -o example.jpg.age -r age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p \
-r age1lggyhqrw2nlhcxprm67z43rta597azn8gknawjehu9d9dl0jq3yqqvfafg example.jpg
```
#### Recipient files
Multiple recipients can also be listed one per line in one or more files passed with the `-R/--recipients-file` flag.
```
$ cat recipients.txt
# Alice
age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
# Bob
age1lggyhqrw2nlhcxprm67z43rta597azn8gknawjehu9d9dl0jq3yqqvfafg
$ age -R recipients.txt example.jpg > example.jpg.age
```
If the argument to `-R` (or `-i`) is `-`, the file is read from standard input.
### Passphrases
Files can be encrypted with a passphrase by using `-p/--passphrase`. By default age will automatically generate a secure passphrase. Passphrase protected files are automatically detected at decrypt time.
```
$ age -p secrets.txt > secrets.txt.age
Enter passphrase (leave empty to autogenerate a secure one):
Using the autogenerated passphrase "release-response-step-brand-wrap-ankle-pair-unusual-sword-train".
$ age -d secrets.txt.age > secrets.txt
Enter passphrase:
```
### Passphrase-protected key files
If an identity file passed to `-i` is a passphrase encrypted age file, it will be automatically decrypted.
```
$ age-keygen | age -p > key.age
Public key: age1yhm4gctwfmrpz87tdslm550wrx6m79y9f2hdzt0lndjnehwj0ukqrjpyx5
Enter passphrase (leave empty to autogenerate a secure one):
Using the autogenerated passphrase "hip-roast-boring-snake-mention-east-wasp-honey-input-actress".
$ age -r age1yhm4gctwfmrpz87tdslm550wrx6m79y9f2hdzt0lndjnehwj0ukqrjpyx5 secrets.txt > secrets.txt.age
$ age -d -i key.age secrets.txt.age > secrets.txt
Enter passphrase for identity file "key.age":
```
Passphrase-protected identity files are not necessary for most use cases, where access to the encrypted identity file implies access to the whole system. However, they can be useful if the identity file is stored remotely.
### SSH keys
As a convenience feature, age also supports encrypting to `ssh-rsa` and `ssh-ed25519` SSH public keys, and decrypting with the respective private key file. (`ssh-agent` is not supported.)
```
$ age -R ~/.ssh/id_ed25519.pub example.jpg > example.jpg.age
$ age -d -i ~/.ssh/id_ed25519 example.jpg.age > example.jpg
```
Note that SSH key support employs more complex cryptography, and embeds a public key tag in the encrypted file, making it possible to track files that are encrypted to a specific public key.
#### Encrypting to a GitHub user
Combining SSH key support and `-R`, you can easily encrypt a file to the SSH keys listed on a GitHub profile.
```
$ curl https://github.com/benjojo.keys | age -R - example.jpg > example.jpg.age
```

View file

@ -3,7 +3,7 @@ obj: application
wiki: https://en.wikipedia.org/wiki/Git
repo: https://github.com/git/git
website: https://git-scm.com
rev: 2024-04-15
rev: 2024-12-04
---
# Git
@ -287,3 +287,18 @@ git am --abort < patch
## .gitignore
A `.gitignore` file specifies intentionally untracked files that Git should ignore. Files already tracked by Git are not affected.
This file contains pattern on each line which exclude files from git versioning.
## Git Hooks
Git hooks are custom scripts that run automatically in response to certain Git events or actions. These hooks are useful for automating tasks like code quality checks, running tests, enforcing commit message conventions, and more. Git hooks can be executed at different points in the Git workflow, such as before or after a commit, push, or merge.
Git hooks are stored in the `.git/hooks` directory of your repository. By default, this directory contains example scripts with the `.sample` extension. You can customize these scripts by removing the `.sample` extension and editing them as needed.
Hooks only apply to your local repository; they are not transferred when the repository is cloned or pushed. If a hook script exits with a non-zero status, it aborts the associated action. A minimal example follows the list below.
### Common Git Hooks
- pre-commit
- prepare-commit-msg
- commit-msg
- post-commit
- post-checkout
- pre-rebase
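As a minimal sketch, a `pre-commit` hook that rejects commits introducing whitespace errors could look like this (saved as `.git/hooks/pre-commit` and made executable):
```sh
#!/bin/sh
# Abort the commit if the staged changes contain whitespace errors
git diff --cached --check || exit 1
```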

View file

@ -405,7 +405,7 @@ Dart supports single-line comments, multi-line comments, and documentation comme
A single-line comment begins with `//`. Everything between `//` and the end of line is ignored by the Dart compiler.
```dart
void main() {
// TODO: refactor into an AbstractLlamaGreetingFactory?
// refactor into an AbstractLlamaGreetingFactory?
print('Welcome to my Llama farm!');
}
```

View file

@ -2,6 +2,9 @@
obj: meta/collection
---
# Best Practices
- [URL Suffix API](./URL%20Suffix%20API.md)
# Creational Patterns
- [Abstract Factory](creational/Abstract%20Factory%20Pattern.md)
- [Builder](creational/Builder%20Pattern.md)

View file

@ -0,0 +1,15 @@
# URL Suffix API
When designing a website, consider leveraging URL suffixes to indicate the format of the resource being accessed, similar to how file extensions are used in operating systems.
For example, a webpage located at `/blog/post/id` that renders human-readable content could have its machine-readable data served by appending a format-specific suffix to the same URL, such as `/blog/post/id.json`.
#### Benefits:
1. **Intuitive API from Website Usage**
Users can easily derive API endpoints from existing website URLs by appending the desired format suffix.
2. **Interchangeable Formats**
The same approach allows for multiple formats (e.g., `.json`, `.msgpack`, `.protobuf`) to be supported seamlessly, improving flexibility and usability.
This method simplifies the architecture, enhances consistency, and provides an elegant mechanism to serve both human-readable and machine-readable content from the same base URL.
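A hypothetical example of such endpoints:
```sh
curl https://example.com/blog/post/42        # human-readable HTML
curl https://example.com/blog/post/42.json   # machine-readable JSON
```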

View file

@ -0,0 +1,84 @@
---
obj: format
website: https://jsonlines.org
extension: "jsonl"
mime: "application/jsonl"
rev: 2024-12-02
---
# JSON Lines
This page describes the JSON Lines text format, also called newline-delimited JSON. JSON Lines is a convenient format for storing structured data that may be processed one record at a time. It works well with unix-style text processing tools and shell pipelines. It's a great format for log files. It's also a flexible format for passing messages between cooperating processes.
The JSON Lines format has three requirements:
- **UTF-8 Encoding**: JSON allows encoding Unicode strings with only ASCII escape sequences, however those escapes will be hard to read when viewed in a text editor. The author of the JSON Lines file may choose to escape characters to work with plain ASCII files. Encodings other than UTF-8 are very unlikely to be valid when decoded as UTF-8 so the chance of accidentally misinterpreting characters in JSON Lines files is low.
- **Each Line is a Valid JSON Value**: The most common values will be objects or arrays, but any JSON value is permitted.
- **Line Separator is `\n`**: This means `\r\n` is also supported because surrounding white space is implicitly ignored when parsing JSON values.
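As a quick sketch, `jq -c` turns a JSON array into this format by emitting one compact value per line:
```sh
echo '[{"a": 1}, {"a": 2}]' | jq -c '.[]'
```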
## Better than CSV
```json
["Name", "Session", "Score", "Completed"]
["Gilbert", "2013", 24, true]
["Alexa", "2013", 29, true]
["May", "2012B", 14, false]
["Deloise", "2012A", 19, true]
```
CSV seems so easy that many programmers have written code to generate it themselves, and almost every implementation is different. Handling broken CSV files is a common and frustrating task. CSV has no standard encoding, no standard column separator and multiple character escaping standards. String is the only type supported for cell values, so some programs attempt to guess the correct types.
JSON Lines handles tabular data cleanly and without ambiguity. Cells may use the standard JSON types.
The biggest missing piece is an import/export filter for popular spreadsheet programs so that non-programmers can use this format.
## Self-describing data
```json
{"name": "Gilbert", "session": "2013", "score": 24, "completed": true}
{"name": "Alexa", "session": "2013", "score": 29, "completed": true}
{"name": "May", "session": "2012B", "score": 14, "completed": false}
{"name": "Deloise", "session": "2012A", "score": 19, "completed": true}
```
JSON Lines enables applications to read objects line-by-line, with each line fully describing a JSON object. The example above contains the same data as the tabular example above, but allows applications to split files on newline boundaries for parallel loading, and eliminates any ambiguity if fields are omitted or re-ordered.
## Easy Nested Data
```json
{"name": "Gilbert", "wins": [["straight", "7♣"], ["one pair", "10♥"]]}
{"name": "Alexa", "wins": [["two pair", "4♠"], ["two pair", "9♠"]]}
{"name": "May", "wins": []}
{"name": "Deloise", "wins": [["three of a kind", "5♣"]]}
```
JSON Lines' biggest strength is in handling lots of similar nested data structures. One `.jsonl` file is easier to work with than a directory full of XML files.
If you have large nested structures then reading the JSON Lines text directly isn't recommended. Use the "jq" tool to make viewing large structures easier:
```
grep pair winning_hands.jsonl | jq .
{
"name": "Gilbert",
"wins": [
[
"straight",
"7♣"
],
[
"one pair",
"10♥"
]
]
}
{
"name": "Alexa",
"wins": [
[
"two pair",
"4♠"
],
[
"two pair",
"9♠"
]
]
}
```

View file

@ -0,0 +1,251 @@
---
obj: concept
website: https://ogp.me
rev: 2024-12-16
---
# The Open Graph protocol
The [Open Graph protocol](https://ogp.me/) enables any web page to become a rich object in a social graph. For instance, this is used on Facebook to allow any web page to have the same functionality as any other object on Facebook.
## Basic Metadata
To turn your web pages into graph objects, you need to add basic metadata to your page. This means placing additional `<meta>` tags in the `<head>` of your web page. The four required properties for every page are:
- `og:title` - The title of your object as it should appear within the graph, e.g., "The Rock".
- `og:type` - The type of your object, e.g., `video.movie`. Depending on the type you specify, other properties may also be required.
- `og:image` - An image URL which should represent your object within the graph.
- `og:url` - The canonical URL of your object that will be used as its permanent ID in the graph, e.g., "https://www.imdb.com/title/tt0117500/".
As an example, the following is the Open Graph protocol markup for [The Rock on IMDB](https://www.imdb.com/title/tt0117500/):
```html
<html prefix="og: https://ogp.me/ns#">
<head>
<title>The Rock (1996)</title>
<meta property="og:title" content="The Rock" />
<meta property="og:type" content="video.movie" />
<meta property="og:url" content="https://www.imdb.com/title/tt0117500/" />
<meta property="og:image" content="https://ia.media-imdb.com/images/rock.jpg" />
...
</head>
...
</html>
```
### Optional Metadata
The following properties are optional for any object and are generally recommended:
- `og:audio` - A URL to an audio file to accompany this object.
- `og:description` - A one to two sentence description of your object.
- `og:determiner` - The word that appears before this object's title in a sentence. An enum of (`a`, `an`, `the`, `""`, `auto`). If `auto` is chosen, the consumer of your data should choose between `a` or `an`. Default is `""` (blank).
- `og:locale` - The locale these tags are marked up in. Of the format `language_TERRITORY`. Default is `en_US`.
- `og:locale:alternate` - An array of other locales this page is available in.
- `og:site_name` - If your object is part of a larger web site, the name which should be displayed for the overall site. e.g., "IMDb".
- `og:video` - A URL to a video file that complements this object.
For example (line-break solely for display purposes):
```html
<meta property="og:audio" content="https://example.com/bond/theme.mp3" />
<meta property="og:description"
content="Sean Connery found fame and fortune as the
suave, sophisticated British agent, James Bond." />
<meta property="og:determiner" content="the" />
<meta property="og:locale" content="en_GB" />
<meta property="og:locale:alternate" content="fr_FR" />
<meta property="og:locale:alternate" content="es_ES" />
<meta property="og:site_name" content="IMDb" />
<meta property="og:video" content="https://example.com/bond/trailer.swf" />
```
## Structured Properties
Some properties can have extra metadata attached to them. These are specified in the same way as other metadata with `property` and `content`, but the `property` will have extra `:`.
The `og:image` property has some optional structured properties:
- `og:image:url` - Identical to `og:image`.
- `og:image:secure_url` - An alternate url to use if the webpage requires HTTPS.
- `og:image:type` - A MIME type for this image.
- `og:image:width` - The number of pixels wide.
- `og:image:height` - The number of pixels high.
- `og:image:alt` - A description of what is in the image (not a caption). If the page specifies an og:image it should specify `og:image:alt`.
A full image example:
```html
<meta property="og:image" content="https://example.com/ogp.jpg" />
<meta property="og:image:secure_url" content="https://secure.example.com/ogp.jpg" />
<meta property="og:image:type" content="image/jpeg" />
<meta property="og:image:width" content="400" />
<meta property="og:image:height" content="300" />
<meta property="og:image:alt" content="A shiny red apple with a bite taken out" />
```
The `og:video` tag has the identical tags as `og:image`. Here is an example:
```html
<meta property="og:video" content="https://example.com/movie.swf" />
<meta property="og:video:secure_url" content="https://secure.example.com/movie.swf" />
<meta property="og:video:type" content="application/x-shockwave-flash" />
<meta property="og:video:width" content="400" />
<meta property="og:video:height" content="300" />
```
The `og:audio` tag only has the first 3 properties available (since size doesn't make sense for sound):
```html
<meta property="og:audio" content="https://example.com/sound.mp3" />
<meta property="og:audio:secure_url" content="https://secure.example.com/sound.mp3" />
<meta property="og:audio:type" content="audio/mpeg" />
```
## Arrays
If a tag can have multiple values, just put multiple versions of the same `<meta>` tag on your page. The first tag (from top to bottom) is given preference during conflicts.
```html
<meta property="og:image" content="https://example.com/rock.jpg" />
<meta property="og:image" content="https://example.com/rock2.jpg" />
```
Put structured properties after you declare their root tag. Whenever another root element is parsed, that structured property is considered to be done and another one is started.
For example:
```html
<meta property="og:image" content="https://example.com/rock.jpg" />
<meta property="og:image:width" content="300" />
<meta property="og:image:height" content="300" />
<meta property="og:image" content="https://example.com/rock2.jpg" />
<meta property="og:image" content="https://example.com/rock3.jpg" />
<meta property="og:image:height" content="1000" />
```
means there are 3 images on this page, the first image is `300x300`, the middle one has unspecified dimensions, and the last one is `1000px` tall.
## Object Types
In order for your object to be represented within the graph, you need to specify its type. This is done using the `og:type` property:
```html
<meta property="og:type" content="website" />
```
When the community agrees on the schema for a type, it is added to the list of global types. All other objects in the type system are CURIEs of the form:
```html
<head prefix="my_namespace: https://example.com/ns#">
<meta property="og:type" content="my_namespace:my_type" />
```
The global types are grouped into verticals. Each vertical has its own namespace. The `og:type` values for a namespace are always prefixed with the namespace and then a period. This is to reduce confusion with user-defined namespaced types which always have colons in them.
### Music
- Namespace URI: [`https://ogp.me/ns/music#`](https://ogp.me/ns/music)
`og:type` values:
[`music.song`](https://ogp.me/#type_music.song)
- `music:duration` - [integer](https://ogp.me/#integer) >=1 - The song's length in seconds.
- `music:album` - [music.album](https://ogp.me/#type_music.album) [array](https://ogp.me/#array) - The album this song is from.
- `music:album:disc` - [integer](https://ogp.me/#integer) >=1 - Which disc of the album this song is on.
- `music:album:track` - [integer](https://ogp.me/#integer) >=1 - Which track this song is.
- `music:musician` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - The musician that made this song.
[`music.album`](https://ogp.me/#type_music.album)
- `music:song` - [music.song](https://ogp.me/#type_music.song) - The song on this album.
- `music:song:disc` - [integer](https://ogp.me/#integer) >=1 - The same as `music:album:disc` but in reverse.
- `music:song:track` - [integer](https://ogp.me/#integer) >=1 - The same as `music:album:track` but in reverse.
- `music:musician` - [profile](https://ogp.me/#type_profile) - The musician that made this song.
- `music:release_date` - [datetime](https://ogp.me/#datetime) - The date the album was released.
[`music.playlist`](https://ogp.me/#type_music.playlist)
- `music:song` - Identical to the ones on [music.album](https://ogp.me/#type_music.album)
- `music:song:disc`
- `music:song:track`
- `music:creator` - [profile](https://ogp.me/#type_profile) - The creator of this playlist.
[`music.radio_station`](https://ogp.me/#type_music.radio_station)
- `music:creator` - [profile](https://ogp.me/#type_profile) - The creator of this station.
### Video
- Namespace URI: [`https://ogp.me/ns/video#`](https://ogp.me/ns/video)
`og:type` values:
[`video.movie`](https://ogp.me/#type_video.movie)
- `video:actor` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Actors in the movie.
- `video:actor:role` - [string](https://ogp.me/#string) - The role they played.
- `video:director` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Directors of the movie.
- `video:writer` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Writers of the movie.
- `video:duration` - [integer](https://ogp.me/#integer) >=1 - The movie's length in seconds.
- `video:release_date` - [datetime](https://ogp.me/#datetime) - The date the movie was released.
- `video:tag` - [string](https://ogp.me/#string) [array](https://ogp.me/#array) - Tag words associated with this movie.
[`video.episode`](https://ogp.me/#type_video.episode)
- `video:actor` - Identical to [video.movie](https://ogp.me/#type_video.movie)
- `video:actor:role`
- `video:director`
- `video:writer`
- `video:duration`
- `video:release_date`
- `video:tag`
- `video:series` - [video.tv_show](https://ogp.me/#type_video.tv_show) - Which series this episode belongs to.
[`video.tv_show`](https://ogp.me/#type_video.tv_show)
A multi-episode TV show. The metadata is identical to [video.movie](https://ogp.me/#type_video.movie).
[`video.other`](https://ogp.me/#type_video.other)
A video that doesn't belong in any other category. The metadata is identical to [video.movie](https://ogp.me/#type_video.movie).
### No Vertical
These are globally defined objects that just don't fit into a vertical but yet are broadly used and agreed upon.
`og:type` values:
[`article`](https://ogp.me/#type_article) - Namespace URI: [`https://ogp.me/ns/article#`](https://ogp.me/ns/article)
- `article:published_time` - [datetime](https://ogp.me/#datetime) - When the article was first published.
- `article:modified_time` - [datetime](https://ogp.me/#datetime) - When the article was last changed.
- `article:expiration_time` - [datetime](https://ogp.me/#datetime) - When the article is out of date after.
- `article:author` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Writers of the article.
- `article:section` - [string](https://ogp.me/#string) - A high-level section name. E.g. Technology
- `article:tag` - [string](https://ogp.me/#string) [array](https://ogp.me/#array) - Tag words associated with this article.
[`book`](https://ogp.me/#type_book) - Namespace URI: [`https://ogp.me/ns/book#`](https://ogp.me/ns/book)
- `book:author` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Who wrote this book.
- `book:isbn` - [string](https://ogp.me/#string) - The [ISBN](https://en.wikipedia.org/wiki/International_Standard_Book_Number)
- `book:release_date` - [datetime](https://ogp.me/#datetime) - The date the book was released.
- `book:tag` - [string](https://ogp.me/#string) [array](https://ogp.me/#array) - Tag words associated with this book.
[`profile`](https://ogp.me/#type_profile) - Namespace URI: [`https://ogp.me/ns/profile#`](https://ogp.me/ns/profile)
- `profile:first_name` - [string](https://ogp.me/#string) - A name normally given to an individual by a parent or self-chosen.
- `profile:last_name` - [string](https://ogp.me/#string) - A name inherited from a family or marriage and by which the individual is commonly known.
- `profile:username` - [string](https://ogp.me/#string) - A short unique string to identify them.
- `profile:gender` - [enum](https://ogp.me/#enum)(male, female) - Their gender.
[`website`](https://ogp.me/#type_website) - Namespace URI: [`https://ogp.me/ns/website#`](https://ogp.me/ns/website)
No additional properties other than the basic ones. Any non-marked up webpage should be treated as `og:type` website.
## Types
The following types are used when defining attributes in Open Graph protocol.
| **Type** | **Description** | **Literals** |
| -------- | ---------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
| Boolean | A Boolean represents a true or false value | true, false, 1, 0 |
| DateTime | A DateTime represents a temporal value composed of a date (year, month, day) and an optional time component (hours, minutes) | ISO 8601 |
| Enum | A type consisting of bounded set of constant string values (enumeration members). | A string value that is a member of the enumeration |
| Float | A 64-bit signed floating point number | All literals that conform to the following formats: `1.234`, `-1.234`, `1.2e3`, `-1.2e3`, `7E-10` |
| Integer | A 32-bit signed integer. | All literals that conform to the following formats: `1234`, `-123` |
| String | A sequence of Unicode characters | All literals composed of Unicode characters with no escape characters |
| URL | A sequence of Unicode characters that identify an Internet resource. | All valid URLs that utilize the `http://` or `https://` protocols |

View file

@ -12,6 +12,8 @@ Installation of Arch Linux is typically done manually following the [Wiki](https
curl -L matmoul.github.io/archfi | bash
```
You can create a (custom) ISO with [archiso](./archiso.md).
## Basic Install
```shell
# Set keyboard

View file

@ -43,3 +43,41 @@ A typical Linux system has, among others, the following directories:
| `/var` | This directory contains files which may change in size, such as spool and [log](../dev/Log.md) files. |
| `/var/cache` | Data cached for programs. |
| `/var/log` | Miscellaneous [log](../dev/Log.md) files. |
## Kernel Commandline
The kernel, the programs running in the initrd and in the host system may be configured at boot via kernel command line arguments.
The current cmdline can be seen at `/proc/cmdline`.
If you boot with unified kernel images (UKIs), set the cmdline in `/etc/kernel/cmdline`.
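For example (illustrative content; regenerate the UKI afterwards, e.g. with `mkinitcpio -P`):
```sh
cat /proc/cmdline                       # cmdline of the running kernel
echo "root=/dev/disk/by-uuid/<uuid> rw quiet splash" > /etc/kernel/cmdline
```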
**Common Kernel Cmdline Arguments:**
| Argument | Description |
| ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `quiet` | Parameter understood by both the kernel and the system and service manager to control console log verbosity. |
| `splash` | Show a plymouth splash screen while booting. |
| `init=` | This sets the initial command to be executed by the kernel. If this is not set, or cannot be found, the kernel will try `/sbin/init`, then `/etc/init`, then `/bin/init`, then `/bin/sh` and panic if all of this fails. |
| `ro` and `rw` | The `ro` option tells the kernel to mount the root filesystem as 'read-only'. The `rw` option tells the kernel to mount the root filesystem read/write. This is the default. |
| `resume=...` | This tells the kernel the location of the suspend-to-disk data that you want the machine to resume from after hibernation. Usually, it is the same as your swap partition or file. Example: `resume=/dev/hda2` |
| `panic=N` | By default, the kernel will not reboot after a panic, but this option will cause a kernel reboot after `N` seconds (if `N` is greater than zero). This panic timeout can also be set by `echo N > /proc/sys/kernel/panic` |
| `plymouth.enable=` | May be used to disable the Plymouth boot splash. For details, see plymouth. |
| `vconsole.keymap=, vconsole.keymap_toggle=, vconsole.font=, vconsole.font_map=, vconsole.font_unimap=` | Parameters understood by the virtual console setup logic. For details, see `vconsole.conf` |
| `luks=, rd.luks=` | Defaults to "yes". If "no", disables the crypt mount generator entirely. `rd.luks=` is honored only in the initrd while `luks=` is honored by both the main system and in the initrd. |
| `luks.crypttab=, rd.luks.crypttab=` | Defaults to "yes". If "no", causes the generator to ignore any devices configured in `/etc/crypttab` (`luks.uuid=` will still work however). `rd.luks.crypttab=` is honored only in initrd while `luks.crypttab=` is honored by both the main system and in the initrd. |
| `luks.uuid=, rd.luks.uuid=` | Takes a LUKS superblock UUID as argument. This will activate the specified device as part of the boot process as if it was listed in `/etc/crypttab`. This option may be specified more than once in order to set up multiple devices. `rd.luks.uuid=` is honored only in the initrd, while `luks.uuid=` is honored by both the main system and in the initrd. |
| `luks.name=, rd.luks.name=` | Takes a LUKS super block UUID followed by an `=` and a name. This implies `rd.luks.uuid=` or `luks.uuid=` and will additionally make the LUKS device given by the UUID appear under the provided name. `rd.luks.name=` is honored only in the initrd, while `luks.name=` is honored by both the main system and in the initrd. |
| `luks.options=, rd.luks.options=` | Takes a LUKS super block UUID followed by an `=` and a string of options separated by commas as argument. This will override the options for the given UUID. If only a list of options, without a UUID, is specified, they apply to any UUIDs not specified elsewhere, and without an entry in `/etc/crypttab`. `rd.luks.options=` is honored only by initial RAM disk (initrd) while `luks.options=` is honored by both the main system and in the initrd. |
| `fstab=, rd.fstab=` | Defaults to "yes". If "no", causes the generator to ignore any mounts or swap devices configured in `/etc/fstab`. `rd.fstab=` is honored only in the initrd, while `fstab=` is honored by both the main system and the initrd. |
| `root=` | Configures the operating system's root filesystem to mount when running in the initrd. This accepts a device node path (usually `/dev/disk/by-uuid/...` or similar), or the special values `gpt-auto`, `fstab`, and `tmpfs`. Use `gpt-auto` to explicitly request automatic root file system discovery via `systemd-gpt-auto-generator`. Use `fstab` to explicitly request automatic root file system discovery via the initrd `/etc/fstab` rather than via kernel command line. Use `tmpfs` in order to mount a tmpfs file system as root file system of the OS. This is useful in combination with `mount.usr=` in order to combine a volatile root file system with a separate, immutable `/usr/` file system. Also see `systemd.volatile=` below. |
| `rootfstype=` | Takes the root filesystem type that will be passed to the mount command. `rootfstype=` is honored by the initrd. |
| `mount.usr=` | Takes the `/usr/` filesystem to be mounted by the initrd. If `mount.usrfstype=` or `mount.usrflags=` is set, then `mount.usr=` will default to the value set in `root=`. Otherwise, this parameter defaults to the `/usr/` entry found in `/etc/fstab` on the root filesystem. |
| `mount.usrfstype=` | Takes the `/usr` filesystem type that will be passed to the mount command. |
| `systemd.volatile=` | Controls whether the system shall boot up in volatile mode. |
| `systemd.swap=` | Takes a boolean argument or enables the option if specified without an argument. If disabled, causes the generator to ignore any swap devices configured in `/etc/fstab`. Defaults to enabled. |
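To check which parameters the running kernel was actually booted with, you can inspect `/proc/cmdline`:
```sh
# Show the kernel command line used for the current boot
cat /proc/cmdline
```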
## Misc
### Cause a kernel panic
To manually cause a kernel panic run:
```sh
echo c > /proc/sysrq-trigger
```

View file

@ -0,0 +1,105 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Plymouth
rev: 2024-12-20
---
# Plymouth
Plymouth is a project from Fedora providing a flicker-free graphical boot process. It relies on kernel mode setting (KMS) to set the native resolution of the display as early as possible, then provides an eye-candy splash screen leading all the way up to the login manager.
## Setup
By default, Plymouth logs the boot messages into `/var/log/boot.log`, and does not show the graphical splash screen.
- If you want to see the splash screen, append `splash` to the kernel parameters.
- If you want silent boot, append `quiet` too.
- If you want to disable the logging, append `plymouth.boot-log=/dev/null`. Alternatively, add `plymouth.nolog` which also disables console redirection.
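Taken together, a typical kernel command line for a silent, graphical boot might look like this (the `root=` value is only a placeholder for your actual root device):
```
root=/dev/sda2 rw quiet splash plymouth.nolog
```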
To start Plymouth on early boot, you must configure your initramfs generator to create images including Plymouth.
For mkinitcpio, add plymouth to the `HOOKS` array in `mkinitcpio.conf`:
```sh
# /etc/mkinitcpio.conf
HOOKS=(... plymouth ...)
```
If you are using the `systemd` hook, it must be before `plymouth`.
Furthermore, make sure you place `plymouth` before the `encrypt` or `sd-encrypt` hook if your system is encrypted with dm-crypt.
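As a sketch, a hook ordering for a dm-crypt encrypted system using the udev-based initramfs could look like this (the exact hook set depends on your setup):
```sh
# /etc/mkinitcpio.conf
HOOKS=(base udev plymouth autodetect modconf kms keyboard keymap block encrypt filesystems fsck)
```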
## Configuration
Plymouth can be configured in file `/etc/plymouth/plymouthd.conf`. You can see the default values in `/usr/share/plymouth/plymouthd.defaults`.
### Changing the theme
Plymouth comes with a selection of themes:
- BGRT: A variation of Spinner that keeps the OEM logo if available (BGRT stands for Boot Graphics Resource Table)
- Fade-in: "Simple theme that fades in and out with shimmering stars"
- Glow: "Corporate theme with pie chart boot progress followed by a glowing emerging logo"
- Script: "Script example plugin" (Despite the description seems to be a quite nice Arch logo theme)
- Solar: "Space theme with violent flaring blue star"
- Spinner: "Simple theme with a loading spinner"
- Spinfinity: "Simple theme that shows a rotating infinity sign in the center of the screen"
- Tribar: "Text mode theme with tricolor progress bar"
- (Text: "Text mode theme with tricolor progress bar")
- (Details: "Verbose fallback theme")
The theme can be changed by editing the configuration file:
```ini
# /etc/plymouth/plymouthd.conf
[Daemon]
Theme=theme
```
or by running:
```sh
plymouth-set-default-theme -R theme
```
Every time a theme is changed, the initrd must be rebuilt. The `-R` option ensures that it is rebuilt (otherwise regenerate the initramfs manually).
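If you change the theme without `-R`, the initramfs can be regenerated manually, e.g. by rebuilding all mkinitcpio presets:
```sh
mkinitcpio -P
```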
### Install new themes
All currently installed themes can be listed by using this command:
```sh
plymouth-set-default-theme -l
# or:
ls /usr/share/plymouth/themes
```
### Show delay
Plymouth has a configuration option to delay the splash screen:
```ini
# /etc/plymouth/plymouthd.conf
[Daemon]
ShowDelay=5
```
On systems that boot quickly, you may only see a flicker of your splash theme before your DM or login prompt is ready. You can set `ShowDelay` to an interval (in seconds) longer than your boot time to prevent this flicker and only show a blank screen. The default is 0 seconds, so you should not need to change this to a different value to see your splash earlier during boot.
### HiDPI
Edit the configuration file:
```ini
# /etc/plymouth/plymouthd.conf
[Daemon]
DeviceScale=an-integer-scaling-factor
```
and regenerate the initramfs.
## Misc
### Show boot messages
During boot you can switch to boot messages by pressing the `Esc` key.
### Disable with kernel parameters
If you experience problems during boot, you can temporary disable Plymouth with the following kernel parameters:
```
plymouth.enable=0 disablehooks=plymouth
```

74
technology/linux/XDG.md Normal file
View file

@ -0,0 +1,74 @@
---
obj: concept
arch-wiki: https://wiki.archlinux.org/title/XDG_user_directories
rev: 2025-01-08
---
# XDG Directories
The XDG User Directories are a standardized way to define and access common user directories in Unix-like operating systems, primarily defined by the XDG Base Directory Specification from the FreeDesktop.org project.
These directories provide users and applications with predefined paths for storing specific types of files, such as documents, downloads, music, and more. By using these directories, applications can integrate better with the operating system's file structure and provide a consistent experience for users.
## Creating default directories
Creating a full suite of localized default user directories within the `$HOME` directory can be done automatically by running:
```sh
xdg-user-dirs-update
```
> **Tip**: To force the creation of English-named directories, `LC_ALL=C.UTF-8 xdg-user-dirs-update --force` can be used.
When executed, it will also automatically:
- Create a local `~/.config/user-dirs.dirs` configuration file: used by applications to find and use home directories specific to an account.
- Create a local `~/.config/user-dirs.locale` configuration file: used to set the language according to the locale in use.
The user service `xdg-user-dirs-update.service` will also be installed and enabled by default, in order to keep your directories up to date by running this command at the beginning of each login session.
## Creating custom directories
Both the local `~/.config/user-dirs.dirs` and global `/etc/xdg/user-dirs.defaults` configuration files use the following environment variable format to point to user directories: `XDG_DIRNAME_DIR="$HOME/directory_name"`. An example configuration file may look like this (these are all the template directories):
```sh
# ~/.config/user-dirs.dirs
XDG_DESKTOP_DIR="$HOME/Desktop"
XDG_DOCUMENTS_DIR="$HOME/Documents"
XDG_DOWNLOAD_DIR="$HOME/Downloads"
XDG_MUSIC_DIR="$HOME/Music"
XDG_PICTURES_DIR="$HOME/Pictures"
XDG_PUBLICSHARE_DIR="$HOME/Public"
XDG_TEMPLATES_DIR="$HOME/Templates"
XDG_VIDEOS_DIR="$HOME/Videos"
```
As xdg-user-dirs will source the local configuration file to point to the appropriate user directories, it is possible to specify custom folders. For example, if a custom folder for the `XDG_DOWNLOAD_DIR` variable has been named `$HOME/Internet` in `~/.config/user-dirs.dirs`, any application that uses this variable will use this directory.
> **Note**: Like with many configuration files, local settings override global settings. It will also be necessary to create any new custom directories.
Alternatively, it is also possible to specify custom folders using the command line. For example, the following command will produce the same results as the above configuration file edit:
```sh
xdg-user-dirs-update --set DOWNLOAD ~/Internet
```
## Querying configured directories
Once set, any user directory can be viewed with xdg-user-dirs. For example, the following command will show the location of the Templates directory, which corresponds to the `XDG_TEMPLATES_DIR` variable in the local configuration file:
```sh
xdg-user-dir TEMPLATES
```
## Specification
Please read the full specification. This section will attempt to break down the essence of what it tries to achieve.
Only `XDG_RUNTIME_DIR` is set by default through `pam_systemd`. It is up to the user to explicitly define the other variables according to the specification.
### User directories
- `XDG_CONFIG_HOME`: Where user-specific configurations should be written (analogous to `/etc`). Should default to `$HOME/.config`.
- `XDG_CACHE_HOME`: Where user-specific non-essential (cached) data should be written (analogous to `/var/cache`). Should default to `$HOME/.cache`.
- `XDG_DATA_HOME`: Where user-specific data files should be written (analogous to `/usr/share`). Should default to `$HOME/.local/share`.
- `XDG_STATE_HOME`: Where user-specific state files should be written (analogous to `/var/lib`). Should default to `$HOME/.local/state`.
- `XDG_RUNTIME_DIR`: Used for non-essential, user-specific data files such as sockets, named pipes, etc. Not required to have a default value; warnings should be issued if it is not set or equivalents provided. Must be owned by the user with an access mode of `0700`, reside on the local filesystem and be fully featured by the standards of the OS. May be subject to periodic cleanup: to keep a file from being removed, modify it at least every 6 hours or set its sticky bit. Can only exist for the duration of the user's login. Should not store large files as it may be mounted as a tmpfs. `pam_systemd` sets this to `/run/user/$UID`.
### System directories
- `XDG_DATA_DIRS`: List of directories separated by `:` (analogous to `PATH`). Should default to `/usr/local/share:/usr/share`.
- `XDG_CONFIG_DIRS`: List of directories separated by `:` (analogous to `PATH`). Should default to `/etc/xdg`.
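Since these variables only *should* default to the listed paths, portable scripts usually fall back explicitly when a variable is unset; a minimal sketch (`myapp` is just an illustrative name):
```sh
# Resolve the user config directory, using the spec's default as fallback
config_home="${XDG_CONFIG_HOME:-$HOME/.config}"
mkdir -p "$config_home/myapp"
```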

202
technology/linux/Zram.md Normal file
View file

@ -0,0 +1,202 @@
---
obj: concept
arch-wiki: https://wiki.archlinux.org/title/Zram
source: https://docs.kernel.org/admin-guide/blockdev/zram.html
wiki: https://en.wikipedia.org/wiki/Zram
rev: 2024-12-20
---
# Zram
zram, formerly called compcache, is a Linux kernel module for creating a compressed block device in RAM, i.e. a RAM disk with on-the-fly disk compression. The block device created with zram can then be used for swap or as a general-purpose RAM disk. The two most common uses for zram are for the storage of temporary files (`/tmp`) and as a swap device. Initially, zram had only the latter function, hence the original name "compcache" ("compressed cache").
## Usage as swap
Initially the created zram block device does not reserve or use any RAM. Only as data needs to be swapped out is it compressed and moved into the zram block device. The zram block device will then dynamically grow or shrink as required.
Even assuming that zstd only achieves a conservative 1:2 compression ratio (real-world data commonly shows a ratio of 1:3), zram offers the advantage of being able to store more content in RAM than without memory compression.
### Manually
To set up one zstd compressed zram device with half the system memory capacity and a higher-than-normal priority (only for the current session):
```sh
modprobe zram
zramctl /dev/zram0 --algorithm zstd --size "$(($(grep -Po 'MemTotal:\s*\K\d+' /proc/meminfo)/2))KiB"
mkswap -U clear /dev/zram0
swapon --discard --priority 100 /dev/zram0
```
To disable it again, either reboot or run:
```sh
swapoff /dev/zram0
modprobe -r zram
echo 1 > /sys/module/zswap/parameters/enabled # re-enable zswap (only needed if it was disabled while setting up zram)
```
For a permanent solution, use a method from one of the following sections.
### Using a udev rule
The example below describes how to set up swap on zram automatically at boot with a single udev rule. No extra package should be needed to make this work.
Explicitly load the module at boot:
```ini
# /etc/modules-load.d/zram.conf
zram
```
Create the following udev rule adjusting the disksize attribute as necessary:
```
# /etc/udev/rules.d/99-zram.rules
ACTION=="add", KERNEL=="zram0", ATTR{comp_algorithm}="zstd", ATTR{disksize}="4G", RUN="/usr/bin/mkswap -U clear /dev/%k", TAG+="systemd"
```
Add `/dev/zram` to your fstab with a higher than default priority:
```
# /etc/fstab
/dev/zram0 none swap defaults,discard,pri=100 0 0
```
### Using zram-generator
`zram-generator` provides `systemd-zram-setup@zramN.service` units to automatically initialize zram devices without users needing to enable/start the template or its instances.
To use it, install `zram-generator`, and create `/etc/systemd/zram-generator.conf` with the following:
```ini
# /etc/systemd/zram-generator.conf
[zram0]
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
```
`zram-size` is the size (in MiB) of the zram device; `ram` can be used in the expression to represent the total memory.
`compression-algorithm` specifies the algorithm used to compress data in the zram device.
`cat /sys/block/zram0/comp_algorithm` lists the available compression algorithms (the current one is shown in brackets).
Then run `systemctl daemon-reload` and start your configured `systemd-zram-setup@zramN.service` instance (`N` matching the numerical instance-ID; in the example it is `systemd-zram-setup@zram0.service`).
You can check the swap status of your configured `/dev/zramN` device(s) by reading the unit status of your `systemd-zram-setup@zramN.service` instance(s), by using `zramctl`, or by using `swapon`.
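For example, with the `zram0` instance configured above:
```sh
systemctl status systemd-zram-setup@zram0.service
zramctl
swapon --show
```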
## zramctl
zramctl is used to quickly set up zram device parameters, to reset zram devices, and to query the status of used zram devices.
Usage:
```sh
# Get info:
# If no option is given, all non-zero size zram devices are shown.
zramctl [options]
# Reset zram:
zramctl -r zramdev...
# Print name of first unused zram device:
zramctl -f
# Set up a zram device:
zramctl [-f | zramdev] [-s size] [-t number] [-a algorithm]
```
### Options
| Option | Description |
| ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-a, --algorithm lzo/lz4/lz4hc/deflate/842/zstd` | Set the compression algorithm to be used for compressing data in the zram device. The list of supported algorithms may be inaccurate, as it depends on the current kernel configuration. A basic overview can be obtained with `cat /sys/block/zram0/comp_algorithm`. |
| `-f, --find` | Find the first unused zram device. If a `--size` argument is present, then initialize the device. |
| `-n, --noheadings` | Do not print a header line in status output. |
| `-o, --output list` | Define the status output columns to be used. If no output arrangement is specified, then a default set is used. See below for list of all supported columns. |
| `--output-all` | Output all available columns. |
| `--raw` | Use the raw format for status output. |
| `-r, --reset` | Reset the options of the specified zram device(s). Zram device settings can be changed only after a reset. |
| `-s, --size size` | Create a zram device of the specified size. Zram devices are aligned to memory pages; when the requested size is not a multiple of the page size, it will be rounded up to the next multiple. When not otherwise specified, the unit of the size parameter is bytes. |
| `-t, --streams number` | Set the maximum number of compression streams that can be used for the device. The default is to use all CPUs; kernels older than 4.6 use one stream. |
### Output Columns
| Output | Description |
| ------------ | ------------------------------------------------------------------ |
| `NAME` | zram device name |
| `DISKSIZE` | limit on the uncompressed amount of data |
| `DATA` | uncompressed size of stored data |
| `COMPR` | compressed size of stored data |
| `ALGORITHM` | the selected compression algorithm |
| `STREAMS` | number of concurrent compress operations |
| `ZERO-PAGES` | empty pages with no allocated memory |
| `TOTAL` | all memory including allocator fragmentation and metadata overhead |
| `MEM-LIMIT` | memory limit used to store compressed data |
| `MEM-USED` | memory zram has consumed to store compressed data |
| `MIGRATED` | number of objects migrated by compaction |
| `COMP-RATIO` | compression ratio: DATA/TOTAL |
| `MOUNTPOINT` | where the device is mounted |
## Misc
### Checking zram statistics
Use zramctl. Example:
```
$ zramctl
NAME       ALGORITHM DISKSIZE  DATA COMPR  TOTAL STREAMS MOUNTPOINT
/dev/zram0 zstd           32G  1.9G 318.6M 424.9M      16 [SWAP]
```
- `DISKSIZE = 32G`: this zram device will store up to 32 GiB of uncompressed data.
- `DATA = 1.9G`: currently, 1.9 GiB (uncompressed) of data is being stored in this zram device.
- `COMPR = 318.6M`: the 1.9 GiB of uncompressed data was compressed to 318.6 MiB.
- `TOTAL = 424.9M`: including metadata, the 1.9 GiB of uncompressed data is using up 424.9 MiB of physical RAM.
### Multiple zram devices
By default, loading the zram module creates a single `/dev/zram0` device.
If you need more than one `/dev/zram` device, specify the amount using the `num_devices` kernel module parameter or add them as needed afterwards.
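Both approaches are sketched below; reading `hot_add` allocates a new device and prints its id:
```sh
# At module load time:
modprobe zram num_devices=2
# Or add another device to an already loaded module:
cat /sys/class/zram-control/hot_add
```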
### Optimizing swap on zram
Since zram behaves differently than disk swap, we can tune the system's swap settings to take full advantage of zram:
```ini
# /etc/sysctl.d/99-vm-zram-parameters.conf
vm.swappiness = 180
vm.watermark_boost_factor = 0
vm.watermark_scale_factor = 125
vm.page-cluster = 0
```
### Enabling a backing device for a zram block
zram can be configured to push incompressible pages to a specified block device when under memory pressure.
To add a backing device manually:
```sh
echo /dev/sdX > /sys/block/zram0/backing_dev
```
To add a backing device to your zram block device using `zram-generator`, update `/etc/systemd/zram-generator.conf` with the following under your `[zramX]` device you want the backing device added to:
```ini
# /etc/systemd/zram-generator.conf
writeback-device=/dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
```
### Using zram for non-swap purposes
zram can also be used as a generic RAM-backed block device, e.g. a `/dev/ram` with less physical memory usage, but slightly lower performance. However, there are some caveats:
- There is no partition table support (no automatic creation of `/dev/zramXpY`).
- The block size is fixed to 4 KiB.
The obvious way around this is to stack a loop device on top of the zram device, using [losetup](../applications/cli/system/losetup.md), specifying the desired block size with the `-b` option and the `-P` option to process partition tables and automatically create the partition loop devices.
```sh
zramctl -f -s <SIZE>G
```
Copy the disk image to the new `/dev/zramX`, then create a loop device. If the disk image has a partition table, the block size of the loop device must match the block size used by the partition table, which is typically 512 or 4096 bytes.
```sh
losetup -f -b 512 -P /dev/zramx
mount /dev/loop0p1 /mnt/boot
mount /dev/loop0p2 /mnt/root
```

426
technology/linux/archiso.md Normal file
View file

@ -0,0 +1,426 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Archiso
repo: https://gitlab.archlinux.org/archlinux/archiso
rev: 2024-12-17
---
# archiso
Archiso is a highly-customizable tool for building Arch Linux live CD/USB ISO images. The official images are built with archiso. It can be used as the basis for rescue systems, Linux installers or other systems. This article explains how to install archiso and how to configure it to control aspects of the resulting ISO image, such as included packages and files. Technical requirements and build steps can be found in the official project documentation. Archiso is implemented with a number of bash scripts. The core component of archiso is the `mkarchiso` command. Its options are documented in `mkarchiso -h` and not covered here.
## Prepare a custom profile
Archiso comes with two profiles, `releng` and `baseline`.
- `releng` is used to create the official monthly installation ISO. It can be used as a starting point for creating a customized ISO image.
- `baseline` is a minimal configuration that includes only the bare minimum packages required to boot the live environment from the medium.
If you wish to adapt or customize one of archiso's shipped profiles, copy it from `/usr/share/archiso/configs/profile-name/` to a writable directory with a name of your choice. For example:
```sh
cp -r /usr/share/archiso/configs/releng/ archlive
```
## Profile structure
An archiso profile contains configuration that defines the resulting ISO image. The profile structure is documented in `/usr/share/doc/archiso/README.profile.rst`.
An archiso profile consists of several configuration files and a directory for files to be added to the resulting image.
```
profile/
├── airootfs/
├── efiboot/
├── syslinux/
├── grub/
├── bootstrap_packages.arch
├── packages.arch
├── pacman.conf
└── profiledef.sh
```
The required files and directories are explained in the following sections.
### profiledef.sh
This file describes several attributes of the resulting image and is a place for customization to the general behavior of the image.
The image file is constructed from some of the variables in ``profiledef.sh``: ``<iso_name>-<iso_version>-<arch>.iso``
(e.g. ``archlinux-202010-x86_64.iso``).
* ``iso_name``: The first part of the name of the resulting image (defaults to ``mkarchiso``)
* ``iso_label``: The ISO's volume label (defaults to ``MKARCHISO``)
* ``iso_publisher``: A free-form string that states the publisher of the resulting image (defaults to ``mkarchiso``)
* ``iso_application``: A free-form string that states the application (i.e. its use-case) of the resulting image (defaults
to ``mkarchiso iso``)
* ``iso_version``: A string that states the version of the resulting image (defaults to ``""``)
* ``install_dir``: A string (maximum eight characters long, which **must** consist of ``[a-z0-9]``) that states the
directory on the resulting image into which all files will be installed (defaults to ``mkarchiso``)
* ``buildmodes``: An optional list of strings, that state the build modes that the profile uses. Only the following are
understood:
- ``bootstrap``: Build a compressed file containing a minimal system to bootstrap from
- ``iso``: Build a bootable ISO image (implicit default, if no ``buildmodes`` are set)
- ``netboot``: Build artifacts required for netboot using iPXE
* ``bootmodes``: A list of strings, that state the supported boot modes of the resulting image. Only the following are
understood:
- ``bios.syslinux.mbr``: Syslinux for x86 BIOS booting from a disk
- ``bios.syslinux.eltorito``: Syslinux for x86 BIOS booting from an optical disc
- ``uefi-ia32.grub.esp``: GRUB for IA32 UEFI booting from a disk
- ``uefi-ia32.grub.eltorito``: GRUB for IA32 UEFI booting from an optical disc
- ``uefi-x64.grub.esp``: GRUB for x64 UEFI booting from a disk
- ``uefi-x64.grub.eltorito``: GRUB for x64 UEFI booting from an optical disc
- ``uefi-ia32.systemd-boot.esp``: systemd-boot for IA32 UEFI booting from a disk
- ``uefi-ia32.systemd-boot.eltorito``: systemd-boot for IA32 UEFI booting from an optical disc
- ``uefi-x64.systemd-boot.esp``: systemd-boot for x64 UEFI booting from a disk
- ``uefi-x64.systemd-boot.eltorito``: systemd-boot for x64 UEFI booting from an optical disc
Note that BIOS El Torito boot mode must always be listed before UEFI El Torito boot mode.
* ``arch``: The architecture (e.g. ``x86_64``) to build the image for. This is also used to resolve the name of the packages
file (e.g. ``packages.x86_64``)
* ``pacman_conf``: The ``pacman.conf`` to use to install packages to the work directory when creating the image (defaults to
the host's ``/etc/pacman.conf``)
* ``airootfs_image_type``: The image type to create. The following options are understood (defaults to ``squashfs``):
- ``squashfs``: Create a squashfs image directly from the airootfs work directory
- ``ext4+squashfs``: Create an ext4 partition, copy the airootfs work directory to it and create a squashfs image from it
- ``erofs``: Create an EROFS image for the airootfs work directory
* ``airootfs_image_tool_options``: An array of options to pass to the tool to create the airootfs image. ``mksquashfs`` and
``mkfs.erofs`` are supported. See ``mksquashfs --help`` or ``mkfs.erofs --help`` for all possible options
* ``bootstrap_tarball_compression``: An array containing the compression program and arguments passed to it for
compressing the bootstrap tarball (defaults to ``cat``). For example: ``bootstrap_tarball_compression=(zstd -c -T0 --long -19)``.
* ``file_permissions``: An associative array that lists files and/or directories who need specific ownership or
permissions. The array's keys contain the path and the value is a colon separated list of owner UID, owner GID and
access mode. E.g. ``file_permissions=(["/etc/shadow"]="0:0:400")``. When directories are listed with a trailing slash (``/``) **all** files and directories contained within the listed directory will have the same owner UID, owner GID, and access mode applied recursively.
### bootstrap_packages.arch
All packages to be installed into the environment of a bootstrap image have to be listed in an architecture specific file (e.g. ``bootstrap_packages.x86_64``), which resides top-level in the profile.
Packages have to be listed one per line. Lines starting with a ``#`` and blank lines are ignored.
This file is required when generating bootstrap images using the ``bootstrap`` build mode.
### packages.arch
All packages to be installed into the environment of an ISO image have to be listed in an architecture specific file (e.g. ``packages.x86_64``), which resides top-level in the profile.
Packages have to be listed one per line. Lines starting with a ``#`` and blank lines are ignored.
This file is required when generating ISO images using the ``iso`` or ``netboot`` build modes.
### pacman.conf
A configuration for pacman is required per profile.
Some configuration options will not be used or will be modified:
* ``CacheDir``: the profile's option is **only** used if it is not the default (i.e. ``/var/cache/pacman/pkg``) and if it is
not the same as the system's option. In all other cases the system's pacman cache is used.
* ``HookDir``: it is **always** set to the ``/etc/pacman.d/hooks`` directory in the work directory's airootfs to allow
  modification via the profile and ensure interoperability with hosts using dracut
* ``RootDir``: it is **always** removed, as setting it explicitly otherwise refers to the host's root filesystem (see
  ``man 8 pacman`` for further information on the ``-r`` option used by ``pacstrap``)
* ``LogFile``: it is **always** removed, as setting it explicitly otherwise refers to the host's pacman log file (see
  ``man 8 pacman`` for further information on the ``-r`` option used by ``pacstrap``)
* ``DBPath``: it is **always** removed, as setting it explicitly otherwise refers to the host's pacman database (see
  ``man 8 pacman`` for further information on the ``-r`` option used by ``pacstrap``)
### airootfs
This optional directory may contain files and directories that will be copied to the work directory of the resulting image's root filesystem.
The files are copied before packages are being installed to work directory location.
Ownership and permissions of files and directories from the profile's ``airootfs`` directory are not preserved. The mode will be ``644`` for files and ``755`` for directories, all of them will be owned by root. To set custom ownership and/or permissions, use ``file_permissions`` in ``profiledef.sh``.
With this overlay structure it is possible to e.g. create users and set passwords for them, by providing ``airootfs/etc/passwd``, ``airootfs/etc/shadow``, ``airootfs/etc/gshadow`` (see ``man 5 passwd``, ``man 5 shadow`` and ``man 5 gshadow`` respectively).
If user home directories exist in the profile's ``airootfs``, their ownership and (top-level) permissions will be altered according to the information provided in the password file.
### Boot loader configuration
A profile may contain configuration for several boot loaders. These reside in specific top-level directories, which are explained in the following subsections.
The following *custom template identifiers* are understood and will be replaced according to the assignments of the respective variables in ``profiledef.sh``:
* ``%ARCHISO_LABEL%``: Set this using the ``iso_label`` variable in ``profiledef.sh``.
* ``%INSTALL_DIR%``: Set this using the ``install_dir`` variable in ``profiledef.sh``.
* ``%ARCH%``: Set this using the ``arch`` variable in ``profiledef.sh``.
Additionally, there are *custom template identifiers* that have hardcoded values set by ``mkarchiso`` and cannot be overridden:
* ``%ARCHISO_UUID%``: the ISO 9660 modification date in UTC, i.e. its "UUID",
* ``%ARCHISO_SEARCH_FILENAME%``: file path on ISO 9660 that can be used by GRUB to find the ISO volume
(**for GRUB ``.cfg`` files only**).
### efiboot
This directory is mandatory when the ``uefi-x64.systemd-boot.esp`` or ``uefi-x64.systemd-boot.eltorito`` bootmodes are selected in ``profiledef.sh``. It contains configuration for `systemd-boot`.
> **Note:** The directory is a top-level representation of the systemd-boot configuration directories and files found in the root of an EFI system partition.
The *custom template identifiers* are **only** understood in the boot loader entry `.conf` files (i.e. **not** in ``loader.conf``).
### syslinux
This directory is mandatory when the ``bios.syslinux.mbr`` or the ``bios.syslinux.eltorito`` bootmodes are selected in ``profiledef.sh``.
It contains configuration files for `syslinux`, `isolinux`, or `pxelinux` used in the resulting image.
The *custom template identifiers* are understood in all `.cfg` files in this directory.
### grub
This directory is mandatory when any of the following bootmodes is used in ``profiledef.sh``:
- ``uefi-ia32.grub.esp`` or
- ``uefi-ia32.grub.eltorito`` or
- ``uefi-x64.grub.esp`` or
- ``uefi-x64.grub.eltorito``
It contains configuration files for `GRUB` used in the resulting image.
## Customization
### Selecting packages
Edit `packages.x86_64` to select which packages are to be installed on the live system image, listing packages line by line.
### Custom local repository
To add packages not located in standard Arch repositories (e.g. packages from the AUR or customized with the ABS), set up a custom local repository and add your custom packages to it. Then add your repository to `pacman.conf` as follows:
```ini
[customrepo]
SigLevel = Optional TrustAll
Server = file:///path/to/customrepo
```
> **Note**: The ordering within `pacman.conf` matters. To give top priority to your custom repository, place it above the other repository entries.
> This `pacman.conf` is only used for building the image. It will not be used in the live environment.
> Ensure that the repository is located in a directory accessible by the chrooted mkarchiso process, such as `/tmp`, to ensure the repository is read correctly during the image building process.
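The local repository itself can be created with `repo-add` from pacman; a minimal sketch, assuming your built packages already sit in `/tmp/customrepo`:
```sh
cd /tmp/customrepo
repo-add customrepo.db.tar.gz *.pkg.tar.zst
```
The database file name must match the repository name used in `pacman.conf` (here `customrepo`).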
### Packages from multilib
To install packages from the multilib repository, simply uncomment that repository in `pacman.conf`.
### Adding files to image
The `airootfs` directory is used as the starting point for the root directory (`/`) of the live system on the image. All its contents will be copied over to the working directory before packages are installed.
Place any custom files and/or directories in the desired location under `airootfs/`. For example, if you have a set of iptables scripts on your current system you want to be used on your live image, copy them over as such:
```sh
cp -r /etc/iptables archlive/airootfs/etc
```
Similarly, some care is required for special configuration files that reside somewhere down the hierarchy. Missing parts of the directory structure can be simply created with `mkdir`.
> **Tip**: To add a file to the install user's home directory, place it in `archlive/airootfs/root/`. To add a file to all other users' home directories, place it in `archlive/airootfs/etc/skel/`.
> **Note**: Custom files that conflict with those provided by packages will be overwritten unless a package specifies them as backup files.
By default, permissions will be 644 for files and 755 for directories. All of them will be owned by the root user. To set different permissions or ownership for specific files and/or folders, use the `file_permissions` associative array in `profiledef.sh`.
### Adding repositories to the image
To add a repository that can be used in the live environment, create a suitably modified `pacman.conf` and place it in `archlive/airootfs/etc/`.
If the repository also uses a key, place the key in `archlive/airootfs/usr/share/pacman/keyrings/`. The key file name must end with `.gpg`. Additionally, the key must be trusted. This can be accomplished by creating a GnuPG exported trust file in the same directory. The file name must end with `-trusted`. The first field is the key fingerprint, and the second is the trust. You can reference `/usr/share/pacman/keyrings/archlinux-trusted` for an example.
#### archzfs example
The files in this example are:
```
airootfs
├── etc
│ ├── pacman.conf
│ └── pacman.d
│ └── archzfs_mirrorlist
└── usr
└── share
└── pacman
└── keyrings
├── archzfs.gpg
└── archzfs-trusted
```
`airootfs/etc/pacman.conf`:
```ini
[archzfs]
Include = /etc/pacman.d/archzfs_mirrorlist
```
`airootfs/etc/pacman.d/archzfs_mirrorlist`:
```
Server = https://archzfs.com/$repo/$arch
Server = https://mirror.sum7.eu/archlinux/archzfs/$repo/$arch
Server = https://mirror.biocrafting.net/archlinux/archzfs/$repo/$arch
Server = https://mirror.in.themindsmaze.com/archzfs/$repo/$arch
Server = https://zxcvfdsa.com/archzfs/$repo/$arch
```
`airootfs/usr/share/pacman/keyrings/archzfs-trusted`:
```
DDF7DB817396A49B2A2723F7403BD972F75D9D76:4:
```
`archzfs.gpg` itself can be obtained directly from the repository site at https://archzfs.com/archzfs.gpg.
### Kernel
Although both archiso's included profiles only have linux, ISOs can be made to include other or even multiple kernels.
First, edit `packages.x86_64` to include the kernel package names that you want. When mkarchiso runs, it will include all `work_dir/airootfs/boot/vmlinuz-*` and `work_dir/airootfs/boot/initramfs-*.img` files in the ISO (and additionally in the FAT image used for UEFI booting).
mkinitcpio presets by default will build fallback initramfs images. For an ISO, the main initramfs image would not typically include the autodetect hook, making an additional fallback image unnecessary. To prevent the creation of a fallback initramfs image, so that it does not take up space or slow down the build process, place a custom preset in `archlive/airootfs/etc/mkinitcpio.d/pkgbase.preset`. For example, for linux-lts:
`archlive/airootfs/etc/mkinitcpio.d/linux-lts.preset`:
```
PRESETS=('archiso')
ALL_kver='/boot/vmlinuz-linux-lts'
ALL_config='/etc/mkinitcpio.conf'
archiso_image="/boot/initramfs-linux-lts.img"
```
Finally create boot loader configuration to allow booting the kernel(s).
### Boot loader
Archiso supports syslinux for BIOS booting and GRUB or systemd-boot for UEFI booting. Refer to the articles of the boot loaders for information on their configuration syntax.
mkarchiso expects that GRUB configuration is in the `grub` directory, systemd-boot configuration is in the `efiboot` directory and syslinux configuration in the `syslinux` directory.
### UEFI Secure Boot
If you want to make your archiso bootable on a UEFI Secure Boot enabled environment, you must use a signed boot loader.
### systemd units
To enable systemd services/sockets/timers for the live environment, you need to manually create the symbolic links just as `systemctl enable` does it.
For example, to enable `gpm.service`, which contains `WantedBy=multi-user.target`, run:
```sh
mkdir -p archlive/airootfs/etc/systemd/system/multi-user.target.wants
ln -s /usr/lib/systemd/system/gpm.service archlive/airootfs/etc/systemd/system/multi-user.target.wants/
```
The required symlinks can be found out by reading the systemd unit, or if you have the service installed, by enabling it and observing the systemctl output.
### Login manager
Starting X at boot is done by enabling your login manager's systemd service. If you do not know which `.service` to enable, you can easily find out if you are using the same display manager on the system you build your ISO on. Just use:
```sh
ls -l /etc/systemd/system/display-manager.service
```
Now create the same symlink in `archlive/airootfs/etc/systemd/system/`.
### Changing automatic login
The configuration for getty's automatic login is located under `airootfs/etc/systemd/system/getty@tty1.service.d/autologin.conf`.
You can modify this file to change the auto login user:
```ini
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin username --noclear %I 38400 linux
```
Or remove `autologin.conf` altogether to disable auto login.
If you are using the serial console, create `airootfs/etc/systemd/system/serial-getty@ttyS0.service.d/autologin.conf` with the following content instead:
```ini
[Service]
ExecStart=
ExecStart=-/sbin/agetty -o '-p -- \\u' --noclear --autologin root --keep-baud 115200,57600,38400,9600 - $TERM
```
### Users and passwords
To create a user which will be available in the live environment, you must manually edit `archlive/airootfs/etc/passwd`, `archlive/airootfs/etc/shadow`, `archlive/airootfs/etc/group` and `archlive/airootfs/etc/gshadow`.
> **Note**: If these files exist, they must contain the root user and group.
For example, to add a user `archie`, add them to `archlive/airootfs/etc/passwd` following the passwd syntax:
```
root:x:0:0:root:/root:/usr/bin/zsh
archie:x:1000:1000::/home/archie:/usr/bin/zsh
```
> **Note**: The passwd file must end with a newline.
Add the user to `archlive/airootfs/etc/shadow` following the syntax of shadow. If you want to define a password for the user, generate a password hash with `openssl passwd -6` and add it to the file. For example:
```
root::14871::::::
archie:$6$randomsalt$cij4/pJREFQV/NgAgh9YyBIoCRRNq2jp5l8lbnE5aLggJnzIRmNVlogAg8N6hEEecLwXHtMQIl2NX2HlDqhCU1:14871::::::
```
Otherwise, you may keep the password field empty, meaning that the user can log in with no password.
Add the user's group and the groups which they will be part of to `archlive/airootfs/etc/group` according to the group syntax. For example:
```
root:x:0:root
adm:x:4:archie
wheel:x:10:archie
uucp:x:14:archie
archie:x:1000:
```
Create the appropriate `archlive/airootfs/etc/gshadow` according to gshadow:
```
root:!*::root
archie:!*::
```
Make sure `/etc/shadow` and `/etc/gshadow` have the correct permissions:
`archlive/profiledef.sh`:
```
file_permissions=(
...
["/etc/shadow"]="0:0:0400"
["/etc/gshadow"]="0:0:0400"
)
```
After package installation, mkarchiso will create all specified home directories for users listed in `archlive/airootfs/etc/passwd` and copy `work_directory/x86_64/airootfs/etc/skel/*` to them. The copied files will have proper user and group ownership.
### Changing the distribution name used in the ISO
Start by copying the file `/etc/os-release` into the `etc/` folder in the rootfs. Then, edit the file accordingly. You can also change the name inside of GRUB and syslinux.
### Adjusting the size of the root file system
When installing packages in the live environment, for example on hardware requiring DKMS modules, the default size of the root file system might not allow the download and installation of such packages.
To adjust the size on the fly:
```sh
mount -o remount,size=SIZE /run/archiso/cowspace
```
To adjust the size at the boot loader stage (on the kernel command line, reached by pressing `e` or `Tab`), use the boot option:
```sh
cow_spacesize=SIZE
```
To adjust the size while building an image add the boot option to:
- `efiboot/loader/entries/*.cfg`
- `grub/*.cfg`
- `syslinux/*.cfg`
## Build the ISO
Build an ISO which you can then burn to CD or USB by running:
```sh
mkarchiso -v -w /path/to/work_dir -o /path/to/out_dir /path/to/profile/
```
Replace `/path/to/profile/` with the path to your custom profile, or with `/usr/share/archiso/configs/releng/` if you are building an unmodified profile.
When run, the script will download and install the packages you specified to `work_directory/x86_64/airootfs`, create the kernel and init images, apply your customizations and finally build the ISO into the output directory.
> **Tip**: If memory allows, it is preferred to place the working directory on `tmpfs`.
```sh
mkdir ./work
mount -t tmpfs -o size=1G tmpfs ./work
mkarchiso -v -w ./work -o /path/to/out_dir /path/to/profile/
umount -r ./work
```
### Removal of work directory
> **Warning**: If mkarchiso is interrupted, run `findmnt` to make sure there are no mount binds before deleting it - otherwise, you may lose data (e.g. an external device mounted at `/run/media/user/label` gets bound within `work/x86_64/airootfs/run/media/user/label` during the build process).
The temporary files are copied into the work directory. After successfully building the ISO, the work directory and its contents can be deleted. E.g.:
```sh
rm -rf /path/to/work_dir
```

View file

@ -0,0 +1,61 @@
---
obj: filesystem
---
# Ceph
#wip
Ceph is a distributed storage system providing Object, Block and Filesystem Storage.
## Concepts
- Monitors: A Ceph Monitor (`ceph-mon`) maintains maps of the cluster state, including the monitor map, manager map, the OSD map, the MDS map, and the CRUSH map. These maps are critical cluster state required for Ceph daemons to coordinate with each other. Monitors are also responsible for managing authentication between daemons and clients. At least three monitors are normally required for redundancy and high availability.
- Managers: A Ceph Manager daemon (`ceph-mgr`) is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. The Ceph Manager daemons also host python-based modules to manage and expose Ceph cluster information, including a web-based Ceph Dashboard and REST API. At least two managers are normally required for high availability.
- Ceph OSDs: An Object Storage Daemon (Ceph OSD, `ceph-osd`) stores data, handles data replication, recovery, rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD Daemons for a heartbeat. At least three Ceph OSDs are normally required for redundancy and high availability.
- MDSs: A Ceph Metadata Server (MDS, `ceph-mds`) stores metadata for the Ceph File System. Ceph Metadata Servers allow CephFS users to run basic commands (like ls, find, etc.) without placing a burden on the Ceph Storage Cluster.
Ceph stores data as objects within logical storage pools. Using the CRUSH algorithm, Ceph calculates which placement group (PG) should contain the object, and which OSD should store the placement group. The CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and recover dynamically.
## Setup
Cephadm creates a new Ceph cluster by bootstrapping a single host, expanding the cluster to encompass any additional hosts, and then deploying the needed services.
Run the ceph bootstrap command with the IP of the first cluster host:
```
cephadm bootstrap --mon-ip <mon-ip>
```
This command will:
- Create a Monitor and a Manager daemon for the new cluster on the local host.
- Generate a new SSH key for the Ceph cluster and add it to the root user's `/root/.ssh/authorized_keys` file.
- Write a copy of the public key to `/etc/ceph/ceph.pub`.
- Write a minimal configuration file to `/etc/ceph/ceph.conf`. This file is needed to communicate with Ceph daemons.
- Write a copy of the `client.admin` administrative (privileged!) secret key to `/etc/ceph/ceph.client.admin.keyring`.
- Add the `_admin` label to the bootstrap host. By default, any host with this label will (also) get a copy of `/etc/ceph/ceph.conf` and `/etc/ceph/ceph.client.admin.keyring`.
### Ceph CLI
The `cephadm shell` command launches a bash shell in a container with all of the Ceph packages installed. By default, if configuration and keyring files are found in `/etc/ceph` on the host, they are passed into the container environment so that the shell is fully functional. Note that when executed on a MON host, cephadm shell will infer the config from the MON container instead of using the default configuration. If `--mount <path>` is given, then the host `<path>` (file or directory) will appear under `/mnt` inside the container:
```shell
cephadm shell
```
To execute ceph commands, you can also run commands like this:
```shell
cephadm shell -- ceph -s
```
You can install the ceph-common package, which contains all of the ceph commands, including ceph, rbd, mount.ceph (for mounting CephFS file systems), etc.:
```shell
cephadm add-repo --release reef
cephadm install ceph-common
```
Confirm that the ceph command is accessible with:
```shell
ceph -v
ceph status
```
## Host Management
#todo -> https://docs.ceph.com/en/latest/cephadm/host-management/

View file

@ -14,9 +14,12 @@ obj: meta/collection
- [MergerFS](MergerFS.md)
- [LVM](./LVM.md)
- [LUKS](./LUKS.md)
- [tmpFS](./tmpFS.md)
- [overlayfs](./overlayfs.md)
## Network
- [SSHFS](SSHFS.md)
- [NFS](NFS.md)
### FreeBSD
- [gpart](../../bsd/gpart.md)

View file

@ -0,0 +1,56 @@
---
obj: filesystem
arch-wiki: https://wiki.archlinux.org/title/NFS
wiki: https://en.wikipedia.org/wiki/Network_File_System
rfc: https://datatracker.ietf.org/doc/html/rfc3530
rev: 2024-10-21
---
# NFS
**Network File System (NFS)** is a distributed file system protocol that allows a user to access files over a network much like accessing local storage. **NFSv4**, the latest version of the protocol, offers several improvements over its predecessors, including better performance, security, and management features.
## Server Setup
Install the `nfs-utils` package and activate the `nfs-server.service` unit.
### Configuration
To export a filesystem, add it to `/etc/exports`:
```
/directory client_ip(options,...)
```
Example:
```
/srv/nfs 192.168.1.0/24(rw,sync,no_subtree_check)
```
Then reexport everything in `/etc/exports`:
```bash
sudo exportfs -ra
```
#### Export Options
In `/etc/exports`, various options can be specified to control access permissions and behavior for the exported filesystems. Here are some of the most common options:
- `rw`: Allows the client to read and write to the shared directory.
- `ro`: Read-only access. The client can only read data from the shared directory.
- `sync`: Ensures data is written to disk before replying to the client. This option improves data safety at the cost of performance.
- `async`: Opposite of sync. The server does not wait for data to be written to disk before responding to the client. This improves performance but may lead to data loss in case of a crash.
- `no_root_squash`: By default, NFS maps requests from the root user (uid=0) on the client to the nobody user (uid=65534) on the server for security reasons. With `no_root_squash`, root on the client retains its root privileges on the server.
- `root_squash`: This is the default behavior. It maps requests from root users to the nobody user, which helps to avoid security risks.
- `all_squash`: Maps all user requests to the nobody user, regardless of their identity on the client. This can be useful for environments where access control is strictly managed.
- `anonuid`: Sets the UID of the anonymous user. This option is used with all_squash or root_squash to specify a different UID than nobody.
- `anongid`: Sets the GID of the anonymous user, similar to `anonuid`.
- `no_subtree_check`: Disables subtree checking. NFS verifies whether the file resides within the exported tree. Disabling this option can improve performance, but at a potential security cost.
- `subtree_check`: Ensures that the client only accesses the files within the exact subtree they are allowed to. This is the default behavior.
- `insecure`: Allows clients to connect from non-privileged ports (i.e., ports higher than 1024).
- `secure`: Ensures clients use privileged ports to connect (ports below 1024). This is the default option.
- `crossmnt`: Allows the NFS server to cross mount points when a filesystem is exported. Useful for when the exported directory has multiple submounts (e.g., logical volumes).
- `fsid`: Useful when exporting multiple filesystems. Assigns a unique filesystem identifier to the export.
- `nohide`: This option allows clients to access filesystems that are mounted on subdirectories of an exported directory.
- `hide`: This option hides filesystems mounted under the export directory. This is the default behavior.
## Usage
**Mount the NFS Share**:
```bash
mount -t nfs4 192.168.1.10:/srv/nfs /mnt
```
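To mount the share automatically at boot, an `/etc/fstab` entry can be used instead (a sketch matching the example server above):
```
# /etc/fstab
192.168.1.10:/srv/nfs  /mnt  nfs4  defaults  0 0
```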

View file

@ -0,0 +1,60 @@
---
obj: filesystem
arch-wiki: https://wiki.archlinux.org/title/Overlay_filesystem
source: https://docs.kernel.org/filesystems/overlayfs.html
wiki: https://en.wikipedia.org/wiki/OverlayFS
rev: 2024-12-19
---
# OverlayFS
Overlayfs allows one, usually read-write, directory tree to be overlaid onto another, read-only directory tree. All modifications go to the upper, writable layer. This type of mechanism is most often used for live CDs but there is a wide variety of other uses.
The implementation differs from other "union filesystem" implementations in that after a file is opened all operations go directly to the underlying, lower or upper, filesystems. This simplifies the implementation and allows native performance in these cases.
## Usage
To mount an overlay use the following mount options:
```sh
mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged
```
> **Note**:
> - The working directory (`workdir`) needs to be an empty directory on the same filesystem as the upper directory.
> - The lower directory can be read-only or could be an overlay itself.
> - The upper directory is normally writable.
> - The workdir is used to prepare files as they are switched between the layers.
The lower directory can actually be a list of directories separated by `:`; all changes in the merged directory are still reflected in the upper directory.
### Read-only overlay
Sometimes, it is only desired to create a read-only view of the combination of two or more directories. In that case, it can be created in an easier manner, as the directories `upper` and `work` are not required:
```sh
mount -t overlay overlay -o lowerdir=/lower1:/lower2 /merged
```
When `upperdir` is not specified, the overlay is automatically mounted as read-only.
## Example
```sh
mount -t overlay overlay -o lowerdir=/lower1:/lower2:/lower3,upperdir=/upper,workdir=/work /merged
```
> **Note**: The order of lower directories is rightmost-lowest: the upper directory is on top of the first directory in the left-to-right list of lower directories, NOT on top of the last directory in the list, as the order might seem to suggest.
The above example will have the order:
- /upper
- /lower1
- /lower2
- /lower3
To add an overlayfs entry to `/etc/fstab` use the following format:
```
# /etc/fstab
overlay /merged overlay noauto,x-systemd.automount,lowerdir=/lower,upperdir=/upper,workdir=/work 0 0
```
The `noauto` and `x-systemd.automount` mount options are necessary to prevent systemd from hanging on boot because it failed to mount the overlay. The overlay is now mounted whenever it is first accessed and requests are buffered until it is ready.

View file

@ -0,0 +1,30 @@
---
obj: filesystem
wiki: https://en.wikipedia.org/wiki/Tmpfs
arch-wiki: https://wiki.archlinux.org/title/Tmpfs
---
# tmpFS
tmpfs is a temporary filesystem that resides in memory and/or swap partition(s). Mounting directories as tmpfs can be an effective way of speeding up accesses to their files, or to ensure that their contents are automatically cleared upon reboot.
## Usage
**Create a tmpfs**:
`mount -t tmpfs -o [OPTIONS] tmpfs [MOUNT_POINT]`
**Resize a tmpfs**:
`mount -t tmpfs -o remount,size=<NEW_SIZE> tmpfs [MOUNT_POINT]`
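For example, to mount a 2 GiB tmpfs with world-writable, sticky permissions (path and size are illustrative):
```sh
mount -t tmpfs -o size=2G,mode=1777 tmpfs /mnt/scratch
```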
### Options
| **Option** | **Description** |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `size=bytes` | Specify an upper limit on the size of the filesystem. Size is given in bytes, rounded up to entire pages. A `k`, `m`, or `g` suffix can be used for Ki, Mi, or Gi. Use `%` to specify a percentage of physical RAM. Default: 50%. Set to `0` to remove the limit. |
| `nr_blocks=blocks` | Similar to `size`, but in blocks of `PAGE_CACHE_SIZE`. Accepts `k`, `m`, or `g` suffixes, but not `%`. |
| `nr_inodes=inodes` | Sets the maximum number of inodes. Default is half the number of physical RAM pages or the number of lowmem RAM pages (whichever is smaller). Use `k`, `m`, or `g` suffixes, but `%` is not supported. Set to `0` to remove the limit. |
| `noswap` | Disables swap. Remounts must respect the original settings. By default, swap is enabled. |
| `mode=mode` | Sets the initial permissions of the root directory. |
| `gid=gid` | Sets the initial group ID of the root directory. |
| `uid=uid` | Sets the initial user ID of the root directory. |
| `huge=huge_option` | Sets the huge table memory allocation policy for all files (if `CONFIG_TRANSPARENT_HUGEPAGE` is enabled). Options: `never` (default), `always`, `within_size`, `advise`, `deny`, or `force`. |
| `mpol=mpol_option` | Sets NUMA memory allocation policy (if `CONFIG_NUMA` is enabled). Options: `default`, `prefer:node`, `bind:nodelist`, `interleave`, `interleave:nodelist`, or `local`. Example: `mpol=bind:0-3,5,7,9-15`. |

View file

@ -1,5 +1,7 @@
---
obj: concept
arch-wiki: https://wiki.archlinux.org/title/Mkinitcpio
rev: 2024-12-16
---
# mkinitcpio
@ -8,20 +10,11 @@ The initial ramdisk is in essence a very small environment (early userspace) whi
## Configuration
The primary configuration file for _mkinitcpio_ is `/etc/mkinitcpio.conf`. Additionally, preset definitions are provided by kernel packages in the `/etc/mkinitcpio.d` directory (e.g. `/etc/mkinitcpio.d/linux.preset`).
`MODULES`
Kernel modules to be loaded before any boot hooks are run.
`BINARIES`
Additional binaries to be included in the initramfs image.
`FILES`
Additional files to be included in the initramfs image.
`HOOKS`
Hooks are scripts that execute in the initial ramdisk.
`COMPRESSION`
Used to compress the initramfs image.
- `MODULES` : Kernel modules to be loaded before any boot hooks are run.
- `BINARIES` : Additional binaries to be included in the initramfs image.
- `FILES` : Additional files to be included in the initramfs image.
- `HOOKS` : Hooks are scripts that execute in the initial ramdisk.
- `COMPRESSION` : Used to compress the initramfs image.
### MODULES
The `MODULES` array is used to specify modules to load before anything else is done.
@ -61,3 +54,28 @@ The default `HOOKS` setting should be sufficient for most simple, single disk se
| **lvm2** | Adds the device mapper kernel module and the `lvm` tool to the image. |
| **fsck** | Adds the fsck binary and file system-specific helpers to allow running fsck against your root device (and `/usr` if separate) prior to mounting. If added after the **autodetect** hook, only the helper specific to your root file system will be added. Usage of this hook is **strongly** recommended, and it is required with a separate `/usr` partition. It is highly recommended that if you include this hook that you also include any necessary modules to ensure your keyboard will work in early userspace. |
| **filesystems** | This includes necessary file system modules into your image. This hook is **required** unless you specify your file system modules in `MODULES`. |
### UKI
A Unified Kernel Image (UKI) is a single executable file that can be directly booted by UEFI firmware or automatically sourced by boot-loaders.
In essence, a UKI combines all the necessary components for the operating system to start up, including:
- EFI stub loader
- Kernel command line
- Microcode updates
- Initramfs image (initial RAM file system)
- Kernel image itself
- Splash screen
To enable the UKI edit `/etc/mkinitcpio.d/linux.preset`:
```sh
default_uki="/boot/EFI/Linux/arch-linux.efi"
fallback_uki="/boot/EFI/Linux/arch-linux-fallback.efi"
```
Build the Unified Kernel Image:
```sh
mkinitcpio --allpresets
```

57
technology/linux/sbctl.md Normal file
View file

@ -0,0 +1,57 @@
---
obj: application
repo: https://github.com/Foxboron/sbctl
rev: 2024-12-16
---
# sbctl (Secure Boot Manager)
sbctl intends to be a user-friendly Secure Boot key manager capable of setting up Secure Boot, offering key management capabilities, and keeping track of files that need to be signed in the boot chain.
## Usage
Install the necessary packages:
```sh
pacman -S sbctl sbsigntools
```
Check that Secure Boot "Setup Mode" is "Enabled" in UEFI:
```sh
sbctl status
```
Create your own signing keys:
```sh
sbctl create-keys
```
Sign the systemd bootloader:
```sh
sbctl sign -s \
-o /usr/lib/systemd/boot/efi/systemd-bootx64.efi.signed \
/usr/lib/systemd/boot/efi/systemd-bootx64.efi
```
Enroll your custom keys:
```sh
sbctl enroll-keys
# Enroll and include Microsoft Keys
sbctl enroll-keys --microsoft
```
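> **Note**: On some hardware, option ROMs (e.g. for GPUs) are signed only by Microsoft's keys; enrolling custom keys without `--microsoft` on such systems can leave them unbootable, so including the Microsoft keys is the safer default.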
Sign EFI files:
```sh
sbctl sign -s /boot/EFI/Linux/arch-linux.efi
sbctl sign -s /boot/EFI/Linux/arch-linux-fallback.efi
sbctl sign -s /efi/EFI/systemd/systemd-bootx64.efi
sbctl sign -s /efi/EFI/Boot/bootx64.efi
```
Verify signature of EFI files:
```sh
sbctl verify
```
Re-sign everything:
```sh
sbctl sign-all
```
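sbctl keeps a database of the files it has signed. To review what is currently tracked (assuming a default sbctl setup):
```sh
sbctl list-files
```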


@@ -12,6 +12,7 @@ systemd is a suite of basic building blocks for a [Linux](../Linux.md) system. I
See also:
- [Systemd-Timers](Systemd-Timers.md)
- [systemd-boot](systemd-boot.md)
- [systemd-cryptenroll](systemd-cryptenroll.md)
## Using Units
Units commonly include, but are not limited to, services (_.service_), mount points (_.mount_), devices (_.device_) and sockets (_.socket_).


@@ -1,6 +1,7 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Systemd-boot
rev: 2024-12-17
---
# Systemd Boot
@@ -20,7 +21,8 @@ bootctl update
```
## Configuration
The loader configuration is stored in the file `_esp_/loader/loader.conf`.
Example:
```
default arch.conf
@@ -30,7 +32,7 @@ editor no
```
### Adding loaders
_systemd-boot_ will search for boot menu items in `_esp_/loader/entries/*.conf`.
Values:
- `title` : Name
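An illustrative entry file, `_esp_/loader/entries/arch.conf` (kernel paths and the root device are placeholders):
```
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=UUID=<root-uuid> rw
```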
@@ -58,3 +60,17 @@ Firmware Setup:
```shell
systemctl reboot --firmware-setup
```
## Keybindings
While the menu is shown, the following keys are active:
| Key | Description |
| ------------- | ----------------------------------------------------------------------------------- |
| `Up` / `Down` | Select menu entry |
| `Enter` | Boot the selected entry |
| `d`           | Select the default entry to boot (stored in a non-volatile EFI variable)             |
| `t` / `T`     | Adjust the timeout (stored in a non-volatile EFI variable)                           |
| `e`           | Edit the option line (kernel command line) for this bootup to pass to the EFI image  |
| `Q`           | Quit                                                                                  |
| `v`           | Show the systemd-boot and UEFI version                                               |
| `P`           | Print the current configuration to the console                                       |
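The `d` and `t` bindings persist their changes in EFI variables; the same settings can also be adjusted from a running system, as in this sketch (entry name and timeout are placeholders):
```shell
bootctl set-default arch.conf   # persist the default boot entry
bootctl set-timeout 5           # persist the menu timeout in seconds
```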


@@ -0,0 +1,130 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Systemd-cryptenroll
rev: 2024-12-16
---
# systemd-cryptenroll
systemd-cryptenroll allows enrolling smartcards, FIDO2 tokens and Trusted Platform Module security chips into LUKS devices, as well as regular passphrases. These devices are later unlocked by `systemd-cryptsetup@.service`, using the enrolled tokens.
## Usage
### **List keyslots**
systemd-cryptenroll can list the keyslots in a LUKS device, similar to `cryptsetup luksDump`, but in a more user-friendly format.
```sh
$ systemd-cryptenroll /dev/disk
SLOT TYPE
0 password
1 tpm2
```
### **Erasing keyslots**
```sh
systemd-cryptenroll /dev/disk --wipe-slot=SLOT
```
Where `SLOT` can be:
- A single keyslot index
- A type of keyslot, which will erase all keyslots of that type. Valid types are `empty`, `password`, `recovery`, `pkcs11`, `fido2`, `tpm2`
- A combination of all of the above, separated by commas
- The string `all`, which erases all keyslots on the device. This option can only be used when enrolling another device or passphrase at the same time.
The `--wipe-slot` operation can be used in combination with all enrollment options, which is useful to update existing device enrollments:
```sh
systemd-cryptenroll /dev/disk --wipe-slot=fido2 --fido2-device=auto
```
### **Enrolling passphrases**
#### Regular password
This is equivalent to `cryptsetup luksAddKey`.
```sh
systemd-cryptenroll /dev/disk --password
```
#### Recovery key
Recovery keys are mostly identical to passphrases, but are computer-generated instead of being chosen by a human, and thus have a guaranteed high entropy. The key uses a character set that is easy to type in, and may be scanned off screen via a QR code.
A recovery key is designed to be used as a fallback if the hardware tokens are unavailable, and can be used in place of regular passphrases whenever they are required.
```sh
systemd-cryptenroll /dev/disk --recovery-key
```
### Enrolling hardware devices
The `--<type>-device` options must point to a valid device path of their respective type. A list of available devices can be obtained by passing `list` to this option. Alternatively, if you only have a single device of the desired type connected, `auto` can be used to select it automatically.
#### PKCS#11 tokens or smartcards
The token or smartcard must contain an RSA key pair, which will be used to encrypt the generated key that will be used to unlock the volume.
```sh
systemd-cryptenroll /dev/disk --pkcs11-token-uri=device
```
#### FIDO2 tokens
Any FIDO2 token that supports the "hmac-secret" extension can be used with systemd-cryptenroll. The following example would enroll a FIDO2 token to an encrypted LUKS2 block device, requiring only user presence as authentication.
```sh
systemd-cryptenroll /dev/disk --fido2-device=device --fido2-with-client-pin=no
```
In addition, systemd-cryptenroll supports using the token's built-in user verification methods:
- `--fido2-with-user-presence` defines whether to verify the user presence (i.e. by tapping the token) before unlocking, defaults to `yes`
- `--fido2-with-user-verification` defines whether to require user verification before unlocking, defaults to `no`
By default, the cryptographic algorithm used when generating a FIDO2 credential is `es256`, which denotes the Elliptic Curve Digital Signature Algorithm (ECDSA) over NIST P-256 with SHA-256. If desired and supported by the FIDO2 token, a different cryptographic algorithm can be specified during enrollment.
Suppose a FIDO2 token has already been enrolled and the user wishes to enroll another. The following generates an `eddsa` credential, which denotes EdDSA over Curve25519 with SHA-512, and authenticates the device with a previously enrolled token instead of a password.
```sh
systemd-cryptenroll /dev/disk --fido2-device=device --fido2-credential-algorithm=eddsa --unlock-fido2-device=auto
```
#### Trusted Platform Module
systemd-cryptenroll has native support for enrolling LUKS keys in TPMs. It requires the following:
- `tpm2-tss` must be installed,
- A LUKS2 device (currently the default type used by cryptsetup),
- If you intend to use this method on your root partition, some tweaks need to be made to the initramfs.
To begin, run the following command to list your installed TPMs and the driver in use:
```sh
systemd-cryptenroll --tpm2-device=list
```
> **Tip**: If your computer has multiple TPMs installed, specify the one you wish to use with `--tpm2-device=/path/to/tpm2_device` in the following steps.
A key may be enrolled in both the TPM and the LUKS volume using only one command. The following example generates a new random key, adds it to the volume so it can be used to unlock it in addition to the existing keys, and binds this new key to PCR 7 (Secure Boot state):
```sh
systemd-cryptenroll --tpm2-device=auto /dev/sdX
```
where `/dev/sdX` is the full path to the encrypted LUKS volume. Use `--unlock-key-file=/path/to/keyfile` if the LUKS volume is unlocked by a keyfile instead of a passphrase.
> Note: It is possible to require a PIN to be entered in addition to the TPM state being correct. Simply add the option `--tpm2-with-pin=yes` to the command above and enter the PIN when prompted.
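By default the key is bound to PCR 7 only. Other PCRs can be selected explicitly with `--tpm2-pcrs`; a sketch, with an illustrative PCR choice:
```sh
# Bind the key to firmware code (PCR 0) and Secure Boot state (PCR 7)
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+7 /dev/sdX
```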
To check that the new key was enrolled, dump the LUKS configuration and look for a systemd-tpm2 token entry, as well as an additional entry in the Keyslots section:
```sh
cryptsetup luksDump /dev/sdX
```
To test that the key works, run the following command while the LUKS volume is closed:
```sh
systemd-cryptsetup attach mapping_name /dev/sdX none tpm2-device=auto
```
where `mapping_name` is your chosen name for the volume once opened.
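To have the volume unlocked automatically at boot, reference it in `/etc/crypttab` (or `/etc/crypttab.initramfs` for a root volume with the `sd-encrypt` hook on Arch); an illustrative line with a placeholder UUID:
```sh
# /etc/crypttab.initramfs (illustrative)
cryptroot  UUID=<luks-uuid>  none  tpm2-device=auto
```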
##### Modules
If your TPM requires a kernel module, edit `/etc/mkinitcpio.conf` and add the module used by your TPM to the `MODULES` line. For instance:
```sh
MODULES=(tpm_tis)
```