Compare commits


4 commits

| SHA1 | Message | Date |
| ---------- | ------------------------ | ------------------------- |
| 51859b6171 | add cadvisor | 2024-12-12 09:48:42 +01:00 |
| 8289890ccd | update prometheus | 2024-12-12 09:39:36 +01:00 |
| f715b43402 | add grafana applications | 2024-12-12 09:28:21 +01:00 |
| 9d67459479 | add node exporter | 2024-12-12 08:54:48 +01:00 |
35 changed files with 381 additions and 4497 deletions

View file

@@ -0,0 +1,20 @@
name: Validate Schema
on:
push:
branches:
- main
jobs:
validate:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Validation
uses: docker://git.hydrar.de/mdtools/mdtools:latest
with:
entrypoint: /bin/bash
args: scripts/validate_schema.sh

View file

@@ -1,9 +0,0 @@
when:
- event: push
branch: main
steps:
- name: "Validate Schema"
image: git.hydrar.de/mdtools/mdtools:latest
commands:
- /bin/bash scripts/validate_schema.sh

View file

@@ -1,6 +1,6 @@
---
obj: meta/collection
rev: 2024-12-10
rev: 2024-07-14
---
# Applications
@@ -38,7 +38,6 @@ rev: 2024-12-10
## Desktop
- [KDE Plasma](./desktops/KDE%20Plasma.md)
- [SDDM](./desktops/SDDM.md)
- [dwm](./desktops/dwm.md)
- [picom](./desktops/picom.md)
- [Hyprland](./desktops/hyprland.md)
@@ -119,6 +118,8 @@ rev: 2024-12-10
- [Wildcard](utilities/Wildcard.md)
- [Textpieces](utilities/Textpieces.md)
- [ImHex](utilities/ImHex.md)
- [Node Exporter](utilities/node-exporter.md)
- [cAdvisor](utilities/cAdvisor.md)
# Mobile
- [Aegis](./utilities/Aegis.md)
@@ -140,7 +141,6 @@ rev: 2024-12-10
- [AdGuard](./web/AdGuard.md)
- [Gitea](./web/Gitea.md)
- [Forgejo](./web/Forgejo.md)
- [Woodpecker CI](./web/WoodpeckerCI.md)
- [SearXNG](./web/Searxng.md)
- [Grocy](./web/Grocy.md)
- [Guacamole](./web/Guacamole.md)
@@ -167,6 +167,9 @@ rev: 2024-12-10
- [Caddy](./web/Caddy.md)
- [zigbee2MQTT](./web/zigbee2mqtt.md)
- [dawarich](./web/dawarich.md)
- [Grafana](./web/Grafana.md)
- [Prometheus](./web/Prometheus.md)
- [Loki](./web/loki.md)
# CLI
## Terminal
@@ -250,8 +253,6 @@ rev: 2024-12-10
- [mergerfs](../linux/filesystems/MergerFS.md)
- [sshfs](../linux/filesystems/SSHFS.md)
- [wine](../windows/Wine.md)
- [sbctl](../linux/sbctl.md)
- [systemd-cryptenroll](../linux/systemd/systemd-cryptenroll.md)
## Development
- [act](./development/act.md)
@@ -265,7 +266,6 @@ rev: 2024-12-10
- [Docker](../tools/Docker.md)
- [Podman](../tools/Podman.md)
- [serie](./cli/serie.md)
- [usql](./cli/usql.md)
## Media
- [yt-dlp](./media/yt-dlp.md)

View file

@@ -1,7 +1,6 @@
---
obj: application
website: https://rsync.samba.org
arch-wiki: https://wiki.archlinux.org/title/Rsync
website: https://rsync.samba.org/
repo: https://github.com/WayneD/rsync
---
@@ -45,3 +44,4 @@ Either `source` or `destination` can be a local folder or a remote path (`user@h
| --log-file=FILE | log what we're doing to the specified FILE |
| --partial | keep partially transferred files |
| -P | same as --partial --progress |

View file

@@ -3,18 +3,16 @@ obj: application
repo: https://github.com/tmux/tmux
arch-wiki: https://wiki.archlinux.org/title/tmux
wiki: https://en.wikipedia.org/wiki/Tmux
rev: 2024-12-16
rev: 2024-01-15
---
# tmux
tmux is a terminal multiplexer: it enables a number of terminals to be created, accessed, and controlled from a single screen. tmux may be detached from a screen and continue running in the background, then later reattached.
# Usage
**New tmux session:**
```shell
tmux
tmux new -s name
tmux new -s mysession -n mywindow
```
**List existing sessions:**
@@ -25,7 +23,6 @@ tmux ls
**Attach to a named session:**
```shell
tmux attach -t name
tmux a -t name
```
**Kill a session:**
@@ -34,30 +31,14 @@ tmux kill-session -t name
```
# Keybinds
- Show the time: `Ctrl-b + t`
## Sessions
- Rename current session: `Ctrl-b + $`
- Vertical Split: `Ctrl-b %`
- Horizontal Split: `Ctrl-b "`
- Select Pane: `Ctrl-b q [num]`
- Change Pane Size: `Ctrl-b Ctrl [Down/Up/Left/Right]`
- Switch sessions: `Ctrl-b s`
- Detach from a running session: `Ctrl-b + d`
- Sessions and windows overview: `Ctrl-b + w`
- Move to previous session: `Ctrl-b + (`
- Move to next session: `Ctrl-b + )`
- Switch sessions: `Ctrl-b + s`
## Windows
- Create a new window: `Ctrl-b + c`
- Rename current window: `Ctrl-b + ,`
- Go to previous window: `Ctrl-b + p`
- Go to next window: `Ctrl-b + n`
- Go to window: `Ctrl-b + [0-9]`
## Panes
- Vertical Split: `Ctrl-b + %`
- Horizontal Split: `Ctrl-b + "`
- Select Pane: `Ctrl-b + q + [num]`
- Change Pane Size: `Ctrl-b + Ctrl + [Down/Up/Left/Right]`
- Move current pane left: `Ctrl-b + {`
- Move current pane right: `Ctrl-b + }`
- Close current pane: `Ctrl-b + x`
- Switch to the next pane: `Ctrl-b + o`
- Convert pane into a window: `Ctrl-b + !`
- Create a new window inside session: `Ctrl-b c`
- Go to next window: `Ctrl-b n`
- Switch sessions and windows: `Ctrl-B w`
- Go to window: `Ctrl-b [0-9]`
- Kill a window: `Ctrl-b x`

View file

@@ -1,229 +0,0 @@
---
obj: application
repo: https://github.com/xo/usql
rev: 2024-12-10
---
# usql
usql is a universal command-line interface for PostgreSQL, MySQL, Oracle Database, SQLite3, Microsoft SQL Server, and many other databases including NoSQL and non-relational databases!
usql provides a simple way to work with SQL and NoSQL databases via a command-line inspired by PostgreSQL's psql. usql supports most of the core psql features, such as variables, backticks, backslash commands and has additional features that psql does not, such as multiple database support, copying between databases, syntax highlighting, context-based completion, and terminal graphics.
## Usage
```sh
usql [options]... [DSN]
```
DSN can be any database connection string like `sqlite:///path/to/my/file` or `postgres://user:pass@host:port/db`.
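For example, a minimal non-interactive invocation might look like this (the connection strings, file paths, and queries are placeholders):
```sh
# run a single query against a local SQLite file and print CSV output
usql sqlite:///tmp/example.db -C -c 'select 1 as ok'

# run a query against PostgreSQL and write the result to a file
usql postgres://user:pass@localhost:5432/mydb -o result.txt -c 'select now()'
```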
### Options
| Option | Description |
| ----------------------------------------- | -------------------------------------------------------------------------------------- |
| `-c, --command COMMAND` | run only single command (SQL or internal) and exit |
| `-f, --file FILE` | execute commands from file and exit |
| `-w, --no-password` | never prompt for password |
| `-X, --no-init` | do not execute initialization scripts (aliases: `--no-rc` `--no-psqlrc` `--no-usqlrc`) |
| `-o, --out FILE` | output file |
| `-W, --password` | force password prompt (should happen automatically) |
| `-1, --single-transaction` | execute as a single transaction (if non-interactive) |
| `-v, --set NAME=VALUE` | set variable NAME to VALUE (see \set command, aliases: --var --variable) |
| `-N, --cset NAME=DSN` | set named connection NAME to DSN (see \cset command) |
| `-P, --pset VAR=ARG` | set printing option VAR to ARG (see \pset command) |
| `-F, --field-separator FIELD-SEPARATOR` | field separator for unaligned and CSV output |
| `-R, --record-separator RECORD-SEPARATOR` | record separator for unaligned and CSV output (default \n) |
| `-T, --table-attr TABLE-ATTR` | set HTML table tag attributes (e.g., width, border) |
| `-A, --no-align` | unaligned table output mode |
| `-H, --html` | HTML table output mode |
| `-t, --tuples-only` | print rows only |
| `-x, --expanded` | turn on expanded table output |
| `-z, --field-separator-zero` | set field separator for unaligned and CSV output to zero byte |
| `-0, --record-separator-zero` | set record separator for unaligned and CSV output to zero byte |
| `-J, --json` | JSON output mode |
| `-C, --csv` | CSV output mode |
| `-G, --vertical` | vertical output mode |
| `-q, --quiet` | run quietly (no messages, only query output) |
| `--config string` | config file |
## Commands
| Command | Description |
| ---------------------------------- | ----------------------------------------------------------------------------- |
| **General:** | |
| `\q` | quit usql |
| `\quit` | alias for `\q` |
| `\drivers` | show database drivers available to usql |
| **Connection:** | |
| `\c DSN` | connect to database url |
| `\c DRIVER PARAMS...` | connect to database with driver and parameters |
| `\cset [NAME [DSN]]` | set named connection, or list all if no parameters |
| `\cset NAME DRIVER PARAMS...` | define named connection for database driver |
| `\Z` | close database connection |
| `\password [USERNAME]` | change the password for a user |
| `\conninfo` | display information about the current database connection |
| **Operating System:** | |
| `\cd [DIR]` | change the current working directory |
| `\getenv VARNAME ENVVAR` | fetch environment variable |
| `\setenv NAME [VALUE]` | set or unset environment variable |
| `\! [COMMAND]` | execute command in shell or start interactive shell |
| `\timing [on/off]` | toggle timing of commands |
| **Variables:** | |
| `\prompt [-TYPE] VAR [PROMPT]` | prompt user to set variable |
| `\set [NAME [VALUE]]` | set internal variable, or list all if no parameters |
| `\unset NAME` | unset (delete) internal variable |
| **Query Execute:** | |
| `\g [(OPTIONS)] [FILE] or ;` | execute query (and send results to file or pipe) |
| `\G [(OPTIONS)] [FILE]` | as \g, but forces vertical output mode |
| `\gx [(OPTIONS)] [FILE]` | as \g, but forces expanded output mode |
| `\gexec` | execute query and execute each value of the result |
| `\gset [PREFIX]` | execute query and store results in usql variables |
| **Query Buffer:** | |
| `\e [FILE] [LINE]` | edit the query buffer (or file) with external editor |
| `\p` | show the contents of the query buffer |
| `\raw` | show the raw (non-interpolated) contents of the query buffer |
| `\r` | reset (clear) the query buffer |
| **Input/Output:** | |
| `\copy SRC DST QUERY TABLE` | copy query from source url to table on destination url |
| `\copy SRC DST QUERY TABLE(A,...)` | copy query from source url to columns of table on destination url |
| `\echo [-n] [STRING]` | write string to standard output (-n for no newline) |
| `\qecho [-n] [STRING]` | write string to \o output stream (-n for no newline) |
| `\warn [-n] [STRING]` | write string to standard error (-n for no newline) |
| `\o [FILE]` | send all query results to file or pipe |
| **Informational:** | |
| `\d[S+] [NAME]` | list tables, views, and sequences or describe table, view, sequence, or index |
| `\da[S+] [PATTERN]` | list aggregates |
| `\df[S+] [PATTERN]` | list functions |
| `\di[S+] [PATTERN]` | list indexes |
| `\dm[S+] [PATTERN]` | list materialized views |
| `\dn[S+] [PATTERN]` | list schemas |
| `\dp[S] [PATTERN]` | list table, view, and sequence access privileges |
| `\ds[S+] [PATTERN]` | list sequences |
| `\dt[S+] [PATTERN]` | list tables |
| `\dv[S+] [PATTERN]` | list views |
| `\l[+]` | list databases |
| `\ss[+] [TABLE/QUERY] [k]` | show stats for a table or a query |
| **Formatting:** | |
| `\pset [NAME [VALUE]]` | Set table output option |
| `\a` | Toggle between unaligned and aligned output mode |
| `\C [STRING]` | Set table title, or unset if none |
| `\f [STRING]` | Show or set field separator for unaligned query output |
| `\H` | Toggle HTML output mode |
| `\T [STRING]` | Set HTML <table> tag attributes, or unset if none |
| `\t [on/off]` | Show only rows |
| `\x [on/off/auto]` | Toggle expanded output |
| **Transaction:** | |
| `\begin` | Begin a transaction |
| `\begin [-read-only] [ISOLATION]` | Begin a transaction with isolation level |
| `\commit` | Commit current transaction |
| `\rollback` | Rollback (abort) current transaction |
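For instance, a hedged sketch of the `\copy` command from the table above, copying query results from one database into a table on another (both DSNs and the table are placeholders):
```
pg:postgres@=> \copy postgres://user:pass@localhost/mydb sqlite:///tmp/backup.db 'select id, name from users' users
```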
## Configuration
During its initialization phase, usql reads a standard YAML configuration file `config.yaml`. On Windows this is `%AppData%/usql/config.yaml`, on macOS this is `$HOME/Library/Application Support/usql/config.yaml`, and on Linux and other Unix systems this is normally `$HOME/.config/usql/config.yaml`.
```yml
# named connections
# name can be used instead of database url
connections:
my_couchbase_conn: couchbase://Administrator:P4ssw0rd@localhost
my_clickhouse_conn: clickhouse://clickhouse:P4ssw0rd@localhost
css: cassandra://cassandra:cassandra@localhost
fsl: flightsql://flight_username:P4ssw0rd@localhost
gdr:
protocol: godror
username: system
password: P4ssw0rd
hostname: localhost
port: 1521
database: free
ign: ignite://ignite:ignite@localhost
mss: sqlserver://sa:Adm1nP@ssw0rd@localhost
mym: mysql://root:P4ssw0rd@localhost
myz: mymysql://root:P4ssw0rd@localhost
ora: oracle://system:P4ssw0rd@localhost/free
ore: oracle://system:P4ssw0rd@localhost:1522/db1
pgs: postgres://postgres:P4ssw0rd@localhost
pgx: pgx://postgres:P4ssw0rd@localhost
vrt:
proto: vertica
user: vertica
pass: vertica
host: localhost
sll:
file: /path/to/mydb.sqlite3
mdc: modernsqlite:test.db
dkd: test.duckdb
zzz: ["databricks", "token:dapi*****@adb-*************.azuredatabricks.net:443/sql/protocolv1/o/*********/*******"]
zz2:
proto: mysql
user: "my username"
pass: "my password!"
host: localhost
opts:
opt1: "😀"
# init script
init: |
\echo welcome to the jungle `date`
\set SYNTAX_HL_STYLE paraiso-dark
\set PROMPT1 '\033[32m%S%M%/%R%#\033[0m '
\set bar test
\set foo test
-- \set SHOW_HOST_INFORMATION false
-- \set SYNTAX_HL false
\set 型示師 '本門台初埼本門台初埼'
# charts path
charts_path: charts
# defined queries
queries:
q1:
```
### Time Formatting
Some databases support time/date columns with configurable formatting. By default, usql formats time/date columns as RFC3339Nano; the format can be set using `\pset time FORMAT`:
```
$ usql pg://
Connected with driver postgres (PostgreSQL 13.2 (Debian 13.2-1.pgdg100+1))
Type "help" for help.
pg:postgres@=> \pset
time RFC3339Nano
pg:postgres@=> select now();
now
-----------------------------
2021-05-01T22:21:44.710385Z
(1 row)
pg:postgres@=> \pset time Kitchen
Time display is "Kitchen" ("3:04PM").
pg:postgres@=> select now();
now
---------
10:22PM
(1 row)
```
usql's time format can be any Go-supported time format string or any standard Go time constant name, such as Kitchen above. See below for an overview of the available time constants.
#### Time Constants
The following are the time constant names available in `usql`, corresponding time format value, and example display output:
| Constant | Format | Display |
| ----------- | ------------------------------------: | ----------------------------------: |
| ANSIC | `Mon Jan _2 15:04:05 2006` | `Wed Aug 3 20:12:48 2022` |
| UnixDate | `Mon Jan _2 15:04:05 MST 2006` | `Wed Aug 3 20:12:48 UTC 2022` |
| RubyDate | `Mon Jan 02 15:04:05 -0700 2006` | `Wed Aug 03 20:12:48 +0000 2022` |
| RFC822 | `02 Jan 06 15:04 MST` | `03 Aug 22 20:12 UTC` |
| RFC822Z | `02 Jan 06 15:04 -0700` | `03 Aug 22 20:12 +0000` |
| RFC850 | `Monday, 02-Jan-06 15:04:05 MST` | `Wednesday, 03-Aug-22 20:12:48 UTC` |
| RFC1123 | `Mon, 02 Jan 2006 15:04:05 MST` | `Wed, 03 Aug 2022 20:12:48 UTC` |
| RFC1123Z | `Mon, 02 Jan 2006 15:04:05 -0700` | `Wed, 03 Aug 2022 20:12:48 +0000` |
| RFC3339 | `2006-01-02T15:04:05Z07:00` | `2022-08-03T20:12:48Z` |
| RFC3339Nano | `2006-01-02T15:04:05.999999999Z07:00` | `2022-08-03T20:12:48.693257Z` |
| Kitchen | `3:04PM` | `8:12PM` |
| Stamp | `Jan _2 15:04:05` | `Aug 3 20:12:48` |
| StampMilli | `Jan _2 15:04:05.000` | `Aug 3 20:12:48.693` |
| StampMicro | `Jan _2 15:04:05.000000` | `Aug 3 20:12:48.693257` |
| StampNano | `Jan _2 15:04:05.000000000` | `Aug 3 20:12:48.693257000` |

View file

@@ -1,80 +0,0 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/SDDM
wiki: https://en.wikipedia.org/wiki/Simple_Desktop_Display_Manager
repo: https://github.com/sddm/sddm
rev: 2024-12-18
---
# SDDM
The Simple Desktop Display Manager (SDDM) is a display manager. It is the recommended display manager for the KDE Plasma and LXQt desktop environments.
## Configuration
The default configuration file for SDDM can be found at `/usr/lib/sddm/sddm.conf.d/default.conf`. For any changes, create configuration file(s) in `/etc/sddm.conf.d/`.
Everything should work out of the box, since Arch Linux uses systemd and SDDM defaults to using `systemd-logind` for session management.
### Autologin
SDDM supports automatic login through its configuration file, for example (`/etc/sddm.conf.d/autologin.conf`):
```ini
[Autologin]
User=john
Session=plasma
# Optionally always relogin the user on logout
Relogin=true
```
This configuration causes a KDE Plasma session to be started for user `john` when the system is booted. Available session types can be found in `/usr/share/xsessions/` for X and in `/usr/share/wayland-sessions/` for Wayland.
To autologin into KDE Plasma while simultaneously locking the session (e.g. to allow autostarted apps to warm up), create a systemd user unit drop-in that passes `--lockscreen` to `plasma-ksmserver.service` (`~/.config/systemd/user/plasma-ksmserver.service.d/override.conf`):
```ini
[Service]
ExecStart=
ExecStart=/usr/bin/ksmserver --lockscreen
```
### Theme settings
Theme settings can be changed in the `[Theme]` section. If you use Plasma's system settings, themes may show previews.
Set to `breeze` for the default Plasma theme.
#### Current theme
Set the current theme through the Current value, e.g. `Current=archlinux-simplyblack`.
#### Editing themes
The default SDDM theme directory is `/usr/share/sddm/themes/`. You can add your custom made themes to that directory under a separate subdirectory. Note that SDDM requires these subdirectory names to be the same as the theme names. Study the files installed to modify or create your own theme.
#### Customizing a theme
To override settings in the `theme.conf` configuration file, create a custom `theme.conf.user` file in the same directory. For example, to change the theme's background (`/usr/share/sddm/themes/name/theme.conf.user`):
```ini
[General]
background=/path/to/background.png
```
#### Testing (previewing) a theme
You can preview an SDDM theme if needed. This is especially helpful if you are not sure how a theme will look when selected, or if you have just edited a theme and want to see the result without logging out. You can run something like this:
```sh
sddm-greeter-qt6 --test-mode --theme /usr/share/sddm/themes/breeze
```
This should open a new window for every monitor you have connected and show a preview of the theme.
#### Mouse cursor
To set the mouse cursor theme, set `CursorTheme` to your preferred cursor theme.
Valid Plasma mouse cursor theme names are `breeze_cursors`, `Breeze_Snow` and `breeze-dark`.
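A combined sketch of the theme settings above as a drop-in, e.g. `/etc/sddm.conf.d/theme.conf` (the theme and cursor names are examples):
```ini
[Theme]
# Theme used by the greeter
Current=breeze
# Mouse cursor theme used on the login screen
CursorTheme=breeze_cursors
```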
### Keyboard Layout
To set the keyboard layout with SDDM, edit `/usr/share/sddm/scripts/Xsetup`:
```
#!/bin/sh
# Xsetup - run as root before the login dialog appears
setxkbmap de,us
```

View file

@@ -1,7 +1,5 @@
---
obj: application
repo: https://git.launchpad.net/ufw/
arch-wiki: https://wiki.archlinux.org/title/Uncomplicated_Firewall
---
# ufw
@@ -19,134 +17,19 @@ The next line is only needed _once_ the first time you install the package:
ufw enable
```
**See status:**
See status:
```shell
ufw status
```
**Enable/Disable:**
Enable/Disable
```shell
ufw enable
ufw disable
```
**Allow/Deny:**
Allow/Deny ports
```shell
ufw allow <app|port>
ufw deny <app|port>
ufw allow from <CIDR>
ufw deny from <CIDR>
```
## Forward policy
Users needing to run a VPN such as OpenVPN or WireGuard can adjust the `DEFAULT_FORWARD_POLICY` variable in `/etc/default/ufw` from a value of `DROP` to `ACCEPT` to forward all packets regardless of the settings of the user interface. To forward for a specific interface like `wg0`, users can add the following lines to the filter block:
```sh
# /etc/ufw/before.rules
-A ufw-before-forward -i wg0 -j ACCEPT
-A ufw-before-forward -o wg0 -j ACCEPT
```
You may also need to uncomment
```sh
# /etc/ufw/sysctl.conf
net/ipv4/ip_forward=1
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1
```
## Adding other applications
The package comes with some defaults based on the default ports of many common daemons and programs. Inspect the options by looking in the `/etc/ufw/applications.d` directory or by listing them in the program itself:
```sh
ufw app list
```
If you run any of these applications on a non-standard port, it is recommended to simply create `/etc/ufw/applications.d/custom` containing the needed data, using the defaults as a guide.
> **Warning**: If you modify any of the package-provided rule sets, they will be overwritten the first time the ufw package is updated. This is why custom app definitions need to reside in a separate, non-packaged file as recommended above!
For example, Deluge with custom TCP ports ranging from 20202 to 20205:
```ini
[Deluge-my]
title=Deluge
description=Deluge BitTorrent client
ports=20202:20205/tcp
```
Should you need to define both TCP and UDP ports for the same application, simply separate them with a pipe. This app opens TCP ports 10000-10002 and UDP port 10003:
```ini
ports=10000:10002/tcp|10003/udp
```
One can also use a comma to define ports if a range is not desired. This example opens TCP ports 10000-10002 (inclusive) and UDP ports 10003 and 10009:
```ini
ports=10000:10002/tcp|10003,10009/udp
```
## Deleting applications
Drawing on the Deluge/Deluge-my example above, the following will remove the standard Deluge rules and replace them with the Deluge-my rules from the above example:
```sh
ufw delete allow Deluge
ufw allow Deluge-my
```
## Black listing IP addresses
It might be desirable to add IP addresses to a blacklist. This is easily achieved by editing `/etc/ufw/before.rules` and inserting an `iptables` DROP line at the bottom of the file, right above the "COMMIT" line.
```sh
# /etc/ufw/before.rules
...
## blacklist section
# block just 199.115.117.99
-A ufw-before-input -s 199.115.117.99 -j DROP
# block 184.105.*.*
-A ufw-before-input -s 184.105.0.0/16 -j DROP
# don't delete the 'COMMIT' line or these rules won't be processed
COMMIT
```
## Rate limiting with ufw
ufw has the ability to deny connections from an IP address that has attempted to initiate 6 or more connections in the last 30 seconds. Users should consider using this option for services such as SSH.
Using the above basic configuration, to enable rate limiting we would simply replace the allow parameter with the limit parameter. The new rule will then replace the previous.
```sh
ufw limit SSH
```
## Disable remote ping
Change `ACCEPT` to `DROP` in the following lines:
```sh
# /etc/ufw/before.rules
# ok icmp codes
...
-A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT
```
If you use IPv6, related rules are in `/etc/ufw/before6.rules`.
## Disable UFW logging
Disabling logging may be useful to stop UFW filling up the kernel (dmesg) and message logs:
```sh
ufw logging off
```
## UFW and Docker
Docker in standard mode writes its own iptables rules and ignores ufw ones, which could lead to security issues. A solution can be found at https://github.com/chaifeng/ufw-docker.
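As a hedged sketch, assuming the modified `after.rules` from that project is installed, published container ports are then opened with `ufw route` rules rather than plain `allow` rules:
```sh
# allow forwarded traffic to a container that publishes port 80/tcp
ufw route allow proto tcp from any to any port 80
```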
## GUI frontends
If you are using KDE Plasma, you can just go to `Wi-Fi & Networking > Firewall` to access and adjust firewall configurations, provided `plasma-firewall` is installed.

View file

@@ -1,18 +1,17 @@
---
arch-wiki: https://wiki.archlinux.org/title/PKGBUILD
obj: concept
rev: 2024-12-19
---
# PKGBUILD
A `PKGBUILD` is a shell script containing the build information required by [Arch Linux](../../../linux/Arch%20Linux.md) packages. [Arch Wiki](https://wiki.archlinux.org/title/PKGBUILD)
Packages in [Arch Linux](../../../linux/Arch%20Linux.md) are built using the [makepkg](makepkg.md) utility. When [makepkg](makepkg.md) is run, it searches for a `PKGBUILD` file in the current directory and follows the instructions therein to either compile or otherwise acquire the files to build a package archive (`pkgname.pkg.tar.zst`). The resulting package contains binary files and installation instructions, readily installable with [pacman](Pacman.md).
Packages in [Arch Linux](../../../linux/Arch%20Linux.md) are built using the [makepkg](makepkg.md) utility. When [makepkg](makepkg.md) is run, it searches for a PKGBUILD file in the current directory and follows the instructions therein to either compile or otherwise acquire the files to build a package archive (pkgname.pkg.tar.zst). The resulting package contains binary files and installation instructions, readily installable with [pacman](Pacman.md).
Mandatory variables are `pkgname`, `pkgver`, `pkgrel`, and `arch`. `license` is not strictly necessary to build a package, but is recommended for any `PKGBUILD` shared with others, as [makepkg](makepkg.md) will produce a warning if not present.
Mandatory variables are `pkgname`, `pkgver`, `pkgrel`, and `arch`. `license` is not strictly necessary to build a package, but is recommended for any PKGBUILD shared with others, as [makepkg](makepkg.md) will produce a warning if not present.
## Example
# Example
PKGBUILD:
```sh
# Maintainer: User <mail>
@@ -49,187 +48,4 @@ package() {
cd "$pkgname"
install -Dm755 ./app "$pkgdir/usr/bin/app"
}
```
## Directives
The following is a list of standard options and directives available for use in a `PKGBUILD`. These are all understood and interpreted by `makepkg`, and most of them will be directly transferred to the built package.
If you need to create any custom variables for use in your build process, it is recommended to prefix their name with an `_` (underscore). This will prevent any possible name clashes with internal `makepkg` variables. For example, to store the base kernel version in a variable, use something similar to `$_basekernver`.
### Name and Version
#### `pkgname`
Either the name of the package or an array of names for split packages.
Valid characters for members of this array are alphanumerics, and any of the following characters: `@ . _ + -`. Additionally, names are not allowed to start with hyphens or dots.
#### `pkgver`
The version of the software as released from the author (e.g., `2.7.1`). The variable is not allowed to contain colons, forward slashes, hyphens or whitespace.
The pkgver variable can be automatically updated by providing a `pkgver()` function in the `PKGBUILD` that outputs the new package version. This is run after downloading and extracting the sources and running the `prepare()` function (if present), so it can use those files in determining the new `pkgver`. This is most useful when used with sources from version control systems.
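For example, a common sketch for a git-based source (the exact `sed` transformation is an assumption and should be adapted to the upstream tag scheme):
```sh
pkgver() {
  cd "$pkgname"
  # e.g. turns "v1.2.3-4-g1a2b3c4" into "1.2.3.r4.g1a2b3c4"
  git describe --long --tags | sed 's/^v//;s/\([^-]*-g\)/r\1/;s/-/./g'
}
```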
#### `pkgrel`
This is the release number specific to the distribution. This allows package maintainers to make updates to the package's configure flags, for example. This is typically set to `1` for each new upstream software release and incremented for intermediate `PKGBUILD` updates. The variable is a positive integer, with an optional subrelease level specified by adding another positive integer separated by a period (i.e. in the form `x.y`).
#### `epoch`
Used to force the package to be seen as newer than any previous versions with a lower epoch, even if the version number would normally not trigger such an upgrade. This value is required to be a positive integer; the default value if left unspecified is 0. This is useful when the version numbering scheme of a package changes (or is alphanumeric), breaking normal version comparison logic.
### Generic
#### `pkgdesc`
This should be a brief description of the package and its functionality. Try to keep the description to one line of text and not to use the package's name.
#### `url`
This field contains a URL that is associated with the software being packaged. This is typically the project's website.
#### `license` (array)
This field specifies the license(s) that apply to the package. If multiple licenses are applicable, list all of them: `license=('GPL' 'FDL')`.
#### `arch` (array)
Defines on which architectures the given package is available (e.g., `arch=('i686' 'x86_64')`). Packages that contain no architecture specific files should use `arch=('any')`. Valid characters for members of this array are alphanumerics and `_`.
#### `groups` (array)
An array of symbolic names that represent groups of packages, allowing you to install multiple packages by requesting a single target. For example, one could install all KDE packages by installing the kde group.
### Dependencies
#### `depends` (array)
An array of packages this package depends on to run. Entries in this list should be surrounded with single quotes and contain at least the package name. Entries can also include a version requirement of the form `name<>version`, where `<>` is one of five comparisons: `>=` (greater than or equal to), `<=` (less than or equal to), `=` (equal to), `>` (greater than), or `<` (less than).
If the dependency name appears to be a library (ends with `.so`), `makepkg` will try to find a binary that depends on the library in the built package and append the version needed by the binary. Appending the version yourself disables automatic detection.
Additional architecture-specific depends can be added by appending an underscore and the architecture name e.g., `depends_x86_64=()`.
#### `makedepends` (array)
An array of packages this package depends on to build but are not needed at runtime. Packages in this list follow the same format as `depends`.
Additional architecture-specific `makedepends` can be added by appending an underscore and the architecture name e.g., `makedepends_x86_64=()`.
#### `checkdepends` (array)
An array of packages this package depends on to run its test suite but are not needed at runtime. Packages in this list follow the same format as depends. These dependencies are only considered when the `check()` function is present and is to be run by `makepkg`.
Additional architecture-specific checkdepends can be added by appending an underscore and the architecture name e.g., `checkdepends_x86_64=()`
#### `optdepends` (array)
An array of packages (and accompanying reasons) that are not essential for base functionality, but may be necessary to make full use of the contents of this package. optdepends are currently for informational purposes only and are not utilized by pacman during dependency resolution. Packages in this list follow the same format as depends, with an optional description appended. The format for specifying optdepends descriptions is:
```shell
optdepends=('python: for library bindings')
```
Additional architecture-specific optdepends can be added by appending an underscore and the architecture name e.g., `optdepends_x86_64=()`.
### Package Relations
#### `provides` (array)
An array of “virtual provisions” this package provides. This allows a package to provide dependencies other than its own package name. For example, the `dcron` package can provide `cron`, which allows packages to depend on `cron` rather than `dcron` OR `fcron`.
Versioned provisions are also possible, in the `name=version` format. For example, `dcron` can provide `cron=2.0` to satisfy the `cron>=2.0` dependency of other packages. Provisions involving the `>` and `<` operators are invalid as only specific versions of a package may be provided.
If the provision name appears to be a library (ends with `.so`), makepkg will try to find the library in the built package and append the correct version. Appending the version yourself disables automatic detection.
Additional architecture-specific provides can be added by appending an underscore and the architecture name e.g., `provides_x86_64=()`.
#### `conflicts` (array)
An array of packages that will conflict with this package (i.e. they cannot both be installed at the same time). This directive follows the same format as `depends`. Versioned conflicts are supported using the operators as described in `depends`.
Additional architecture-specific conflicts can be added by appending an underscore and the architecture name e.g., `conflicts_x86_64=()`.
#### `replaces` (array)
An array of packages this package should replace. This can be used to handle renamed/combined packages. For example, if the `j2re` package is renamed to `jre`, this directive allows future upgrades to continue as expected even though the package has moved. Versioned replaces are supported using the operators as described in `depends`.
Sysupgrade is currently the only pacman operation that utilizes this field. A normal sync or upgrade will not use its value.
Additional architecture-specific replaces can be added by appending an underscore and the architecture name e.g., `replaces_x86_64=()`.
### Other
#### `backup` (array)
An array of file names, without preceding slashes, that should be backed up if the package is removed or upgraded. This is commonly used for packages placing configuration files in `/etc`.
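For example (hypothetical path):
```sh
backup=('etc/myapp/myapp.conf')
```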
#### `options` (array)
This array allows you to override some of makepkg's default behavior when building packages. To set an option, just include the option name in the `options` array. To reverse the default behavior, place an `!` at the front of the option. Only specify the options you specifically want to override; the rest will be taken from `makepkg.conf`.
| Option | Description |
| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `strip` | Strip symbols from binaries and libraries. If you frequently use a debugger on programs or libraries, it may be helpful to disable this option. |
| `docs` | Save doc directories. If you wish to delete doc directories, specify `!docs` in the array. |
| `libtool` | Leave libtool (`.la`) files in packages. Specify `!libtool` to remove them. |
| `staticlibs` | Leave static library (`.a`) files in packages. Specify `!staticlibs` to remove them (if they have a shared counterpart). |
| `emptydirs` | Leave empty directories in packages. |
| `zipman` | Compress man and info pages with gzip. |
| `ccache` | Allow the use of ccache during `build()`. More useful in its negative form `!ccache` with select packages that have problems building with ccache. |
| `distcc` | Allow the use of distcc during `build()`. More useful in its negative form `!distcc` with select packages that have problems building with distcc. |
| `buildflags` | Allow the use of user-specific buildflags (`CPPFLAGS`, `CFLAGS`, `CXXFLAGS`, `LDFLAGS`) during `build()` as specified in `makepkg.conf`. More useful in its negative form `!buildflags` with select packages that have problems building with custom buildflags. |
| `makeflags` | Allow the use of user-specific makeflags during `build()` as specified in `makepkg.conf`. More useful in its negative form `!makeflags` with select packages that have problems building with custom makeflags such as `-j2`. |
| `debug` | Add the user-specified debug flags (`DEBUG_CFLAGS`, `DEBUG_CXXFLAGS`) to their counterpart buildflags as specified in `makepkg.conf`. When used in combination with the `strip` option, a separate package containing the debug symbols is created. |
| `lto` | Enable building packages using link time optimization. Adds `-flto` to both `CFLAGS` and `CXXFLAGS`. |
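For example (an arbitrary combination of the options above):
```sh
options=('!strip' '!docs' 'staticlibs')
```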
#### `install`
Specifies a special install script that is to be included in the package. This file should reside in the same directory as the `PKGBUILD` and will be copied into the package by `makepkg`. It does not need to be included in the source array (e.g., `install=$pkgname.install`).
Pacman has the ability to store and execute a package-specific script when it installs, removes, or upgrades a package. This allows a package to configure itself after installation and perform an opposite action upon removal.
The exact time the script is run varies with each operation, and should be self-explanatory. Note that during an upgrade operation, none of the install or remove functions will be called.
Scripts are passed either one or two “full version strings”, where a full version string is either `pkgver-pkgrel` or `epoch:pkgver-pkgrel`, if `epoch` is non-zero.
- `pre_install`: Run right before files are extracted. One argument is passed: new package full version string.
- `post_install`: Run right after files are extracted. One argument is passed: new package full version string.
- `pre_upgrade`: Run right before files are extracted. Two arguments are passed in this order: new package full version string, old package full version string.
- `post_upgrade`: Run after files are extracted. Two arguments are passed in this order: new package full version string, old package full version string.
- `pre_remove`: Run right before files are removed. One argument is passed: old package full version string.
- `post_remove`: Run right after files are removed. One argument is passed: old package full version string.
To use this feature, create a file such as `pkgname.install` and put it in the same directory as the `PKGBUILD` script. Then use the install directive: `install=pkgname.install`
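A minimal sketch of such a script (the service name, paths, and messages are placeholders):
```sh
# pkgname.install
post_install() {
  # $1 = new package full version string
  echo "Enable the service with: systemctl enable myapp.service"
}

post_upgrade() {
  # $1 = new version, $2 = old version
  post_install "$1"
}

post_remove() {
  # $1 = old package full version string
  echo "Configuration in /etc/myapp was kept; remove it manually if unwanted."
}
```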
#### `changelog`
Specifies a changelog file that is to be included in the package. The changelog file should end in a single newline. This file should reside in the same directory as the `PKGBUILD` and will be copied into the package by `makepkg`. It does not need to be included in the source array (e.g., `changelog=$pkgname.changelog`).
### Sources
#### `source` (array)
An array of source files required to build the package. Source files must either reside in the same directory as the `PKGBUILD`, or be a fully-qualified URL that `makepkg` can use to download the file. To simplify the maintenance of `PKGBUILDs`, use the `$pkgname` and `$pkgver` variables when specifying the download location, if possible. Compressed files will be extracted automatically unless found in the `noextract` array described below.
Additional architecture-specific sources can be added by appending an underscore and the architecture name e.g., `source_x86_64=()`. There must be a corresponding integrity array with checksums, e.g. `cksums_x86_64=()`.
It is also possible to change the name of the downloaded file, which is helpful with weird URLs and for handling multiple source files with the same name. The syntax is: `source=('filename::url')`.
Files in the source array with extensions `.sig`, `.sign`, or `.asc` are recognized by makepkg as PGP signatures and will be automatically used to verify the integrity of the corresponding source file.
#### `noextract` (array)
An array of file names corresponding to those from the source array. Files listed here will not be extracted with the rest of the source files. This is useful for packages that use compressed data directly.
#### `validpgpkeys` (array)
An array of PGP fingerprints. If this array is non-empty, `makepkg` will only accept signatures from the keys listed here and will ignore the trust values from the keyring. If the source file was signed with a subkey, `makepkg` will still use the primary key for comparison.
Only full fingerprints are accepted. They must be uppercase and must not contain whitespace characters.
### Integrity
#### `cksums` (array)
This array contains CRC checksums for every source file specified in the source array (in the same order). `makepkg` will use this to verify source file integrity during subsequent builds. If `SKIP` is put in the array in place of a normal hash, the integrity check for that source file will be skipped. To easily generate cksums, run `makepkg -g >> PKGBUILD`. If desired, move the cksums line to an appropriate location. Note that checksums generated by `makepkg -g` should be verified using checksum values provided by the software developer.
#### `md5sums`, `sha1sums`, `sha224sums`, `sha256sums`, `sha384sums`, `sha512sums`, `b2sums` (arrays)
Alternative integrity checks that `makepkg` supports; these all behave similarly to the cksums option described above. To enable use and generation of these checksums, be sure to set up the `INTEGRITY_CHECK` option in `makepkg.conf`.
## Packaging Functions
In addition to the above directives, `PKGBUILDs` require a set of functions that provide instructions to build and install the package. As a minimum, the `PKGBUILD` must contain a `package()` function which installs all the packages files into the packaging directory, with optional `prepare()`, `build()`, and `check()` functions being used to create those files from source.
This is directly sourced and executed by `makepkg`, so anything that Bash or the system has available is available for use here. Be sure any exotic commands used are covered by the `makedepends` array.
If you create any variables of your own in any of these functions, it is recommended to use the Bash `local` keyword to scope the variable to inside the function.
### `package()` Function
The `package()` function is used to install files into the directory that will become the root directory of the built package and is run after all the optional functions listed below. The packaging stage is run using `fakeroot` to ensure correct file permissions in the resulting package. All other functions will be run as the user calling `makepkg`. This function is run inside `$srcdir`.
### `verify()` Function
An optional `verify()` function can be specified to implement arbitrary source authentication. The function should return a non-zero exit code when verification fails. This function is run before sources are extracted. This function is run inside `$startdir`.
### `prepare()` Function
An optional `prepare()` function can be specified in which operations to prepare the sources for building, such as patching, are performed. This function is run after the source extraction and before the `build()` function. The `prepare()` function is skipped when source extraction is skipped. This function is run inside `$srcdir`.
### `build()` Function
The optional `build()` function is used to compile and/or adjust the source files in preparation to be installed by the `package()` function. This function is run inside `$srcdir`.
### `check()` Function
An optional `check()` function can be specified in which a package's test suite may be run. This function is run between the `build()` and `package()` functions. Be sure any exotic commands used are covered by the `checkdepends` array. This function is run inside `$srcdir`.

View file

@@ -1,9 +1,6 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Pacman
rev: 2024-12-19
---
# Pacman
Pacman is the default [Arch Linux](../../../linux/Arch%20Linux.md) package manager.
@@ -59,363 +56,6 @@ pacman -Q
```
Empty the entire pacman cache:
```shell
pacman -Scc
```
Read changelog of package:
```shell
pacman -Qc pkgname
```
### File Conflicts
When pacman removes a package that has a configuration file, it normally creates a backup copy of that configuration file and appends `.pacsave` to the name of the file. Likewise, when pacman upgrades a package which includes a new configuration file created by the maintainer differing from the currently installed file, it saves a `.pacnew` file with the new configuration. pacman provides notice when these files are written.
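One way to locate such leftover files afterwards (a hedged sketch using `find`):
```shell
find /etc -name '*.pacnew' -o -name '*.pacsave'
```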
## Configuration
Pacman, using libalpm, will attempt to read `pacman.conf` each time it is invoked. This configuration file is divided into sections or repositories. Each section defines a package repository that pacman can use when searching for packages in `--sync` mode. The exception to this is the `[options]` section, which defines global options.
```ini
# /etc/pacman.conf
[options]
# Set the default root directory for pacman to install to.
# This option is used if you want to install a package on a temporary mounted partition which is "owned" by another system, or for a chroot install.
# NOTE: If database path or log file are not specified on either the command line or in pacman.conf(5), their default location will be inside this root path.
RootDir = /path/to/root/dir
# Overrides the default location of the toplevel database directory.
# The default is /var/lib/pacman/.
# Most users will not need to set this option.
# NOTE: if specified, this is an absolute path and the root path is not automatically prepended.
DBPath = /path/to/db/dir
# Overrides the default location of the package cache directory.
# The default is /var/cache/pacman/pkg/.
# Multiple cache directories can be specified, and they are tried in the order they are listed in the config file.
# If a file is not found in any cache directory, it will be downloaded to the first cache directory with write access.
# NOTE: this is an absolute path, the root path is not automatically prepended.
CacheDir = /path/to/cache/dir
# Add directories to search for alpm hooks in addition to the system hook directory (/usr/share/libalpm/hooks/).
# The default is /etc/pacman.d/hooks.
# Multiple directories can be specified with hooks in later directories taking precedence over hooks in earlier directories.
# NOTE: this is an absolute path, the root path is not automatically prepended. For more information on the alpm hooks, see alpm-hooks(5).
HookDir = /path/to/hook/dir
# Overrides the default location of the directory containing configuration files for GnuPG.
# The default is /etc/pacman.d/gnupg/.
# This directory should contain two files: pubring.gpg and trustdb.gpg.
# pubring.gpg holds the public keys of all packagers. trustdb.gpg contains a so-called trust database, which specifies that the keys are authentic and trusted.
# NOTE: this is an absolute path, the root path is not automatically prepended.
GPGDir = /path/to/gpg/dir
# Overrides the default location of the pacman log file.
# The default is /var/log/pacman.log.
# This is an absolute path and the root directory is not prepended.
LogFile = /path/to/log/file
# If a user tries to --remove a package that's listed in HoldPkg, pacman will ask for confirmation before proceeding. Shell-style glob patterns are allowed.
HoldPkg = package ...
# Instructs pacman to ignore any upgrades for this package when performing a --sysupgrade. Shell-style glob patterns are allowed.
IgnorePkg = package ...
# Instructs pacman to ignore any upgrades for all packages in this group when performing a --sysupgrade. Shell-style glob patterns are allowed.
IgnoreGroup = group ...
# Include another configuration file.
# This file can include repositories or general configuration options.
# Wildcards in the specified paths will get expanded based on glob rules.
Include = /path/to/config/file
# If set, pacman will only allow installation of packages with the given architectures (e.g. i686, x86_64, etc).
# The special value auto will use the system architecture, provided via “uname -m”.
# If unset, no architecture checks are made.
# NOTE: Packages with the special architecture any can always be installed, as they are meant to be architecture independent.
Architecture = auto &| i686 &| x86_64 | ...
# If set, an external program will be used to download all remote files.
# All instances of %u will be replaced with the download URL.
# If present, instances of %o will be replaced with the local filename, plus a “.part” extension, which allows programs like wget to do file resumes properly.
XferCommand = /path/to/command %u [%o]
# All files listed with a NoUpgrade directive will never be touched during a package install/upgrade, and the new files will be installed with a .pacnew extension.
# These files refer to files in the package archive, so do not include the leading slash (the RootDir) when specifying them.
# Shell-style glob patterns are allowed. It is possible to invert matches by prepending a file with an exclamation mark.
# Inverted files will result in previously blacklisted files being whitelisted again. Subsequent matches will override previous ones.
# A leading literal exclamation mark or backslash needs to be escaped.
NoUpgrade = file ...
# All files listed with a NoExtract directive will never be extracted from a package into the filesystem.
# This can be useful when you don't want part of a package to be installed.
# For example, if your httpd root uses an index.php, then you would not want the index.html file to be extracted from the apache package.
# These files refer to files in the package archive, so do not include the leading slash (the RootDir) when specifying them.
# Shell-style glob patterns are allowed. It is possible to invert matches by prepending a file with an exclamation mark.
# Inverted files will result in previously blacklisted files being whitelisted again. Subsequent matches will override previous ones.
# A leading literal exclamation mark or backslash needs to be escaped.
NoExtract = file ...
# If set to KeepInstalled (the default), the -Sc operation will clean packages that are no longer installed (not present in the local database).
# If set to KeepCurrent, -Sc will clean outdated packages (not present in any sync database).
# The second behavior is useful when the package cache is shared among multiple machines, where the local databases are usually different, but the sync databases in use could be the same.
# If both values are specified, packages are only cleaned if not installed locally and not present in any known sync database.
CleanMethod = KeepInstalled &| KeepCurrent
# Set the default signature verification level. For more information, see Package and Database Signature Checking below.
SigLevel = ...
# Set the signature verification level for installing packages using the "-U" operation on a local file. Uses the value from SigLevel as the default.
LocalFileSigLevel = ...
# Set the signature verification level for installing packages using the "-U" operation on a remote file URL. Uses the value from SigLevel as the default.
RemoteFileSigLevel = ...
# Log action messages through syslog().
# This will insert log entries into /var/log/messages or equivalent.
UseSyslog
# Automatically enable colors only when pacman's output is on a tty.
Color
# Disables progress bars. This is useful for terminals which do not support escape characters.
NoProgressBar
# Performs an approximate check for adequate available disk space before installing packages.
CheckSpace
# Displays name, version and size of target packages formatted as a table for upgrade, sync and remove operations.
VerbosePkgLists
# Disable defaults for low speed limit and timeout on downloads.
# Use this if you have issues downloading files with proxy and/or security gateway.
DisableDownloadTimeout
# Specifies number of concurrent download streams.
# The value needs to be a positive integer.
# If this config option is not set then only one download stream is used (i.e. downloads happen sequentially).
ParallelDownloads = ...
# Specifies the user to switch to for downloading files.
# If this config option is not set then the downloads are done as the user running pacman.
DownloadUser = username
# Disable the default sandbox applied to the process downloading files on Linux systems.
# Useful if experiencing landlock related failures while downloading files when running a Linux kernel that does not support this feature.
DisableSandbox
```
### Repository Sections
Each repository section defines a section name and at least one location where the packages can be found. The section name is defined by the string within square brackets (e.g. `core`). Repository names must be unique, and the name `local` is reserved for the database of installed packages. Locations are defined with the Server directive and follow a URL naming structure. If you want to use a local directory, you can specify the full path with a `file://` prefix, as shown below.
A common way to define DB locations utilizes the Include directive. For each repository defined in the configuration file, a single Include directive can contain a file that lists the servers for that repository.
```ini
[core]
# use this server first
Server = ftp://ftp.archlinux.org/$repo/os/$arch
# next use servers as defined in the mirrorlist below
Include = {sysconfdir}/pacman.d/mirrorlist
# Include another config file.
Include = path
# A full URL to a location where the packages, and signatures (if available) for this repository can be found.
# Cache servers will be tried before any non-cache servers, will not be removed from the server pool for 404 download errors, and will not be used for database files.
CacheServer = url
# A full URL to a location where the database, packages, and signatures (if available) for this repository can be found.
Server = url
# Set the signature verification level for this repository. For more information, see Package and Database Signature Checking below.
SigLevel = ...
# Set the usage level for this repository. This option takes a list of tokens which must be at least one of the following:
# Sync : Enables refreshes for this repository.
# Search : Enables searching for this repository.
# Install : Enables installation of packages from this repository during a --sync operation.
# Upgrade : Allows this repository to be a valid source of packages when performing a --sysupgrade.
# All : Enables all of the above features for the repository. This is the default if not specified.
# Note that an enabled repository can be operated on explicitly, regardless of the Usage level set.
Usage = ...
```
### Signature Checking
The `SigLevel` directive is valid in both the `[options]` and repository sections. If used in `[options]`, it sets a default value for any repository that does not provide the setting.
- If set to `Never`, no signature checking will take place.
- If set to `Optional`, signatures will be checked when present, but unsigned databases and packages will also be accepted.
- If set to `Required`, signatures will be required on all packages and databases.
### Hooks
libalpm provides the ability to specify hooks to run before or after transactions based on the packages and/or files being modified. Hooks consist of a single `[Action]` section describing the action to be run and one or more `[Trigger]` section describing which transactions it should be run for.
Hooks are read from files located in the system hook directory `/usr/share/libalpm/hooks`, and additional custom directories specified in pacman.conf (the default is `/etc/pacman.d/hooks`). The file names are required to have the suffix `.hook`. Hooks are run in alphabetical order of their file name, where the ordering ignores the suffix.
Hooks may be overridden by placing a file with the same name in a higher priority hook directory. Hooks may be disabled by overriding them with a symlink to `/dev/null`.
Hooks must contain at least one `[Trigger]` section that determines which transactions will cause the hook to run. If multiple trigger sections are defined the hook will run if the transaction matches any of the triggers.
```ini
# Example: Force disks to sync to reduce the risk of data corruption
[Trigger]
# Select the type of operation to match targets against.
# May be specified multiple times.
# Installations are considered an upgrade if the package or file is already present on the system regardless of whether the new package version is actually greater than the currently installed version. For Path triggers, this is true even if the file changes ownership from one package to another.
# Operation = Install | Upgrade | Remove
Operation = Install
Operation = Upgrade
Operation = Remove
# Select whether targets are matched against transaction packages or files.
# Type = Path|Package
Type = Package
# The path or package name to match against the active transaction.
# Paths refer to the files in the package archive; the installation root should not be included in the path.
# Shell-style glob patterns are allowed. It is possible to invert matches by prepending a target with an exclamation mark. May be specified multiple times.
# Target = <path|package>
Target = *
[Action]
# An optional description that describes the action being taken by the hook for use in front-end output.
# Description = ...
# Packages that must be installed for the hook to run. May be specified multiple times.
# Depends = <package>
Depends = coreutils
# When to run the hook. Required.
# When = PreTransaction | PostTransaction
When = PostTransaction
# Command to run.
# Command arguments are split on whitespace. Values containing whitespace should be enclosed in quotes.
# Exec = <command>
Exec = /usr/bin/sync
# Causes the transaction to be aborted if the hook exits non-zero. Only applies to PreTransaction hooks.
# AbortOnFail
# Causes the list of matched trigger targets to be passed to the running hook on stdin.
# NeedsTargets
```
## Repositories
You can create your own package repository.
A repository essentially consists of:
- the packages (`.pkg.tar.zst`) and their signatures (`.pkg.tar.zst.sig`)
- a package index (`.db.tar.gz`)
### Adding a repo
To use a repo, add it to your `pacman.conf`:
```ini
# Local Repository
[myrepo]
SigLevel = Optional TrustAll
Server = file:///path/to/myrepo
# Remote Repository
[myrepo]
SigLevel = Optional
Server = http://yourserver.com/myrepo
```
### Package Database
To manage the package data (index) use the `repo-add` and `repo-remove` commands.
`repo-add` will update a package database by reading a built package file. Multiple packages to add can be specified on the command line.
If a matching `.sig` file is found alongside a package file, the signature will automatically be embedded into the database.
`repo-remove` will update a package database by removing the package name specified on the command line. Multiple packages to remove can be specified on the command line.
```sh
repo-add [options] <path-to-db> <package> [<package> ...]
repo-remove [options] <path-to-db> <packagename> [<packagename> ...]
```
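For example (the repository path and package file name here are hypothetical), a freshly built package could be added to a local repository and the database signed like this:

```sh
# Hypothetical paths: update the repo database with a built package and sign the database
repo-add --sign /srv/myrepo/myrepo.db.tar.gz mypackage-1.0-1-x86_64.pkg.tar.zst
```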
| Option | Description |
| ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-q, --quiet` | Force this program to keep quiet and run silently except for warning and error messages. |
| `-s, --sign` | Generate a PGP signature file using GnuPG. This will execute `gpg --detach-sign` on the generated database to generate a detached signature file, using the GPG agent if it is available. |
| `-k, --key <key>` | Specify a key to use when signing packages. Can also be specified using the `GPGKEY` environment variable. If not specified in either location, the default key from the keyring will be used. |
| `-v, --verify` | Verify the PGP signature of the database before updating the database. If the signature is invalid, an error is produced and the update does not proceed. |
| `--nocolor` | Remove color from repo-add and repo-remove output. |
| **`repo-add` ONLY OPTIONS:** | - |
| `-n, --new` | Only add packages that are not already in the database. Warnings will be printed upon detection of existing packages, but they will not be re-added. |
| `-R, --remove` | Remove old package files from the disk when updating their entry in the database. |
| `--include-sigs` | Include package PGP signatures in the repository database (if available) |
## Package Signing
To determine if packages are authentic, pacman uses OpenPGP keys in a web of trust model. Each user also has a unique OpenPGP key, which is generated when you configure `pacman-key`.
Examples of webs of trust:
- Custom packages: Packages made and signed with a local key.
- Unofficial packages: Packages made and signed by a developer. Then, a local key was used to sign the developer's key.
- Official packages: Packages made and signed by a developer. The developer's key was signed by the Arch Linux master keys. You used your key to sign the master keys, and you trust them to vouch for developers.
### Setup
The `SigLevel` option in `/etc/pacman.conf` determines the level of trust required to install a package with `pacman -S`. One can set signature checking globally, or per repository. If `SigLevel` is set globally in the `[options]` section, all packages installed with `pacman -S` will require signing. With the `LocalFileSigLevel` setting from the default `pacman.conf`, any packages you build, and install with `pacman -U`, will not need to be signed using `makepkg`.
For remote packages, the default configuration will only support the installation of packages signed by trusted keys:
```
# /etc/pacman.conf
SigLevel = Required DatabaseOptional TrustedOnly
```
To initialize the pacman keyring run:
```sh
pacman-key --init
```
### Keyring Management
#### Verifying the master keys
The initial setup of keys is achieved using:
```sh
pacman-key --populate
```
OpenPGP keys are too large (2048 bits or more) for humans to work with, so they are usually hashed to create a 40-hex-digit fingerprint which can be used to check by hand that two keys are the same. The last eight digits of the fingerprint serve as a name for the key known as the '(short) key ID' (the last sixteen digits of the fingerprint would be the 'long key ID').
#### Adding developer keys
The official Developers' and Package Maintainers' keys are signed by the master keys, so you do not need to use `pacman-key` to sign them yourself. Whenever pacman encounters a key it does not recognize, it will prompt you to download it from a keyserver configured in `/etc/pacman.d/gnupg/gpg.conf` (or by using the `--keyserver` option on the command line).
Once you have downloaded a developer key, you will not have to download it again, and it can be used to verify any other packages signed by that developer.
> **Note**: The `archlinux-keyring` package, which is a dependency of base, contains the latest keys. However keys can also be updated manually using `pacman-key --refresh-keys` (as root). While doing `--refresh-keys`, your local key will also be looked up on the remote keyserver, and you will receive a message about it not being found. This is nothing to be concerned about.
#### Adding unofficial keys
This method can be utilized to add a key to the pacman keyring, or to enable signed unofficial user repositories.
First, get the key ID (keyid) from its owner. Then add it to the keyring using one of the two methods:
If the key is found on a keyserver, import it with:
```sh
pacman-key --recv-keys keyid
```
If otherwise a link to a keyfile is provided, download it and then run:
```sh
pacman-key --add /path/to/downloaded/keyfile
```
It is recommended to verify the fingerprint, as with any master key or any other key you are going to sign:
```sh
pacman-key --finger keyid
```
Finally, you must locally sign the imported key:
```sh
pacman-key --lsign-key keyid
```
You now trust this key to sign packages.
```sh
pacman -Scc
```

View file

@ -1,190 +1,11 @@
---
arch-wiki: https://wiki.archlinux.org/title/Makepkg
obj: application
rev: 2024-12-19
---
# makepkg
makepkg is a tool for creating [pacman](Pacman.md) packages based on [PKGBUILD](PKGBUILD.md) files.
## Configuration
The system configuration is available in `/etc/makepkg.conf`, but user-specific changes can be made in `$XDG_CONFIG_HOME/pacman/makepkg.conf` or `~/.makepkg.conf`. Also, system wide changes can be made with a drop-in file `/etc/makepkg.conf.d/makepkg.conf`. It is recommended to review the configuration prior to building packages.
> **Tip**: devtools helper scripts for building packages in a clean chroot use the `/usr/share/devtools/makepkg.conf.d/arch.conf` configuration file instead.
```sh
#!/hint/bash
# shellcheck disable=2034
#
# /etc/makepkg.conf
#
#########################################################################
# SOURCE ACQUISITION
#########################################################################
#
#-- The download utilities that makepkg should use to acquire sources
# Format: 'protocol::agent'
DLAGENTS=('file::/usr/bin/curl -qgC - -o %o %u'
'ftp::/usr/bin/curl -qgfC - --ftp-pasv --retry 3 --retry-delay 3 -o %o %u'
'http::/usr/bin/curl -qgb "" -fLC - --retry 3 --retry-delay 3 -o %o %u'
'https::/usr/bin/curl -qgb "" -fLC - --retry 3 --retry-delay 3 -o %o %u'
'rsync::/usr/bin/rsync --no-motd -z %u %o'
'scp::/usr/bin/scp -C %u %o')
# Other common tools:
# /usr/bin/snarf
# /usr/bin/lftpget -c
# /usr/bin/wget
#-- The package required by makepkg to download VCS sources
# Format: 'protocol::package'
VCSCLIENTS=('bzr::breezy'
'fossil::fossil'
'git::git'
'hg::mercurial'
'svn::subversion')
#########################################################################
# ARCHITECTURE, COMPILE FLAGS
#########################################################################
#
CARCH="x86_64"
CHOST="x86_64-pc-linux-gnu"
#-- Compiler and Linker Flags
#CPPFLAGS=""
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fexceptions \
-Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security \
-fstack-clash-protection -fcf-protection \
-fno-omit-frame-pointer -mno-omit-leaf-frame-pointer"
CXXFLAGS="$CFLAGS -Wp,-D_GLIBCXX_ASSERTIONS"
LDFLAGS="-Wl,-O1 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now \
-Wl,-z,pack-relative-relocs"
LTOFLAGS="-flto=auto"
#-- Make Flags: change this for DistCC/SMP systems
MAKEFLAGS="-j8"
#-- Debugging flags
DEBUG_CFLAGS="-g"
DEBUG_CXXFLAGS="$DEBUG_CFLAGS"
#########################################################################
# BUILD ENVIRONMENT
#########################################################################
#
# Makepkg defaults: BUILDENV=(!distcc !color !ccache check !sign)
# A negated environment option will do the opposite of the comments below.
#
#-- distcc: Use the Distributed C/C++/ObjC compiler
#-- color: Colorize output messages
#-- ccache: Use ccache to cache compilation
#-- check: Run the check() function if present in the PKGBUILD
#-- sign: Generate PGP signature file
#
BUILDENV=(!distcc color !ccache check !sign)
#
#-- If using DistCC, your MAKEFLAGS will also need modification. In addition,
#-- specify a space-delimited list of hosts running in the DistCC cluster.
#DISTCC_HOSTS=""
#-- Specify a directory for package building.
BUILDDIR=/tmp/makepkg
#########################################################################
# GLOBAL PACKAGE OPTIONS
# These are default values for the options=() settings
#########################################################################
#
# Makepkg defaults: OPTIONS=(!strip docs libtool staticlibs emptydirs !zipman !purge !debug !lto !autodeps)
# A negated option will do the opposite of the comments below.
#
#-- strip: Strip symbols from binaries/libraries
#-- docs: Save doc directories specified by DOC_DIRS
#-- libtool: Leave libtool (.la) files in packages
#-- staticlibs: Leave static library (.a) files in packages
#-- emptydirs: Leave empty directories in packages
#-- zipman: Compress manual (man and info) pages in MAN_DIRS with gzip
#-- purge: Remove files specified by PURGE_TARGETS
#-- debug: Add debugging flags as specified in DEBUG_* variables
#-- lto: Add compile flags for building with link time optimization
#-- autodeps: Automatically add depends/provides
#
OPTIONS=(strip docs !libtool !staticlibs emptydirs zipman purge !debug lto)
#-- File integrity checks to use. Valid: md5, sha1, sha224, sha256, sha384, sha512, b2
INTEGRITY_CHECK=(sha256)
#-- Options to be used when stripping binaries. See `man strip' for details.
STRIP_BINARIES="--strip-all"
#-- Options to be used when stripping shared libraries. See `man strip' for details.
STRIP_SHARED="--strip-unneeded"
#-- Options to be used when stripping static libraries. See `man strip' for details.
STRIP_STATIC="--strip-debug"
#-- Manual (man and info) directories to compress (if zipman is specified)
MAN_DIRS=({usr{,/local}{,/share},opt/*}/{man,info})
#-- Doc directories to remove (if !docs is specified)
DOC_DIRS=(usr/{,local/}{,share/}{doc,gtk-doc} opt/*/{doc,gtk-doc})
#-- Files to be removed from all packages (if purge is specified)
PURGE_TARGETS=(usr/{,share}/info/dir .packlist *.pod)
#-- Directory to store source code in for debug packages
DBGSRCDIR="/usr/src/debug"
#-- Prefix and directories for library autodeps
LIB_DIRS=('lib:usr/lib' 'lib32:usr/lib32')
#########################################################################
# PACKAGE OUTPUT
#########################################################################
#
# Default: put built package and cached source in build directory
#
#-- Destination: specify a fixed directory where all packages will be placed
PKGDEST=/home/packages
#-- Source cache: specify a fixed directory where source files will be cached
SRCDEST=/home/sources
#-- Source packages: specify a fixed directory where all src packages will be placed
SRCPKGDEST=/home/srcpackages
#-- Log files: specify a fixed directory where all log files will be placed
#LOGDEST=/home/makepkglogs
#-- Packager: name/email of the person or organization building packages
PACKAGER="John Doe <john@doe.com>"
#-- Specify a key to use for package signing
GPGKEY=""
#########################################################################
# COMPRESSION DEFAULTS
#########################################################################
#
COMPRESSGZ=(gzip -c -f -n)
COMPRESSBZ2=(bzip2 -c -f)
COMPRESSXZ=(xz -c -z -)
COMPRESSZST=(zstd -c -T0 -)
COMPRESSLRZ=(lrzip -q)
COMPRESSLZO=(lzop -q)
COMPRESSZ=(compress -c -f)
COMPRESSLZ4=(lz4 -q)
COMPRESSLZ=(lzip -c -f)
#########################################################################
# EXTENSION DEFAULTS
#########################################################################
#
PKGEXT='.pkg.tar.zst'
SRCEXT='.src.tar.gz'
#########################################################################
# OTHER
#########################################################################
#
#-- Command used to run pacman as root, instead of trying sudo and su
#PACMAN_AUTH=()
# vim: set ft=sh ts=2 sw=2 et:
```
## Usage
Make a package:
```shell
@ -217,102 +38,22 @@ makepkg --verifysource
```
## Options
| Option | Description |
| ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-A, --ignorearch` | Ignore a missing or incomplete arch field in the build script |
| `-c, --clean` | Clean up leftover work files and directories after a successful build |
| `-d, --nodeps` | Do not perform any dependency checks. This will let you override and ignore any dependencies required. There is a good chance this option will break the build process if all of the dependencies are not installed |
| `-e, --noextract` | Do not extract source files or run the prepare() function (if present); use whatever source already exists in the $srcdir/ directory. This is handy if you want to go into $srcdir/ and manually patch or tweak code, then make a package out of the result. Keep in mind that creating a patch may be a better solution to allow others to use your [PKGBUILD](PKGBUILD.md). |
| `--skipinteg` | Do not perform any integrity checks (checksum and [PGP](../../../cryptography/GPG.md)) on source files |
| `--skipchecksums` | Do not verify checksums of source files |
| `--skippgpcheck` | Do not verify [PGP](../../../cryptography/GPG.md) signatures of source files |
| `-i, --install` | Install or upgrade the package after a successful build using [pacman](Pacman.md) |
| `-o, --nobuild` | Download and extract files, run the prepare() function, but do not build them. Useful with the `--noextract` option if you wish to tweak the files in $srcdir/ before building |
| `-r, --rmdeps` | Upon successful build, remove any dependencies installed by makepkg during dependency auto-resolution and installation |
| `-s, --syncdeps` | Install missing dependencies using [pacman](Pacman.md). When build-time or run-time dependencies are not found, [pacman](Pacman.md) will try to resolve them. If successful, the missing packages will be downloaded and installed |
| `-C, --cleanbuild` | Remove the $srcdir before building the package |
| `-f, --force` | Overwrite package if it already exists |
| `--noarchive` | Do not create the archive at the end of the build process. This can be useful to test the package() function or if your target distribution does not use [pacman](Pacman.md) |
| `--sign` | Sign the resulting package with [gpg](../../../cryptography/GPG.md) |
| `--nosign` | Do not create a signature for the built package |
| `--key <key>` | Specify a key to use when signing packages |
| `--noconfirm` | (Passed to [pacman](Pacman.md)) Prevent [pacman](Pacman.md) from waiting for user input before proceeding with operations |
## Misc
### Using mold linker
[mold](../../development/mold.md) is a drop-in replacement for ld/lld linkers, which claims to be significantly faster.
To use mold, append `-fuse-ld=mold` to `LDFLAGS`. For example:
```sh
# /etc/makepkg.conf
LDFLAGS="... -fuse-ld=mold"
```
To pass extra options to mold, additionally add those to `LDFLAGS`. For example:
```sh
# /etc/makepkg.conf
LDFLAGS="... -fuse-ld=mold -Wl,--separate-debug-file"
```
To use mold for Rust packages, append `-C link-arg=-fuse-ld=mold` to `RUSTFLAGS`. For example:
```sh
# /etc/makepkg.conf.d/rust.conf
RUSTFLAGS="... -C link-arg=-fuse-ld=mold"
```
### Parallel compilation
The make build system uses the `MAKEFLAGS` environment variable to specify additional options for make. The variable can also be set in the `makepkg.conf` file.
Users with multi-core/multi-processor systems can specify the number of jobs to run simultaneously. This can be accomplished with the use of `nproc` to determine the number of available processors, e.g.
```sh
MAKEFLAGS="--jobs=$(nproc)"
```
Some `PKGBUILDs` specifically override this with `-j1`, because of race conditions in certain versions or simply because it is not supported in the first place.
### Building from files in memory
As compiling requires many I/O operations and handling of small files, moving the working directory to a [tmpfs](../../../linux/filesystems/tmpFS.md) may bring improvements in build times.
The `BUILDDIR` variable can be temporarily exported to makepkg to set the build directory to an existing tmpfs. For example:
```sh
BUILDDIR=/tmp/makepkg makepkg
```
Persistent configuration can be done in `makepkg.conf` by uncommenting the `BUILDDIR` option, which is found at the end of the BUILD ENVIRONMENT section in the default `/etc/makepkg.conf` file. Setting its value to e.g. `BUILDDIR=/tmp/makepkg` will make use of the Arch's default `/tmp` temporary file system.
> **Note:**
> - Avoid compiling larger packages in tmpfs to prevent running out of memory.
> - The tmpfs directory must be mounted without the `noexec` option, otherwise it will prevent built binaries from being executed.
> - Keep in mind that packages compiled in tmpfs will not persist across reboot. Consider setting the `PKGDEST` option appropriately to move the built package automatically to a persistent directory.
### Generate new checksums
Install `pacman-contrib` and run the following command in the same directory as the [PKGBUILD](./PKGBUILD.md) file to generate new checksums:
```sh
updpkgsums
```
`updpkgsums` uses `makepkg --geninteg` to generate the checksums.
The checksums can also be obtained with e.g `sha256sum` and added to the `sha256sums` array by hand.
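For example, with a hypothetical source tarball:

```sh
# Print the checksum to paste into the sha256sums=() array of the PKGBUILD
sha256sum mypackage-1.0.tar.gz
```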
### Build from local source files
If you want to make changes to the source code, you can download and extract the sources without building the package by using the `-o, --nobuild` (download and extract files only) option.
```sh
makepkg -o
```
You can now make changes to the sources and then build the package with the `-e, --noextract` option, which reuses the existing `$srcdir/` directory instead of re-extracting the sources. Use the `-f` option to overwrite already built and existing packages.
```sh
makepkg -ef
```

View file

@ -0,0 +1,42 @@
---
obj: application
repo: https://github.com/google/cadvisor
rev: 2024-12-12
---
# cAdvisor
cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide.
## Prometheus
Add this to [Prometheus](../web/Prometheus.md) config file:
```yml
scrape_configs:
- job_name: cadvisor
scrape_interval: 5s
static_configs:
- targets:
- cadvisor:8080
```
## Docker-Compose
```yml
services:
cadvisor:
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
- /dev/disk/:/dev/disk:ro
ports:
- target: 8080
published: 8080
protocol: tcp
mode: host
privileged: true
image: gcr.io/cadvisor/cadvisor
deploy:
mode: global
```

View file

@ -0,0 +1,178 @@
---
obj: application
repo: https://github.com/prometheus/node_exporter
rev: 2024-12-12
---
# Prometheus Node Exporter
Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.
A dashboard for use with Node Exporter and Grafana can be found [here](https://grafana.com/grafana/dashboards/1860-node-exporter-full/).
## Usage
The node_exporter listens on HTTP port 9100 by default.
### Docker
The `node_exporter` is designed to monitor the host system. Deploying in containers requires extra care in order to avoid monitoring the container itself.
For situations where containerized deployment is needed, some extra flags must be used to allow the `node_exporter` access to the host namespaces.
Be aware that any non-root mount points you want to monitor will need to be bind-mounted into the container.
If you start the container for host monitoring, specify the `path.rootfs` argument. This argument must match the path of the bind mount of the host root. The `node_exporter` will use `path.rootfs` as a prefix to access the host filesystem.
```yml
---
version: '3.8'
services:
node_exporter:
image: quay.io/prometheus/node-exporter:latest
container_name: node_exporter
command:
- '--path.rootfs=/host'
network_mode: host
pid: host
restart: unless-stopped
volumes:
- '/:/host:ro,rslave'
```
On some systems, the timex collector requires an additional Docker flag, `--cap-add=SYS_TIME`, in order to access the required syscalls.
### Prometheus
Configure Prometheus to scrape the exposed node exporter:
```yml
global:
scrape_interval: 15s
scrape_configs:
- job_name: node
static_configs:
- targets: ['localhost:9100']
```
## Configuration
Node Exporter can be configured using CLI arguments.
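For a native (non-containerized) deployment, an invocation might look like the following sketch; the listen address shown is the default and the textfile directory is only illustrative (both flags are listed in the table below):

```sh
node_exporter \
  --web.listen-address=":9100" \
  --collector.textfile.directory="/var/lib/prometheus/node-exporter"
```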
### Options
| **Option** | **Description** |
| ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------- |
| `--path.procfs="/proc"` | procfs mountpoint. |
| `--path.sysfs="/sys"` | sysfs mountpoint. |
| `--path.rootfs="/"` | rootfs mountpoint. |
| `--path.udev.data="/run/udev/data"` | udev data path. |
| `--collector.runit.servicedir="/etc/service"` | Path to runit service directory. |
| `--collector.supervisord.url="http://localhost:9001/RPC2"` | XML RPC endpoint. |
| `--collector.sysctl.include=COLLECTOR.SYSCTL.INCLUDE ...` | Select sysctl metrics to include. |
| `--collector.sysctl.include-info=COLLECTOR.SYSCTL.INCLUDE-INFO ...` | Select sysctl metrics to include as info metrics. |
| `--collector.systemd.unit-include=".+"` | Regexp of systemd units to include. Units must both match include and not match exclude to be included. |
| `--collector.systemd.unit-exclude=".+\\.(automount\|device\|mount\|scope\|slice\|target)"` | Regexp of systemd units to exclude. Units must both match include and not match exclude to be included. |
| `--collector.systemd.enable-task-metrics` | Enables service unit tasks metrics `unit_tasks_current` and `unit_tasks_max`. |
| `--collector.systemd.enable-restarts-metrics` | Enables service unit metric `service_restart_total`. |
| `--collector.systemd.enable-start-time-metrics` | Enables service unit metric `unit_start_time_seconds`. |
| `--collector.tapestats.ignored-devices="^$"` | Regexp of devices to ignore for tapestats. |
| `--collector.textfile.directory="/var/lib/prometheus/node-exporter"` | Directory to read text files with metrics from. |
| `--collector.vmstat.fields="^(oom_kill\|pgpg\|pswp\|pg.*fault).*"` | Regexp of fields to return for vmstat collector. |
| `--collector.arp` | Enable the arp collector (default: enabled). |
| `--collector.bcache` | Enable the bcache collector (default: enabled). |
| `--collector.bonding` | Enable the bonding collector (default: enabled). |
| `--collector.btrfs` | Enable the btrfs collector (default: enabled). |
| `--collector.buddyinfo` | Enable the buddyinfo collector (default: disabled). |
| `--collector.cgroups` | Enable the cgroups collector (default: disabled). |
| `--collector.conntrack` | Enable the conntrack collector (default: enabled). |
| `--collector.cpu` | Enable the cpu collector (default: enabled). |
| `--collector.cpufreq` | Enable the cpufreq collector (default: enabled). |
| `--collector.diskstats` | Enable the diskstats collector (default: enabled). |
| `--collector.dmi` | Enable the dmi collector (default: enabled). |
| `--collector.drbd` | Enable the drbd collector (default: disabled). |
| `--collector.drm` | Enable the drm collector (default: disabled). |
| `--collector.edac` | Enable the edac collector (default: enabled). |
| `--collector.entropy` | Enable the entropy collector (default: enabled). |
| `--collector.ethtool` | Enable the ethtool collector (default: disabled). |
| `--collector.fibrechannel` | Enable the fibrechannel collector (default: enabled). |
| `--collector.filefd` | Enable the filefd collector (default: enabled). |
| `--collector.filesystem` | Enable the filesystem collector (default: enabled). |
| `--collector.hwmon` | Enable the hwmon collector (default: enabled). |
| `--collector.infiniband` | Enable the infiniband collector (default: enabled). |
| `--collector.interrupts` | Enable the interrupts collector (default: disabled). |
| `--collector.ipvs` | Enable the ipvs collector (default: enabled). |
| `--collector.ksmd` | Enable the ksmd collector (default: disabled). |
| `--collector.lnstat` | Enable the lnstat collector (default: disabled). |
| `--collector.loadavg` | Enable the loadavg collector (default: enabled). |
| `--collector.logind` | Enable the logind collector (default: disabled). |
| `--collector.mdadm` | Enable the mdadm collector (default: enabled). |
| `--collector.meminfo` | Enable the meminfo collector (default: enabled). |
| `--collector.meminfo_numa` | Enable the meminfo_numa collector (default: disabled). |
| `--collector.mountstats` | Enable the mountstats collector (default: disabled). |
| `--collector.netclass` | Enable the netclass collector (default: enabled). |
| `--collector.netdev` | Enable the netdev collector (default: enabled). |
| `--collector.netstat` | Enable the netstat collector (default: enabled). |
| `--collector.network_route` | Enable the network_route collector (default: disabled). |
| `--collector.nfs` | Enable the nfs collector (default: enabled). |
| `--collector.nfsd` | Enable the nfsd collector (default: enabled). |
| `--collector.ntp` | Enable the ntp collector (default: disabled). |
| `--collector.nvme` | Enable the nvme collector (default: enabled). |
| `--collector.os` | Enable the os collector (default: enabled). |
| `--collector.perf` | Enable the perf collector (default: disabled). |
| `--collector.powersupplyclass` | Enable the powersupplyclass collector (default: enabled). |
| `--collector.pressure` | Enable the pressure collector (default: enabled). |
| `--collector.processes` | Enable the processes collector (default: disabled). |
| `--collector.qdisc` | Enable the qdisc collector (default: disabled). |
| `--collector.rapl` | Enable the rapl collector (default: enabled). |
| `--collector.runit` | Enable the runit collector (default: disabled). |
| `--collector.schedstat` | Enable the schedstat collector (default: enabled). |
| `--collector.selinux` | Enable the selinux collector (default: enabled). |
| `--collector.slabinfo` | Enable the slabinfo collector (default: disabled). |
| `--collector.sockstat` | Enable the sockstat collector (default: enabled). |
| `--collector.softnet` | Enable the softnet collector (default: enabled). |
| `--collector.stat` | Enable the stat collector (default: enabled). |
| `--collector.supervisord` | Enable the supervisord collector (default: disabled). |
| `--collector.sysctl` | Enable the sysctl collector (default: disabled). |
| `--collector.systemd` | Enable the systemd collector (default: enabled). |
| `--collector.tapestats` | Enable the tapestats collector (default: enabled). |
| `--collector.tcpstat` | Enable the tcpstat collector (default: disabled). |
| `--collector.textfile` | Enable the textfile collector (default: enabled). |
| `--collector.thermal_zone` | Enable the thermal_zone collector (default: enabled). |
| `--collector.time` | Enable the time collector (default: enabled). |
| `--collector.timex` | Enable the timex collector (default: enabled). |
| `--collector.udp_queues` | Enable the udp_queues collector (default: enabled). |
| `--collector.uname` | Enable the uname collector (default: enabled). |
| `--collector.vmstat` | Enable the vmstat collector (default: enabled). |
| `--collector.wifi` | Enable the wifi collector (default: disabled). |
| `--collector.xfs` | Enable the xfs collector (default: enabled). |
| `--collector.zfs` | Enable the zfs collector (default: enabled). |
| `--collector.zoneinfo` | Enable the zoneinfo collector (default: disabled). |
| `--web.telemetry-path="/metrics"` | Path under which to expose metrics. |
| `--web.disable-exporter-metrics` | Exclude metrics about the exporter itself (`promhttp_*`, `process_*`, `go_*`). |
| `--web.max-requests=40` | Maximum number of parallel scrape requests. Use 0 to disable. |
| `--collector.disable-defaults` | Set all collectors to disabled by default. |
| `--runtime.gomaxprocs=1` | The target number of CPUs Go will run on (`GOMAXPROCS`). |
| `--web.systemd-socket` | Use systemd socket activation listeners instead of port listeners (Linux only). |
| `--web.listen-address=:9100 ...` | Addresses on which to expose metrics and web interface. Repeatable for multiple addresses. |
| `--web.config.file=""` | [EXPERIMENTAL] Path to configuration file that can enable TLS or authentication. |
| `--log.level=info` | Only log messages with the given severity or above. One of: `[debug, info, warn, error]`. |
| `--log.format=logfmt` | Output format of log messages. One of: `[logfmt, json]`. |
### Web Configuration
Exporters and services instrumented with the Exporter Toolkit share the same web configuration file format. This is experimental and might change in the future.
To specify which web configuration file to load, use the `--web.config.file` flag.
Basic config file:
```yml
# TLS and basic authentication configuration example.
#
# Additionally, a certificate and a key file are needed.
tls_server_config:
cert_file: server.crt
key_file: server.key
# Usernames and passwords required to connect.
# Passwords are hashed with bcrypt: https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md#about-bcrypt.
basic_auth_users:
alice: $2y$10$mDwo.lAisC94iLAyP81MCesa29IzH37oigHC/42V2pdJlUprsJPze
bob: $2y$10$hLqFl9jSjoAAy95Z/zw8Ye8wkdMBM8c5Bn1ptYqP/AXyV0.oy0S8m
```
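The bcrypt hashes for `basic_auth_users` can be generated with common tools; one possible sketch uses `htpasswd` from the Apache tools (it prompts for the password and prints the hash):

```sh
# -B selects bcrypt, -C 10 sets the cost; tr strips the leading colon and trailing newline
htpasswd -nBC 10 "" | tr -d ':\n'
```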

View file

@ -0,0 +1,8 @@
---
obj: application
website: https://grafana.com
repo: https://github.com/grafana/grafana
---
# Grafana
#wip

View file

@ -0,0 +1,58 @@
---
obj: application
website: https://prometheus.io
repo: https://github.com/prometheus/prometheus
rev: 2024-12-12
---
# Prometheus
Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.
It collects and stores its metrics as time series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.
This data can then be visualized using [Grafana](./Grafana.md).
## Docker Compose
```yml
services:
prometheus:
image: prom/prometheus
ports:
- 9090:9090
volumes:
- ./data:/prometheus
- ./conf:/etc/prometheus
```
## Configuration
Basic prometheus config:
```yml
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: "prometheus"
static_configs:
- targets: ["localhost:9090"]
# Node Exporter Config
- job_name: node_exporter
scrape_interval: 5s
static_configs:
- targets: ['host:9100']
# Job with custom CA
- job_name: custom_ca
static_configs:
- targets: ['endpoint']
tls_config:
ca_file: '/ca_file.crt'
# Job with Bearer Auth
- job_name: bearer_auth
scrape_interval: 120s
static_configs:
- targets: ['endpoint']
bearer_token: 'BEARER_TOKEN'
```
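Besides visualization in [Grafana](./Grafana.md), scraped data can be inspected directly via Prometheus' HTTP API. A minimal sketch against the instance from the compose file above, using the built-in `up` metric (1 if a scrape target is reachable, 0 otherwise):

```sh
curl -s 'http://localhost:9090/api/v1/query?query=up'
```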

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,8 @@
---
obj: application
repo: https://github.com/grafana/loki
website: https://grafana.com/oss/loki
---
# Grafana Loki
#wip

View file

@ -3,7 +3,7 @@ obj: application
wiki: https://en.wikipedia.org/wiki/Git
repo: https://github.com/git/git
website: https://git-scm.com
rev: 2024-12-04
rev: 2024-04-15
---
# Git
@ -286,19 +286,4 @@ git am --abort < patch
## .gitignore
A `.gitignore` file specifies intentionally untracked files that Git should ignore. Files already tracked by Git are not affected.
Each line of this file contains a pattern; files matching a pattern are excluded from Git versioning.
## Git Hooks
Git hooks are custom scripts that run automatically in response to certain Git events or actions. These hooks are useful for automating tasks like code quality checks, running tests, enforcing commit message conventions, and more. Git hooks can be executed at different points in the Git workflow, such as before or after a commit, push, or merge.
Git hooks are stored in the `.git/hooks` directory of your repository. By default, this directory contains example scripts with the `.sample` extension. You can customize these scripts by removing the `.sample` extension and editing them as needed.
Hooks only apply to your local repository. If a hook script exits with a non-zero status, the associated action is aborted (for hooks that run before the action); a minimal example follows the list below.
### Common Git Hooks
- pre-commit
- prepare-commit-msg
- commit-msg
- post-commit
- post-checkout
- pre-rebase
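A minimal sketch of a `pre-commit` hook (the check itself is only illustrative):

```sh
#!/bin/sh
# .git/hooks/pre-commit - must be executable (chmod +x)
# Abort the commit if any staged change still contains a TODO marker.
if git diff --cached | grep -q "TODO"; then
    echo "Commit aborted: staged changes contain TODO markers." >&2
    exit 1
fi
```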

View file

@ -2,9 +2,6 @@
obj: meta/collection
---
# Best Practices
- [URL Suffix API](./URL%20Suffix%20API.md)
# Creational Patterns
- [Abstract Factory](creational/Abstract%20Factory%20Pattern.md)
- [Builder](creational/Builder%20Pattern.md)

View file

@ -1,15 +0,0 @@
# URL Suffix API
When designing a website, consider leveraging URL suffixes to indicate the format of the resource being accessed, similar to how file extensions are used in operating systems.
For example, a webpage located at `/blog/post/id` that renders human-readable content could have its machine-readable data served by appending a format-specific suffix to the same URL, such as `/blog/post/id.json`.
#### Benefits:
1. **Intuitive API from Website Usage**
Users can easily derive API endpoints from existing website URLs by appending the desired format suffix.
2. **Interchangeable Formats**
The same approach allows for multiple formats (e.g., `.json`, `.msgpack`, `.protobuf`) to be supported seamlessly, improving flexibility and usability.
This method simplifies the architecture, enhances consistency, and provides an elegant mechanism to serve both human-readable and machine-readable content from the same base URL.

View file

@ -1,84 +0,0 @@
---
obj: format
website: https://jsonlines.org
extension: "jsonl"
mime: "application/jsonl"
rev: 2024-12-02
---
# JSON Lines
This page describes the JSON Lines text format, also called newline-delimited JSON. JSON Lines is a convenient format for storing structured data that may be processed one record at a time. It works well with unix-style text processing tools and shell pipelines. It's a great format for log files. It's also a flexible format for passing messages between cooperating processes.
The JSON Lines format has three requirements:
- **UTF-8 Encoding**: JSON allows encoding Unicode strings with only ASCII escape sequences, however those escapes will be hard to read when viewed in a text editor. The author of the JSON Lines file may choose to escape characters to work with plain ASCII files. Encodings other than UTF-8 are very unlikely to be valid when decoded as UTF-8 so the chance of accidentally misinterpreting characters in JSON Lines files is low.
- **Each Line is a Valid JSON Value**: The most common values will be objects or arrays, but any JSON value is permitted.
- **Line Separator is `\n`**: This means `\r\n` is also supported because surrounding white space is implicitly ignored when parsing JSON values.
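For example, `jq -c` emits compact single-line JSON, which makes appending records straightforward (the file name is hypothetical):

```sh
echo '{"name": "Gilbert", "score": 24}' | jq -c . >> scores.jsonl
```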
## Better than CSV
```json
["Name", "Session", "Score", "Completed"]
["Gilbert", "2013", 24, true]
["Alexa", "2013", 29, true]
["May", "2012B", 14, false]
["Deloise", "2012A", 19, true]
```
CSV seems so easy that many programmers have written code to generate it themselves, and almost every implementation is different. Handling broken CSV files is a common and frustrating task. CSV has no standard encoding, no standard column separator and multiple character escaping standards. String is the only type supported for cell values, so some programs attempt to guess the correct types.
JSON Lines handles tabular data cleanly and without ambiguity. Cells may use the standard JSON types.
The biggest missing piece is an import/export filter for popular spreadsheet programs so that non-programmers can use this format.
## Self-describing data
```json
{"name": "Gilbert", "session": "2013", "score": 24, "completed": true}
{"name": "Alexa", "session": "2013", "score": 29, "completed": true}
{"name": "May", "session": "2012B", "score": 14, "completed": false}
{"name": "Deloise", "session": "2012A", "score": 19, "completed": true}
```
JSON Lines enables applications to read objects line-by-line, with each line fully describing a JSON object. The example above contains the same data as the tabular example above, but allows applications to split files on newline boundaries for parallel loading, and eliminates any ambiguity if fields are omitted or re-ordered.
## Easy Nested Data
```json
{"name": "Gilbert", "wins": [["straight", "7♣"], ["one pair", "10♥"]]}
{"name": "Alexa", "wins": [["two pair", "4♠"], ["two pair", "9♠"]]}
{"name": "May", "wins": []}
{"name": "Deloise", "wins": [["three of a kind", "5♣"]]}
```
JSON Lines' biggest strength is in handling lots of similar nested data structures. One `.jsonl` file is easier to work with than a directory full of XML files.
If you have large nested structures then reading the JSON Lines text directly isn't recommended. Use the "jq" tool to make viewing large structures easier:
```
grep pair winning_hands.jsonl | jq .
{
"name": "Gilbert",
"wins": [
[
"straight",
"7♣"
],
[
"one pair",
"10♥"
]
]
}
{
"name": "Alexa",
"wins": [
[
"two pair",
"4♠"
],
[
"two pair",
"9♠"
]
]
}
```

View file

@ -1,251 +0,0 @@
---
obj: concept
website: https://ogp.me
rev: 2024-12-16
---
# The Open Graph protocol
The [Open Graph protocol](https://ogp.me/) enables any web page to become a rich object in a social graph. For instance, this is used on Facebook to allow any web page to have the same functionality as any other object on Facebook.
## Basic Metadata
To turn your web pages into graph objects, you need to add basic metadata to your page. Which means that you'll place additional `<meta>` tags in the `<head>` of your web page. The four required properties for every page are:
- `og:title` - The title of your object as it should appear within the graph, e.g., "The Rock".
- `og:type` - The type of your object, e.g., `video.movie`. Depending on the type you specify, other properties may also be required.
- `og:image` - An image URL which should represent your object within the graph.
- `og:url` - The canonical URL of your object that will be used as its permanent ID in the graph, e.g., "https://www.imdb.com/title/tt0117500/".
As an example, the following is the Open Graph protocol markup for [The Rock on IMDB](https://www.imdb.com/title/tt0117500/):
```html
<html prefix="og: https://ogp.me/ns#">
<head>
<title>The Rock (1996)</title>
<meta property="og:title" content="The Rock" />
<meta property="og:type" content="video.movie" />
<meta property="og:url" content="https://www.imdb.com/title/tt0117500/" />
<meta property="og:image" content="https://ia.media-imdb.com/images/rock.jpg" />
...
</head>
...
</html>
```
### Optional Metadata
The following properties are optional for any object and are generally recommended:
- `og:audio` - A URL to an audio file to accompany this object.
- `og:description` - A one to two sentence description of your object.
- `og:determiner` - The word that appears before this object's title in a sentence. An enum of (`a`, `an`, `the`, `""`, `auto`). If `auto` is chosen, the consumer of your data should chose between `a` or `an`. Default is `""` (blank).
- `og:locale` - The locale these tags are marked up in. Of the format `language_TERRITORY`. Default is `en_US`.
- `og:locale:alternate` - An array of other locales this page is available in.
- `og:site_name` - If your object is part of a larger web site, the name which should be displayed for the overall site. e.g., "IMDb".
- `og:video` - A URL to a video file that complements this object.
For example (line-break solely for display purposes):
```html
<meta property="og:audio" content="https://example.com/bond/theme.mp3" />
<meta property="og:description"
content="Sean Connery found fame and fortune as the
suave, sophisticated British agent, James Bond." />
<meta property="og:determiner" content="the" />
<meta property="og:locale" content="en_GB" />
<meta property="og:locale:alternate" content="fr_FR" />
<meta property="og:locale:alternate" content="es_ES" />
<meta property="og:site_name" content="IMDb" />
<meta property="og:video" content="https://example.com/bond/trailer.swf" />
```
## Structured Properties
Some properties can have extra metadata attached to them. These are specified in the same way as other metadata with `property` and `content`, but the `property` will have extra `:`.
The `og:image` property has some optional structured properties:
- `og:image:url` - Identical to `og:image`.
- `og:image:secure_url` - An alternate url to use if the webpage requires HTTPS.
- `og:image:type` - A MIME type for this image.
- `og:image:width` - The number of pixels wide.
- `og:image:height` - The number of pixels high.
- `og:image:alt` - A description of what is in the image (not a caption). If the page specifies an og:image it should specify `og:image:alt`.
A full image example:
```html
<meta property="og:image" content="https://example.com/ogp.jpg" />
<meta property="og:image:secure_url" content="https://secure.example.com/ogp.jpg" />
<meta property="og:image:type" content="image/jpeg" />
<meta property="og:image:width" content="400" />
<meta property="og:image:height" content="300" />
<meta property="og:image:alt" content="A shiny red apple with a bite taken out" />
```
The `og:video` tag has the identical tags as `og:image`. Here is an example:
```html
<meta property="og:video" content="https://example.com/movie.swf" />
<meta property="og:video:secure_url" content="https://secure.example.com/movie.swf" />
<meta property="og:video:type" content="application/x-shockwave-flash" />
<meta property="og:video:width" content="400" />
<meta property="og:video:height" content="300" />
```
The `og:audio` tag only has the first 3 properties available (since size doesn't make sense for sound):
```html
<meta property="og:audio" content="https://example.com/sound.mp3" />
<meta property="og:audio:secure_url" content="https://secure.example.com/sound.mp3" />
<meta property="og:audio:type" content="audio/mpeg" />
```
## Arrays
If a tag can have multiple values, just put multiple versions of the same `<meta>` tag on your page. The first tag (from top to bottom) is given preference during conflicts.
```html
<meta property="og:image" content="https://example.com/rock.jpg" />
<meta property="og:image" content="https://example.com/rock2.jpg" />
```
Put structured properties after you declare their root tag. Whenever another root element is parsed, that structured property is considered to be done and another one is started.
For example:
```html
<meta property="og:image" content="https://example.com/rock.jpg" />
<meta property="og:image:width" content="300" />
<meta property="og:image:height" content="300" />
<meta property="og:image" content="https://example.com/rock2.jpg" />
<meta property="og:image" content="https://example.com/rock3.jpg" />
<meta property="og:image:height" content="1000" />
```
means there are 3 images on this page, the first image is `300x300`, the middle one has unspecified dimensions, and the last one is `1000px` tall.
## Object Types
In order for your object to be represented within the graph, you need to specify its type. This is done using the `og:type` property:
```html
<meta property="og:type" content="website" />
```
When the community agrees on the schema for a type, it is added to the list of global types. All other objects in the type system are CURIEs of the form:
```html
<head prefix="my_namespace: https://example.com/ns#">
<meta property="og:type" content="my_namespace:my_type" />
```
The global types are grouped into verticals. Each vertical has its own namespace. The `og:type` values for a namespace are always prefixed with the namespace and then a period. This is to reduce confusion with user-defined namespaced types which always have colons in them.
### Music
- Namespace URI: [`https://ogp.me/ns/music#`](https://ogp.me/ns/music)
`og:type` values:
[`music.song`](https://ogp.me/#type_music.song)
- `music:duration` - [integer](https://ogp.me/#integer) >=1 - The song's length in seconds.
- `music:album` - [music.album](https://ogp.me/#type_music.album) [array](https://ogp.me/#array) - The album this song is from.
- `music:album:disc` - [integer](https://ogp.me/#integer) >=1 - Which disc of the album this song is on.
- `music:album:track` - [integer](https://ogp.me/#integer) >=1 - Which track this song is.
- `music:musician` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - The musician that made this song.
[`music.album`](https://ogp.me/#type_music.album)
- `music:song` - [music.song](https://ogp.me/#type_music.song) - The song on this album.
- `music:song:disc` - [integer](https://ogp.me/#integer) >=1 - The same as `music:album:disc` but in reverse.
- `music:song:track` - [integer](https://ogp.me/#integer) >=1 - The same as `music:album:track` but in reverse.
- `music:musician` - [profile](https://ogp.me/#type_profile) - The musician that made this song.
- `music:release_date` - [datetime](https://ogp.me/#datetime) - The date the album was released.
[`music.playlist`](https://ogp.me/#type_music.playlist)
- `music:song` - Identical to the ones on [music.album](https://ogp.me/#type_music.album)
- `music:song:disc`
- `music:song:track`
- `music:creator` - [profile](https://ogp.me/#type_profile) - The creator of this playlist.
[`music.radio_station`](https://ogp.me/#type_music.radio_station)
- `music:creator` - [profile](https://ogp.me/#type_profile) - The creator of this station.
### Video
- Namespace URI: [`https://ogp.me/ns/video#`](https://ogp.me/ns/video)
`og:type` values:
[`video.movie`](https://ogp.me/#type_video.movie)
- `video:actor` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Actors in the movie.
- `video:actor:role` - [string](https://ogp.me/#string) - The role they played.
- `video:director` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Directors of the movie.
- `video:writer` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Writers of the movie.
- `video:duration` - [integer](https://ogp.me/#integer) >=1 - The movie's length in seconds.
- `video:release_date` - [datetime](https://ogp.me/#datetime) - The date the movie was released.
- `video:tag` - [string](https://ogp.me/#string) [array](https://ogp.me/#array) - Tag words associated with this movie.
[`video.episode`](https://ogp.me/#type_video.episode)
- `video:actor` - Identical to [video.movie](https://ogp.me/#type_video.movie)
- `video:actor:role`
- `video:director`
- `video:writer`
- `video:duration`
- `video:release_date`
- `video:tag`
- `video:series` - [video.tv_show](https://ogp.me/#type_video.tv_show) - Which series this episode belongs to.
[`video.tv_show`](https://ogp.me/#type_video.tv_show)
A multi-episode TV show. The metadata is identical to [video.movie](https://ogp.me/#type_video.movie).
[`video.other`](https://ogp.me/#type_video.other)
A video that doesn't belong in any other category. The metadata is identical to [video.movie](https://ogp.me/#type_video.movie).
### No Vertical
These are globally defined objects that just don't fit into a vertical but yet are broadly used and agreed upon.
`og:type` values:
[`article`](https://ogp.me/#type_article) - Namespace URI: [`https://ogp.me/ns/article#`](https://ogp.me/ns/article)
- `article:published_time` - [datetime](https://ogp.me/#datetime) - When the article was first published.
- `article:modified_time` - [datetime](https://ogp.me/#datetime) - When the article was last changed.
- `article:expiration_time` - [datetime](https://ogp.me/#datetime) - When the article is out of date after.
- `article:author` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Writers of the article.
- `article:section` - [string](https://ogp.me/#string) - A high-level section name. E.g. Technology
- `article:tag` - [string](https://ogp.me/#string) [array](https://ogp.me/#array) - Tag words associated with this article.
[`book`](https://ogp.me/#type_book) - Namespace URI: [`https://ogp.me/ns/book#`](https://ogp.me/ns/book)
- `book:author` - [profile](https://ogp.me/#type_profile) [array](https://ogp.me/#array) - Who wrote this book.
- `book:isbn` - [string](https://ogp.me/#string) - The [ISBN](https://en.wikipedia.org/wiki/International_Standard_Book_Number)
- `book:release_date` - [datetime](https://ogp.me/#datetime) - The date the book was released.
- `book:tag` - [string](https://ogp.me/#string) [array](https://ogp.me/#array) - Tag words associated with this book.
[`profile`](https://ogp.me/#type_profile) - Namespace URI: [`https://ogp.me/ns/profile#`](https://ogp.me/ns/profile)
- `profile:first_name` - [string](https://ogp.me/#string) - A name normally given to an individual by a parent or self-chosen.
- `profile:last_name` - [string](https://ogp.me/#string) - A name inherited from a family or marriage and by which the individual is commonly known.
- `profile:username` - [string](https://ogp.me/#string) - A short unique string to identify them.
- `profile:gender` - [enum](https://ogp.me/#enum)(male, female) - Their gender.
[`website`](https://ogp.me/#type_website) - Namespace URI: [`https://ogp.me/ns/website#`](https://ogp.me/ns/website)
No additional properties other than the basic ones. Any non-marked up webpage should be treated as `og:type` website.
## Types
The following types are used when defining attributes in Open Graph protocol.
| **Type** | **Description** | **Literals** |
| -------- | ---------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
| Boolean | A Boolean represents a true or false value | true, false, 1, 0 |
| DateTime | A DateTime represents a temporal value composed of a date (year, month, day) and an optional time component (hours, minutes) | ISO 8601 |
| Enum | A type consisting of bounded set of constant string values (enumeration members). | A string value that is a member of the enumeration |
| Float | A 64-bit signed floating point number | All literals that conform to the following formats: `1.234`, `-1.234`, `1.2e3`, `-1.2e3`, `7E-10` |
| Integer | A 32-bit signed integer. | All literals that conform to the following formats: `1234`, `-123` |
| String | A sequence of Unicode characters | All literals composed of Unicode characters with no escape characters |
| URL | A sequence of Unicode characters that identify an Internet resource. | All valid URLs that utilize the `http://` or `https://` protocols |

View file

@ -12,8 +12,6 @@ Installation of Arch Linux is typically done manually following the [Wiki](https
curl -L matmoul.github.io/archfi | bash
```
You can create a (custom) ISO with [archiso](./archiso.md).
## Basic Install
```shell
# Set keyboard

View file

@ -43,41 +43,3 @@ A typical Linux system has, among others, the following directories:
| `/var` | This directory contains files which may change in size, such as spool and [log](../dev/Log.md) files. |
| `/var/cache` | Data cached for programs. |
| `/var/log` | Miscellaneous [log](../dev/Log.md) files. |
## Kernel Commandline
The kernel, the programs running in the initrd and in the host system may be configured at boot via kernel command line arguments.
The current cmdline can be seen at `/proc/cmdline`.
For setting the cmdline use `/etc/kernel/cmdline` if you use UKIs.
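For example, the command line of the running kernel can be inspected directly:

```sh
cat /proc/cmdline
```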
**Common Kernel Cmdline Arguments:**
| Argument | Description |
| ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `quiet` | Parameter understood by both the kernel and the system and service manager to control console log verbosity. |
| `splash` | Show a plymouth splash screen while booting. |
| `init=` | This sets the initial command to be executed by the kernel. If this is not set, or cannot be found, the kernel will try `/sbin/init`, then `/etc/init`, then `/bin/init`, then `/bin/sh` and panic if all of this fails. |
| `ro` and `rw` | The `ro` option tells the kernel to mount the root filesystem as 'read-only'. The `rw` option tells the kernel to mount the root filesystem read/write. This is the default. |
| `resume=...` | This tells the kernel the location of the suspend-to-disk data that you want the machine to resume from after hibernation. Usually, it is the same as your swap partition or file. Example: `resume=/dev/hda2` |
| `panic=N` | By default, the kernel will not reboot after a panic, but this option will cause a kernel reboot after `N` seconds (if `N` is greater than zero). This panic timeout can also be set by `echo N > /proc/sys/kernel/panic` |
| `plymouth.enable=` | May be used to disable the Plymouth boot splash. For details, see plymouth. |
| `vconsole.keymap=, vconsole.keymap_toggle=, vconsole.font=, vconsole.font_map=, vconsole.font_unimap=` | Parameters understood by the virtual console setup logic. For details, see `vconsole.conf` |
| `luks=, rd.luks=` | Defaults to "yes". If "no", disables the crypt mount generator entirely. `rd.luks=` is honored only in the initrd while `luks=` is honored by both the main system and in the initrd. |
| `luks.crypttab=, rd.luks.crypttab=` | Defaults to "yes". If "no", causes the generator to ignore any devices configured in `/etc/crypttab` (`luks.uuid=` will still work however). `rd.luks.crypttab=` is honored only in initrd while `luks.crypttab=` is honored by both the main system and in the initrd. |
| `luks.uuid=, rd.luks.uuid=` | Takes a LUKS superblock UUID as argument. This will activate the specified device as part of the boot process as if it was listed in `/etc/crypttab`. This option may be specified more than once in order to set up multiple devices. `rd.luks.uuid=` is honored only in the initrd, while `luks.uuid=` is honored by both the main system and in the initrd. |
| `luks.name=, rd.luks.name=` | Takes a LUKS super block UUID followed by an `=` and a name. This implies `rd.luks.uuid=` or `luks.uuid=` and will additionally make the LUKS device given by the UUID appear under the provided name. `rd.luks.name=` is honored only in the initrd, while `luks.name=` is honored by both the main system and in the initrd. |
| `luks.options=, rd.luks.options=` | Takes a LUKS super block UUID followed by an `=` and a string of options separated by commas as argument. This will override the options for the given UUID. If only a list of options, without a UUID, is specified, they apply to any UUIDs not specified elsewhere, and without an entry in `/etc/crypttab`. `rd.luks.options=` is honored only by initial RAM disk (initrd) while `luks.options=` is honored by both the main system and in the initrd. |
| `fstab=, rd.fstab=` | Defaults to "yes". If "no", causes the generator to ignore any mounts or swap devices configured in `/etc/fstab`. `rd.fstab=` is honored only in the initrd, while `fstab=` is honored by both the main system and the initrd. |
| `root=` | Configures the operating system's root filesystem to mount when running in the initrd. This accepts a device node path (usually `/dev/disk/by-uuid/...` or similar), or the special values `gpt-auto`, `fstab`, and `tmpfs`. Use `gpt-auto` to explicitly request automatic root file system discovery via `systemd-gpt-auto-generator`. Use `fstab` to explicitly request automatic root file system discovery via the initrd `/etc/fstab` rather than via kernel command line. Use `tmpfs` in order to mount a tmpfs file system as root file system of the OS. This is useful in combination with `mount.usr=` in order to combine a volatile root file system with a separate, immutable `/usr/` file system. Also see `systemd.volatile=` below. |
| `rootfstype=` | Takes the root filesystem type that will be passed to the mount command. `rootfstype=` is honored by the initrd. |
| `mount.usr=` | Takes the `/usr/` filesystem to be mounted by the initrd. If `mount.usrfstype=` or `mount.usrflags=` is set, then `mount.usr=` will default to the value set in `root=`. Otherwise, this parameter defaults to the `/usr/` entry found in `/etc/fstab` on the root filesystem. |
| `mount.usrfstype=` | Takes the `/usr` filesystem type that will be passed to the mount command. |
| `systemd.volatile=` | Controls whether the system shall boot up in volatile mode. |
| `systemd.swap=` | Takes a boolean argument or enables the option if specified without an argument. If disabled, causes the generator to ignore any swap devices configured in `/etc/fstab`. Defaults to enabled. |
## Misc
### Cause a kernel panic
To manually cause a kernel panic run:
```sh
echo c > /proc/sysrq-trigger
```

View file

@ -1,105 +0,0 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Plymouth
rev: 2024-12-20
---
# Plymouth
Plymouth is a project from Fedora providing a flicker-free graphical boot process. It relies on kernel mode setting (KMS) to set the native resolution of the display as early as possible, then provides an eye-candy splash screen leading all the way up to the login manager.
## Setup
By default, Plymouth logs the boot messages into `/var/log/boot.log`, and does not show the graphical splash screen.
- If you want to see the splash screen, append `splash` to the kernel parameters.
- If you want silent boot, append `quiet` too.
- If you want to disable the logging, append `plymouth.boot-log=/dev/null`. Alternatively, add `plymouth.nolog` which also disables console redirection.
To start Plymouth on early boot, you must configure your initramfs generator to create images including Plymouth.
For mkinitcpio, add plymouth to the `HOOKS` array in `mkinitcpio.conf`:
```sh
# /etc/mkinitcpio.conf
HOOKS=(... plymouth ...)
```
If you are using the `systemd` hook, it must be before `plymouth`.
Furthermore, make sure to place `plymouth` before the `encrypt` or `sd-encrypt` hook if your system is encrypted with dm-crypt.
## Configuration
Plymouth can be configured in file `/etc/plymouth/plymouthd.conf`. You can see the default values in `/usr/share/plymouth/plymouthd.defaults`.
### Changing the theme
Plymouth comes with a selection of themes:
- BGRT: A variation of Spinner that keeps the OEM logo if available (BGRT stands for Boot Graphics Resource Table)
- Fade-in: "Simple theme that fades in and out with shimmering stars"
- Glow: "Corporate theme with pie chart boot progress followed by a glowing emerging logo"
- Script: "Script example plugin" (Despite the description seems to be a quite nice Arch logo theme)
- Solar: "Space theme with violent flaring blue star"
- Spinner: "Simple theme with a loading spinner"
- Spinfinity: "Simple theme that shows a rotating infinity sign in the center of the screen"
- Tribar: "Text mode theme with tricolor progress bar"
- (Text: "Text mode theme with tricolor progress bar")
- (Details: "Verbose fallback theme")
The theme can be changed by editing the configuration file:
```ini
# /etc/plymouth/plymouthd.conf
[Daemon]
Theme=theme
```
or by running:
```sh
plymouth-set-default-theme -R theme
```
Every time a theme is changed, the initrd must be rebuilt. The `-R` option ensures that it is rebuilt (otherwise regenerate the initramfs manually).
### Install new themes
All currently installed themes can be listed by using this command:
```sh
plymouth-set-default-theme -l
# or:
ls /usr/share/plymouth/themes
```
### Show delay
Plymouth has a configuration option to delay the splash screen:
```ini
# /etc/plymouth/plymouthd.conf
[Daemon]
ShowDelay=5
```
On systems that boot quickly, you may only see a flicker of your splash theme before your DM or login prompt is ready. You can set `ShowDelay` to an interval (in seconds) longer than your boot time to prevent this flicker and only show a blank screen. The default is 0 seconds, so you should not need to change this to a different value to see your splash earlier during boot.
### HiDPI
Edit the configuration file:
```ini
# /etc/plymouth/plymouthd.conf
[Daemon]
DeviceScale=an-integer-scaling-factor
```
and regenerate the initramfs.
## Misc
### Show boot messages
During boot you can switch to boot messages by pressing the `Esc` key.
### Disable with kernel parameters
If you experience problems during boot, you can temporary disable Plymouth with the following kernel parameters:
```
plymouth.enable=0 disablehooks=plymouth
```

View file

@ -1,202 +0,0 @@
---
obj: concept
arch-wiki: https://wiki.archlinux.org/title/Zram
source: https://docs.kernel.org/admin-guide/blockdev/zram.html
wiki: https://en.wikipedia.org/wiki/Zram
rev: 2024-12-20
---
# Zram
zram, formerly called compcache, is a Linux kernel module for creating a compressed block device in RAM, i.e. a RAM disk with on-the-fly disk compression. The block device created with zram can then be used for swap or as a general-purpose RAM disk. The two most common uses for zram are for the storage of temporary files (`/tmp`) and as a swap device. Initially, zram had only the latter function, hence the original name "compcache" ("compressed cache").
## Usage as swap
Initially the created zram block device does not reserve or use any RAM. Only when pages need to be swapped out are they compressed and moved into the zram block device. The zram block device will then dynamically grow or shrink as required.
Even when assuming that zstd only achieves a conservative 1:2 compression ratio (real world data shows a common ratio of 1:3), zram will offer the advantage of being able to store more content in RAM than without memory compression.
### Manually
To set up one zstd compressed zram device with half the system memory capacity and a higher-than-normal priority (only for the current session):
```sh
modprobe zram
zramctl /dev/zram0 --algorithm zstd --size "$(($(grep -Po 'MemTotal:\s*\K\d+' /proc/meminfo)/2))KiB"
mkswap -U clear /dev/zram0
swapon --discard --priority 100 /dev/zram0
```
To disable it again, either reboot or run:
```sh
swapoff /dev/zram0
modprobe -r zram
# re-enable zswap in case it was disabled before setting up zram
echo 1 > /sys/module/zswap/parameters/enabled
```
For a permanent solution, use a method from one of the following sections.
### Using a udev rule
The example below describes how to set up swap on zram automatically at boot with a single udev rule. No extra package should be needed to make this work.
Explicitly load the module at boot:
```ini
# /etc/modules-load.d/zram.conf
zram
```
Create the following udev rule adjusting the disksize attribute as necessary:
```
# /etc/udev/rules.d/99-zram.rules
ACTION=="add", KERNEL=="zram0", ATTR{comp_algorithm}="zstd", ATTR{disksize}="4G", RUN="/usr/bin/mkswap -U clear /dev/%k", TAG+="systemd"
```
Add `/dev/zram` to your fstab with a higher than default priority:
```
# /etc/fstab
/dev/zram0 none swap defaults,discard,pri=100 0 0
```
### Using zram-generator
`zram-generator` provides `systemd-zram-setup@zramN.service` units to automatically initialize zram devices without users needing to enable/start the template or its instances.
To use it, install `zram-generator`, and create `/etc/systemd/zram-generator.conf` with the following:
```ini
# /etc/systemd/zram-generator.conf
[zram0]
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
```
`zram-size` is the size (in MiB) of the zram device; you can use `ram` to represent the total memory.
`compression-algorithm` specifies the algorithm used to compress data in the zram device.
`cat /sys/block/zram0/comp_algorithm` lists the available compression algorithms (the currently selected one is shown in brackets).
Then run `systemctl daemon-reload` and start your configured `systemd-zram-setup@zramN.service` instance (`N` matching the numerical instance-ID; in the example it is `systemd-zram-setup@zram0.service`).
You can check the swap status of your configured `/dev/zramN` device(s) by reading the unit status of your `systemd-zram-setup@zramN.service` instance(s), by using `zramctl`, or by using `swapon`.
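For example, assuming the `[zram0]` section above, the following commands start the generated unit and verify the result:
```sh
# Pick up the new generator configuration and start the instance for zram0
systemctl daemon-reload
systemctl start systemd-zram-setup@zram0.service

# Verify that the zram swap device is active
systemctl status systemd-zram-setup@zram0.service
zramctl
swapon --show
```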
## zramctl
zramctl is used to quickly set up zram device parameters, to reset zram devices, and to query the status of used zram devices.
Usage:
```sh
# Get info:
# If no option is given, all non-zero size zram devices are shown.
zramctl [options]
# Reset zram:
zramctl -r zramdev...
# Print name of first unused zram device:
zramctl -f
# Set up a zram device:
zramctl [-f | zramdev] [-s size] [-t number] [-a algorithm]
```
### Options
| Option | Description |
| ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `-a, --algorithm lzo/lz4/lz4hc/deflate/842/zstd` | Set the compression algorithm to be used for compressing data in the zram device. The list of supported algorithms could be inaccurate as it depends on the current kernel configuration. A basic overview can be obtained by using the command `cat /sys/block/zram0/comp_algorithm`; |
| `-f, --find` | Find the first unused zram device. If a `--size` argument is present, then initialize the device. |
| `-n, --noheadings` | Do not print a header line in status output. |
| `-o, --output list` | Define the status output columns to be used. If no output arrangement is specified, then a default set is used. See below for list of all supported columns. |
| `--output-all` | Output all available columns. |
| `--raw` | Use the raw format for status output. |
| `-r, --reset` | Reset the options of the specified zram device(s). Zram device settings can be changed only after a reset. |
| `-s, --size size` | Create a zram device of the specified size. Zram devices are aligned to memory pages; when the requested size is not a multiple of the page size, it will be rounded up to the next multiple. When not otherwise specified, the unit of the size parameter is bytes. |
| `-t, --streams number` | Set the maximum number of compression streams that can be used for the device. The default is use all CPUs and one stream for kernels older than 4.6. |
### Output Columns
| Output | Description |
| ------------ | ------------------------------------------------------------------ |
| `NAME` | zram device name |
| `DISKSIZE` | limit on the uncompressed amount of data |
| `DATA` | uncompressed size of stored data |
| `COMPR` | compressed size of stored data |
| `ALGORITHM` | the selected compression algorithm |
| `STREAMS` | number of concurrent compress operations |
| `ZERO-PAGES` | empty pages with no allocated memory |
| `TOTAL` | all memory including allocator fragmentation and metadata overhead |
| `MEM-LIMIT` | memory limit used to store compressed data |
| `MEM-USED` | memory zram has consumed to store compressed data |
| `MIGRATED` | number of objects migrated by compaction |
| `COMP-RATIO` | compression ratio: DATA/TOTAL |
| `MOUNTPOINT` | where the device is mounted |
## Misc
### Checking zram statistics
Use zramctl. Example:
```
$ zramctl
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 zstd 32G 1.9G 318.6M 424.9M 16 [SWAP]
```
- `DISKSIZE = 32G`: this zram device will store up to 32 GiB of uncompressed data.
- `DATA = 1.9G`: currently, 1.9 GiB (uncompressed) of data is being stored in this zram device.
- `COMPR = 318.6M`: the 1.9 GiB of uncompressed data was compressed to 318.6 MiB.
- `TOTAL = 424.9M`: including metadata, the 1.9 GiB of uncompressed data is using up 424.9 MiB of physical RAM.
### Multiple zram devices
By default, loading the zram module creates a single `/dev/zram0` device.
If you need more than one `/dev/zram` device, specify the amount using the `num_devices` kernel module parameter or add them as needed afterwards.
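A short sketch of both approaches (the device count is illustrative):
```sh
# Load the module with two devices right away
modprobe zram num_devices=2

# Or add another device later through the hot-add interface;
# reading the file prints the index of the newly created device
cat /sys/class/zram-control/hot_add
```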
### Optimizing swap on zram
Since zram behaves differently than disk swap, we can configure the system's swap to take full potential of the zram advantages:
```ini
# /etc/sysctl.d/99-vm-zram-parameters.conf
vm.swappiness = 180
vm.watermark_boost_factor = 0
vm.watermark_scale_factor = 125
vm.page-cluster = 0
```
### Enabling a backing device for a zram block
zram can be configured to push incompressible pages to a specified block device when under memory pressure.
To add a backing device manually:
```sh
echo /dev/sdX > /sys/block/zram0/backing_dev
```
To add a backing device to your zram block device using `zram-generator`, update `/etc/systemd/zram-generator.conf` with the following under your `[zramX]` device you want the backing device added to:
```ini
# /etc/systemd/zram-generator.conf
writeback-device=/dev/disk/by-partuuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
```
### Using zram for non-swap purposes
zram can also be used as a generic RAM-backed block device, e.g. a `/dev/ram` with less physical memory usage, but slightly lower performance. However, there are some caveats:
- There is no partition table support (no automatic creation of `/dev/zramXpY`).
- The block size is fixed to 4 KiB.
The obvious way around this is to stack a loop device on top of the zram device, using [losetup](../applications/cli/system/losetup.md), specifying the desired block size using the `-b` option and the `-P` option to process partition tables and automatically create the partition loop devices.
```sh
zramctl -f -s <SIZE>G
```
Copy the disk image to the new `/dev/zramX`, then create a loop device. If the disk image has a partition table, the block size of the loop device must match the block size used by the partition table, which is typically 512 or 4096 bytes.
```sh
losetup -f -b 512 -P /dev/zramx
mount /dev/loop0p1 /mnt/boot
mount /dev/loop0p2 /mnt/root
```

View file

@ -1,426 +0,0 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Archiso
repo: https://gitlab.archlinux.org/archlinux/archiso
rev: 2024-12-17
---
# archiso
Archiso is a highly-customizable tool for building Arch Linux live CD/USB ISO images; the official monthly images are built with it. It can be used as the basis for rescue systems, Linux installers or other live systems. This article explains how to install archiso and how to configure it to control aspects of the resulting ISO image, such as included packages and files. Technical requirements and build steps can be found in the official project documentation. Archiso is implemented as a number of bash scripts. The core component of archiso is the `mkarchiso` command; its options are documented in `mkarchiso -h` and not covered here.
## Prepare a custom profile
Archiso comes with two profiles, `releng` and `baseline`.
- `releng` is used to create the official monthly installation ISO. It can be used as a starting point for creating a customized ISO image.
- `baseline` is a minimal configuration, that includes only the bare minimum packages required to boot the live environment from the medium.
If you wish to adapt or customize one of archiso's shipped profiles, copy it from `/usr/share/archiso/configs/profile-name/` to a writable directory with a name of your choice. For example:
```sh
cp -r /usr/share/archiso/configs/releng/ archlive
```
## Profile structure
An archiso profile contains configuration that defines the resulting ISO image. The profile structure is documented in `/usr/share/doc/archiso/README.profile.rst`.
An archiso profile consists of several configuration files and a directory for files to be added to the resulting image.
```
profile/
├── airootfs/
├── efiboot/
├── syslinux/
├── grub/
├── bootstrap_packages.arch
├── packages.arch
├── pacman.conf
└── profiledef.sh
```
The required files and directories are explained in the following sections.
### profiledef.sh
This file describes several attributes of the resulting image and is a place for customization to the general behavior of the image; a minimal example sketch follows the list of variables below.
The image file is constructed from some of the variables in ``profiledef.sh``: ``<iso_name>-<iso_version>-<arch>.iso``
(e.g. ``archlinux-202010-x86_64.iso``).
* ``iso_name``: The first part of the name of the resulting image (defaults to ``mkarchiso``)
* ``iso_label``: The ISO's volume label (defaults to ``MKARCHISO``)
* ``iso_publisher``: A free-form string that states the publisher of the resulting image (defaults to ``mkarchiso``)
* ``iso_application``: A free-form string that states the application (i.e. its use-case) of the resulting image (defaults
to ``mkarchiso iso``)
* ``iso_version``: A string that states the version of the resulting image (defaults to ``""``)
* ``install_dir``: A string (maximum eight characters long, which **must** consist of ``[a-z0-9]``) that states the
directory on the resulting image into which all files will be installed (defaults to ``mkarchiso``)
* ``buildmodes``: An optional list of strings, that state the build modes that the profile uses. Only the following are
understood:
- ``bootstrap``: Build a compressed file containing a minimal system to bootstrap from
- ``iso``: Build a bootable ISO image (implicit default, if no ``buildmodes`` are set)
- ``netboot``: Build artifacts required for netboot using iPXE
* ``bootmodes``: A list of strings, that state the supported boot modes of the resulting image. Only the following are
understood:
- ``bios.syslinux.mbr``: Syslinux for x86 BIOS booting from a disk
- ``bios.syslinux.eltorito``: Syslinux for x86 BIOS booting from an optical disc
- ``uefi-ia32.grub.esp``: GRUB for IA32 UEFI booting from a disk
- ``uefi-ia32.grub.eltorito``: GRUB for IA32 UEFI booting from an optical disc
- ``uefi-x64.grub.esp``: GRUB for x64 UEFI booting from a disk
- ``uefi-x64.grub.eltorito``: GRUB for x64 UEFI booting from an optical disc
- ``uefi-ia32.systemd-boot.esp``: systemd-boot for IA32 UEFI booting from a disk
- ``uefi-ia32.systemd-boot.eltorito``: systemd-boot for IA32 UEFI booting from an optical disc
- ``uefi-x64.systemd-boot.esp``: systemd-boot for x64 UEFI booting from a disk
- ``uefi-x64.systemd-boot.eltorito``: systemd-boot for x64 UEFI booting from an optical disc
Note that BIOS El Torito boot mode must always be listed before UEFI El Torito boot mode.
* ``arch``: The architecture (e.g. ``x86_64``) to build the image for. This is also used to resolve the name of the packages
file (e.g. ``packages.x86_64``)
* ``pacman_conf``: The ``pacman.conf`` to use to install packages to the work directory when creating the image (defaults to
the host's ``/etc/pacman.conf``)
* ``airootfs_image_type``: The image type to create. The following options are understood (defaults to ``squashfs``):
- ``squashfs``: Create a squashfs image directly from the airootfs work directory
- ``ext4+squashfs``: Create an ext4 partition, copy the airootfs work directory to it and create a squashfs image from it
- ``erofs``: Create an EROFS image for the airootfs work directory
* ``airootfs_image_tool_options``: An array of options to pass to the tool to create the airootfs image. ``mksquashfs`` and
``mkfs.erofs`` are supported. See ``mksquashfs --help`` or ``mkfs.erofs --help`` for all possible options
* ``bootstrap_tarball_compression``: An array containing the compression program and arguments passed to it for
compressing the bootstrap tarball (defaults to ``cat``). For example: ``bootstrap_tarball_compression=(zstd -c -T0 --long -19)``.
* ``file_permissions``: An associative array that lists files and/or directories who need specific ownership or
permissions. The array's keys contain the path and the value is a colon separated list of owner UID, owner GID and
access mode. E.g. ``file_permissions=(["/etc/shadow"]="0:0:400")``. When directories are listed with a trailing backslash (``/``) **all** files and directories contained within the listed directory will have the same owner UID, owner GID, and access mode applied recursively.
### bootstrap_packages.arch
All packages to be installed into the environment of a bootstrap image have to be listed in an architecture specific file (e.g. ``bootstrap_packages.x86_64``), which resides top-level in the profile.
Packages have to be listed one per line. Lines starting with a ``#`` and blank lines are ignored.
This file is required when generating bootstrap images using the ``bootstrap`` build mode.
### packages.arch
All packages to be installed into the environment of an ISO image have to be listed in an architecture specific file (e.g. ``packages.x86_64``), which resides top-level in the profile.
Packages have to be listed one per line. Lines starting with a ``#`` and blank lines are ignored.
This file is required when generating ISO images using the ``iso`` or ``netboot`` build modes.
### pacman.conf
A configuration for pacman is required per profile.
Some configuration options will not be used or will be modified:
* ``CacheDir``: the profile's option is **only** used if it is not the default (i.e. ``/var/cache/pacman/pkg``) and if it is
not the same as the system's option. In all other cases the system's pacman cache is used.
* ``HookDir``: it is **always** set to the ``/etc/pacman.d/hooks`` directory in the work directory's airootfs to allow
modification via the profile and ensure interoperability with hosts using dracut
* ``RootDir``: it is **always** removed, as setting it explicitly otherwise refers to the host's root filesystem (see
``man 8 pacman`` for further information on the ``-r`` option used by ``pacstrap``)
* ``LogFile``: it is **always** removed, as setting it explicitly otherwise refers to the host's pacman log file (see
``man 8 pacman`` for further information on the ``-r`` option used by ``pacstrap``)
* ``DBPath``: it is **always** removed, as setting it explicitly otherwise refers to the host's pacman database (see
``man 8 pacman`` for further information on the ``-r`` option used by ``pacstrap``)
### airootfs
This optional directory may contain files and directories that will be copied to the work directory of the resulting image's root filesystem.
The files are copied before packages are being installed to work directory location.
Ownership and permissions of files and directories from the profile's ``airootfs`` directory are not preserved. The mode will be ``644`` for files and ``755`` for directories, all of them will be owned by root. To set custom ownership and/or permissions, use ``file_permissions`` in ``profiledef.sh``.
With this overlay structure it is possible to e.g. create users and set passwords for them, by providing ``airootfs/etc/passwd``, ``airootfs/etc/shadow``, ``airootfs/etc/gshadow`` (see ``man 5 passwd``, ``man 5 shadow`` and ``man 5 gshadow`` respectively).
If user home directories exist in the profile's ``airootfs``, their ownership and (and top-level) permissions will be altered according to the provided information in the password file.
### Boot loader configuration
A profile may contain configuration for several boot loaders. These reside in specific top-level directories, which are explained in the following subsections.
The following *custom template identifiers* are understood and will be replaced according to the assignments of the respective variables in ``profiledef.sh``:
* ``%ARCHISO_LABEL%``: Set this using the ``iso_label`` variable in ``profiledef.sh``.
* ``%INSTALL_DIR%``: Set this using the ``install_dir`` variable in ``profiledef.sh``.
* ``%ARCH%``: Set this using the ``arch`` variable in ``profiledef.sh``.
Additionally, there are *custom template identifiers* that have hardcoded values set by ``mkarchiso`` and cannot be overridden:
* ``%ARCHISO_UUID%``: the ISO 9660 modification date in UTC, i.e. its "UUID",
* ``%ARCHISO_SEARCH_FILENAME%``: file path on ISO 9660 that can be used by GRUB to find the ISO volume
(**for GRUB ``.cfg`` files only**).
### efiboot
This directory is mandatory when the ``uefi-x64.systemd-boot.esp`` or ``uefi-x64.systemd-boot.eltorito`` bootmodes are selected in ``profiledef.sh``. It contains configuration for `systemd-boot`.
> **Note:** The directory is a top-level representation of the systemd-boot configuration directories and files found in the root of an EFI system partition.
The *custom template identifiers* are **only** understood in the boot loader entry `.conf` files (i.e. **not** in ``loader.conf``).
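As a sketch, a systemd-boot entry shipped in the profile could use these identifiers as follows (the file name and options are illustrative; mkarchiso substitutes the `%...%` placeholders at build time):
```sh
# Create an illustrative boot entry inside the profile's efiboot directory
cat > efiboot/loader/entries/01-archiso-x86_64-linux.conf <<'EOF'
title   Arch Linux install medium (x86_64, UEFI)
linux   /%INSTALL_DIR%/boot/x86_64/vmlinuz-linux
initrd  /%INSTALL_DIR%/boot/x86_64/initramfs-linux.img
options archisobasedir=%INSTALL_DIR% archisolabel=%ARCHISO_LABEL%
EOF
```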
### syslinux
This directory is mandatory when the ``bios.syslinux.mbr`` or the ``bios.syslinux.eltorito`` bootmodes are selected in ``profiledef.sh``.
It contains configuration files for `syslinux` or `isolinux` , or `pxelinux` used in the resulting image.
The *custom template identifiers* are understood in all `.cfg` files in this directory.
### grub
This directory is mandatory when any of the following bootmodes is used in ``profiledef.sh``:
- ``uefi-ia32.grub.esp`` or
- ``uefi-ia32.grub.eltorito`` or
- ``uefi-x64.grub.esp`` or
- ``uefi-x64.grub.eltorito``
It contains configuration files for `GRUB` used in the resulting image.
## Customization
### Selecting packages
Edit `packages.x86_64` to select which packages are to be installed on the live system image, listing packages line by line.
### Custom local repository
To add packages not located in standard Arch repositories (e.g. packages from the AUR or customized with the ABS), set up a custom local repository and add your custom packages to it. Then add your repository to `pacman.conf` as follows:
```ini
[customrepo]
SigLevel = Optional TrustAll
Server = file:///path/to/customrepo
```
> **Note**: The ordering within `pacman.conf` matters. To give top priority to your custom repository, place it above the other repository entries.
> This `pacman.conf` is only used for building the image. It will not be used in the live environment.
> Ensure that the repository is located in a directory accessible by the chrooted mkarchiso process, such as `/tmp`, to ensure the repository is read correctly during the image building process.
### Packages from multilib
To install packages from the multilib repository, simply uncomment that repository in `pacman.conf`.
### Adding files to image
The `airootfs` directory is used as the starting point for the root directory (`/`) of the live system on the image. All its contents will be copied over to the working directory before packages are installed.
Place any custom files and/or directories in the desired location under `airootfs/`. For example, if you have a set of iptables scripts on your current system you want to be used on your live image, copy them over as such:
```sh
cp -r /etc/iptables archlive/airootfs/etc
```
Similarly, some care is required for special configuration files that reside somewhere down the hierarchy. Missing parts of the directory structure can be simply created with `mkdir`.
> **Tip**: To add a file to the install user's home directory, place it in `archlive/airootfs/root/`. To add a file to all other users' home directories, place it in `archlive/airootfs/etc/skel/`.
> **Note**: Custom files that conflict with those provided by packages will be overwritten unless a package specifies them as backup files.
By default, permissions will be 644 for files and 755 for directories. All of them will be owned by the root user. To set different permissions or ownership for specific files and/or folders, use the `file_permissions` associative array in `profiledef.sh`.
### Adding repositories to the image
To add a repository that can be used in the live environment, create a suitably modified `pacman.conf` and place it in `archlive/airootfs/etc/`.
If the repository also uses a key, place the key in `archlive/airootfs/usr/share/pacman/keyrings/`. The key file name must end with `.gpg`. Additionally, the key must be trusted. This can be accomplished by creating a GnuPG exported trust file in the same directory. The file name must end with `-trusted`. The first field is the key fingerprint, and the second is the trust. You can reference `/usr/share/pacman/keyrings/archlinux-trusted` for an example.
#### archzfs example
The files in this example are:
```
airootfs
├── etc
│ ├── pacman.conf
│ └── pacman.d
│ └── archzfs_mirrorlist
└── usr
└── share
└── pacman
└── keyrings
├── archzfs.gpg
└── archzfs-trusted
```
`airootfs/etc/pacman.conf`:
```ini
[archzfs]
Include = /etc/pacman.d/archzfs_mirrorlist
```
`airootfs/etc/pacman.d/archzfs_mirrorlist`:
```
Server = https://archzfs.com/$repo/$arch
Server = https://mirror.sum7.eu/archlinux/archzfs/$repo/$arch
Server = https://mirror.biocrafting.net/archlinux/archzfs/$repo/$arch
Server = https://mirror.in.themindsmaze.com/archzfs/$repo/$arch
Server = https://zxcvfdsa.com/archzfs/$repo/$arch
```
`airootfs/usr/share/pacman/keyrings/archzfs-trusted`:
```
DDF7DB817396A49B2A2723F7403BD972F75D9D76:4:
```
`archzfs.gpg` itself can be obtained directly from the repository site at https://archzfs.com/archzfs.gpg.
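For example, assuming the profile layout shown above, the key could be fetched with:
```sh
curl -L -o airootfs/usr/share/pacman/keyrings/archzfs.gpg https://archzfs.com/archzfs.gpg
```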
### Kernel
Although both archiso's included profiles only have linux, ISOs can be made to include other or even multiple kernels.
First, edit `packages.x86_64` to include kernel package names that you want. When mkarchiso runs, it will include all `work_dir/airootfs/boot/vmlinuz-*` and `work_dir/boot/initramfs-*.img` files in the ISO (and additionally in the FAT image used for UEFI booting).
mkinitcpio presets by default will build fallback initramfs images. For an ISO, the main initramfs image would not typically include the autodetect hook, thus making an additional fallback image unnecessary. To prevent the creation of a fallback initramfs image, so that it does not take up space or slow down the build process, place a custom preset in `archlive/airootfs/etc/mkinitcpio.d/pkgbase.preset`. For example, for linux-lts:
`archlive/airootfs/etc/mkinitcpio.d/linux-lts.preset`:
```
PRESETS=('archiso')
ALL_kver='/boot/vmlinuz-linux-lts'
ALL_config='/etc/mkinitcpio.conf'
archiso_image="/boot/initramfs-linux-lts.img"
```
Finally create boot loader configuration to allow booting the kernel(s).
### Boot loader
Archiso supports syslinux for BIOS booting and GRUB or systemd-boot for UEFI booting. Refer to the articles of the boot loaders for information on their configuration syntax.
mkarchiso expects that GRUB configuration is in the `grub` directory, systemd-boot configuration is in the `efiboot` directory and syslinux configuration in the `syslinux` directory.
### UEFI Secure Boot
If you want to make your archiso bootable on a UEFI Secure Boot enabled environment, you must use a signed boot loader.
### systemd units
To enable systemd services/sockets/timers for the live environment, you need to manually create the symbolic links just as `systemctl enable` does it.
For example, to enable `gpm.service`, which contains `WantedBy=multi-user.target`, run:
```sh
mkdir -p archlive/airootfs/etc/systemd/system/multi-user.target.wants
ln -s /usr/lib/systemd/system/gpm.service archlive/airootfs/etc/systemd/system/multi-user.target.wants/
```
The required symlinks can be found out by reading the systemd unit, or if you have the service installed, by enabling it and observing the systemctl output.
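If you are unsure which target a unit should be wanted by, its `[Install]` section can be inspected on a system where the package is installed, for example:
```sh
# Print the full unit file, including the [Install] section with WantedBy=
systemctl cat gpm.service
```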
### Login manager
Starting X at boot is done by enabling your login manager's systemd service. If you do not know which `.service` to enable, you can easily find out in case you are using the same program on the system you build your ISO on. Just use:
```sh
ls -l /etc/systemd/system/display-manager.service
```
Now create the same symlink in `archlive/airootfs/etc/systemd/system/`.
### Changing automatic login
The configuration for getty's automatic login is located under `airootfs/etc/systemd/system/getty@tty1.service.d/autologin.conf`.
You can modify this file to change the auto login user:
```ini
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin username --noclear %I 38400 linux
```
Or remove `autologin.conf` altogether to disable auto login.
If you are using the serial console, create `airootfs/etc/systemd/system/serial-getty@ttyS0.service.d/autologin.conf` with the following content instead:
```ini
[Service]
ExecStart=
ExecStart=-/sbin/agetty -o '-p -- \\u' --noclear --autologin root --keep-baud 115200,57600,38400,9600 - $TERM
```
### Users and passwords
To create a user which will be available in the live environment, you must manually edit `archlive/airootfs/etc/passwd`, `archlive/airootfs/etc/shadow`, `archlive/airootfs/etc/group` and `archlive/airootfs/etc/gshadow`.
> **Note**: If these files exist, they must contain the root user and group.
For example, to add a user `archie`. Add them to `archlive/airootfs/etc/passwd` following the passwd syntax:
```
root:x:0:0:root:/root:/usr/bin/zsh
archie:x:1000:1000::/home/archie:/usr/bin/zsh
```
> **Note**: The passwd file must end with a newline.
Add the user to `archlive/airootfs/etc/shadow` following the syntax of shadow. If you want to define a password for the user, generate a password hash with `openssl passwd -6` and add it to the file. For example:
```
root::14871::::::
archie:$6$randomsalt$cij4/pJREFQV/NgAgh9YyBIoCRRNq2jp5l8lbnE5aLggJnzIRmNVlogAg8N6hEEecLwXHtMQIl2NX2HlDqhCU1:14871::::::
```
Otherwise, you may keep the password field empty, meaning that the user can log in with no password.
Add the user's group and the groups which they will part of to `archlive/airootfs/etc/group` according to group syntax. For example:
```
root:x:0:root
adm:x:4:archie
wheel:x:10:archie
uucp:x:14:archie
archie:x:1000:
```
Create the appropriate `archlive/airootfs/etc/gshadow` according to gshadow:
```
root:!*::root
archie:!*::
```
Make sure `/etc/shadow` and `/etc/gshadow` have the correct permissions:
`archlive/profiledef.sh`:
```
file_permissions=(
...
["/etc/shadow"]="0:0:0400"
["/etc/gshadow"]="0:0:0400"
)
```
After package installation, mkarchiso will create all specified home directories for users listed in `archlive/airootfs/etc/passwd` and copy `work_directory/x86_64/airootfs/etc/skel/*` to them. The copied files will have proper user and group ownership.
### Changing the distribution name used in the ISO
Start by copying the file `/etc/os-release` into the `etc/` folder in the rootfs. Then, edit the file accordingly. You can also change the name inside of GRUB and syslinux.
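Assuming the profile directory is called `archlive` as in the earlier examples, this could look like:
```sh
mkdir -p archlive/airootfs/etc
cp /etc/os-release archlive/airootfs/etc/os-release
# then edit archlive/airootfs/etc/os-release as desired
```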
### Adjusting the size of the root file system
When installing packages in the live environment, for example on hardware requiring DKMS modules, the default size of the root file system might not allow the download and installation of such packages due to its size.
To adjust the size on the fly:
```sh
mount -o remount,size=SIZE /run/archiso/cowspace
```
To adjust the size at the bootloader stage (as a kernel cmdline by pressing `e` or `Tab`) use the boot option:
```sh
cow_spacesize=SIZE
```
To adjust the size while building an image add the boot option to:
- `efiboot/loader/entries/*.cfg`
- `grub/*.cfg`
- `syslinux/*.cfg`
## Build the ISO
Build an ISO which you can then burn to CD or USB by running:
```sh
mkarchiso -v -w /path/to/work_dir -o /path/to/out_dir /path/to/profile/
```
Replace `/path/to/profile/` with the path to your custom profile, or with `/usr/share/archiso/configs/releng/` if you are building an unmodified profile.
When run, the script will download and install the packages you specified to `work_directory/x86_64/airootfs`, create the kernel and init images, apply your customizations and finally build the ISO into the output directory.
> **Tip**: If memory allows, it is preferred to place the working directory on `tmpfs`.
```sh
mkdir ./work
mount -t tmpfs -o size=1G tmpfs ./work
mkarchiso -v -w ./work -o /path/to/out_dir /path/to/profile/
umount -r ./work
```
### Removal of work directory
> **Warning**: If mkarchiso is interrupted, run `findmnt` to make sure there are no mount binds before deleting it - otherwise, you may lose data (e.g. an external device mounted at `/run/media/user/label` gets bound within `work/x86_64/airootfs/run/media/user/label` during the build process).
The temporary files are copied into the work directory. After successfully building the ISO, the work directory and its contents can be deleted. E.g.:
```sh
rm -rf /path/to/work_dir
```

View file

@ -14,8 +14,6 @@ obj: meta/collection
- [MergerFS](MergerFS.md)
- [LVM](./LVM.md)
- [LUKS](./LUKS.md)
- [tmpFS](./tmpFS.md)
- [overlayfs](./overlayfs.md)
## Network
- [SSHFS](SSHFS.md)

View file

@ -1,60 +0,0 @@
---
obj: filesystem
arch-wiki: https://wiki.archlinux.org/title/Overlay_filesystem
source: https://docs.kernel.org/filesystems/overlayfs.html
wiki: https://en.wikipedia.org/wiki/OverlayFS
rev: 2024-12-19
---
# OverlayFS
Overlayfs allows one, usually read-write, directory tree to be overlaid onto another, read-only directory tree. All modifications go to the upper, writable layer. This type of mechanism is most often used for live CDs but there is a wide variety of other uses.
The implementation differs from other "union filesystem" implementations in that after a file is opened all operations go directly to the underlying, lower or upper, filesystems. This simplifies the implementation and allows native performance in these cases.
## Usage
To mount an overlay use the following mount options:
```sh
mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged
```
> **Note**:
> - The working directory (`workdir`) needs to be an empty directory on the same filesystem as the upper directory.
> - The lower directory can be read-only or could be an overlay itself.
> - The upper directory is normally writable.
> - The workdir is used to prepare files as they are switched between the layers.
The lower directory can actually be a list of directories separated by `:`; all changes in the merged directory are still reflected in the upper directory.
### Read-only overlay
Sometimes, it is only desired to create a read-only view of the combination of two or more directories. In that case, it can be created in an easier manner, as the directories `upper` and `work` are not required:
```sh
mount -t overlay overlay -o lowerdir=/lower1:/lower2 /merged
```
When `upperdir` is not specified, the overlay is automatically mounted as read-only.
## Example:
```sh
mount -t overlay overlay -o lowerdir=/lower1:/lower2:/lower3,upperdir=/upper,workdir=/work /merged
```
> **Note**: The order of the lower directories is such that the rightmost is the lowest; thus the upper directory is on top of the first directory in the left-to-right list of lower directories, NOT on top of the last directory in the list, as the order might seem to suggest.
The above example will have the order:
- /upper
- /lower1
- /lower2
- /lower3
To add an overlayfs entry to `/etc/fstab` use the following format:
```
# /etc/fstab
overlay /merged overlay noauto,x-systemd.automount,lowerdir=/lower,upperdir=/upper,workdir=/work 0 0
```
The `noauto` and `x-systemd.automount` mount options are necessary to prevent systemd from hanging on boot because it failed to mount the overlay. The overlay is now mounted whenever it is first accessed and requests are buffered until it is ready.

View file

@ -1,30 +0,0 @@
---
obj: filesystem
wiki: https://en.wikipedia.org/wiki/Tmpfs
arch-wiki: https://wiki.archlinux.org/title/Tmpfs
---
# tmpFS
tmpfs is a temporary filesystem that resides in memory and/or swap partition(s). Mounting directories as tmpfs can be an effective way of speeding up accesses to their files, or to ensure that their contents are automatically cleared upon reboot.
## Usage
**Create a tmpfs**:
`mount -t tmpfs -o [OPTIONS] tmpfs [MOUNT_POINT]`
**Resize a tmpfs**:
`mount -t tmpfs -o remount,size=<NEW_SIZE> tmpfs [MOUNT_POINT]`
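A short sketch of both commands (the mount point and sizes are illustrative):
```sh
# Mount a 2 GiB tmpfs with world-writable, sticky permissions
mkdir -p /mnt/scratch
mount -t tmpfs -o size=2G,mode=1777 tmpfs /mnt/scratch

# Grow it later without unmounting
mount -t tmpfs -o remount,size=4G tmpfs /mnt/scratch
```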
### Options
| **Option** | **Description** |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `size=bytes` | Specify an upper limit on the size of the filesystem. Size is given in bytes, rounded up to entire pages. A `k`, `m`, or `g` suffix can be used for Ki, Mi, or Gi. Use `%` to specify a percentage of physical RAM. Default: 50%. Set to `0` to remove the limit. |
| `nr_blocks=blocks` | Similar to `size`, but in blocks of `PAGE_CACHE_SIZE`. Accepts `k`, `m`, or `g` suffixes, but not `%`. |
| `nr_inodes=inodes` | Sets the maximum number of inodes. Default is half the number of physical RAM pages or the number of lowmem RAM pages (whichever is smaller). Use `k`, `m`, or `g` suffixes, but `%` is not supported. Set to `0` to remove the limit. |
| `noswap` | Disables swap. Remounts must respect the original settings. By default, swap is enabled. |
| `mode=mode` | Sets the initial permissions of the root directory. |
| `gid=gid` | Sets the initial group ID of the root directory. |
| `uid=uid` | Sets the initial user ID of the root directory. |
| `huge=huge_option` | Sets the huge table memory allocation policy for all files (if `CONFIG_TRANSPARENT_HUGEPAGE` is enabled). Options: `never` (default), `always`, `within_size`, `advise`, `deny`, or `force`. |
| `mpol=mpol_option` | Sets NUMA memory allocation policy (if `CONFIG_NUMA` is enabled). Options: `default`, `prefer:node`, `bind:nodelist`, `interleave`, `interleave:nodelist`, or `local`. Example: `mpol=bind:0-3,5,7,9-15`. |

View file

@ -1,7 +1,5 @@
---
obj: concept
arch-wiki: https://wiki.archlinux.org/title/Mkinitcpio
rev: 2024-12-16
---
# mkinitcpio
@ -10,11 +8,20 @@ The initial ramdisk is in essence a very small environment (early userspace) whi
## Configuration
The primary configuration file for _mkinitcpio_ is `/etc/mkinitcpio.conf`. Additionally, preset definitions are provided by kernel packages in the `/etc/mkinitcpio.d` directory (e.g. `/etc/mkinitcpio.d/linux.preset`).
- `MODULES` : Kernel modules to be loaded before any boot hooks are run.
- `BINARIES` : Additional binaries to be included in the initramfs image.
- `FILES` : Additional files to be included in the initramfs image.
- `HOOKS` : Hooks are scripts that execute in the initial ramdisk.
- `COMPRESSION` : Used to compress the initramfs image.
`MODULES`
Kernel modules to be loaded before any boot hooks are run.
`BINARIES`
Additional binaries to be included in the initramfs image.
`FILES`
Additional files to be included in the initramfs image.
`HOOKS`
Hooks are scripts that execute in the initial ramdisk.
`COMPRESSION`
Used to compress the initramfs image.
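A minimal sketch of how these variables appear in `/etc/mkinitcpio.conf` (the values are illustrative and close to the Arch defaults):
```sh
# /etc/mkinitcpio.conf
MODULES=()
BINARIES=()
FILES=()
HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block filesystems fsck)
COMPRESSION="zstd"
```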
### MODULES
The `MODULES` array is used to specify modules to load before anything else is done.
@ -54,28 +61,3 @@ The default `HOOKS` setting should be sufficient for most simple, single disk se
| **lvm2** | Adds the device mapper kernel module and the `lvm` tool to the image. |
| **fsck** | Adds the fsck binary and file system-specific helpers to allow running fsck against your root device (and `/usr` if separate) prior to mounting. If added after the **autodetect** hook, only the helper specific to your root file system will be added. Usage of this hook is **strongly** recommended, and it is required with a separate `/usr` partition. It is highly recommended that if you include this hook that you also include any necessary modules to ensure your keyboard will work in early userspace. |
| **filesystems** | This includes necessary file system modules into your image. This hook is **required** unless you specify your file system modules in `MODULES`. |
### UKI
A Unified Kernel Image (UKI) is a single executable file that can be directly booted by UEFI firmware or automatically sourced by boot-loaders.
In essence, a UKI combines all the necessary components for the operating system to start up, including:
- EFI stub loader
- Kernel command line
- Microcode updates
- Initramfs image (initial RAM file system)
- Kernel image itself
- Splash screen
To enable the UKI edit `/etc/mkinitcpio.d/linux.preset`:
```sh
default_uki="/boot/EFI/Linux/arch-linux.efi"
fallback_uki="/boot/EFI/Linux/arch-linux-fallback.efi"
```
Build the Unified Kernel Image:
```sh
mkinitcpio --allpresets
```

View file

@ -1,57 +0,0 @@
---
obj: application
repo: https://github.com/Foxboron/sbctl
rev: 2024-12-16
---
# sbctl (Secure Boot Manager)
sbctl intends to be a user-friendly secure boot key manager capable of setting up secure boot, offer key management capabilities, and keep track of files that needs to be signed in the boot chain.
## Usage
Install the necessary packages:
```sh
pacman -S sbctl sbsigntools
```
Check that Secure Boot "Setup Mode" is "Enabled" in UEFI:
```sh
sbctl status
```
Create your own signing keys:
```sh
sbctl create-keys
```
Sign the systemd bootloader:
```sh
sbctl sign -s \
-o /usr/lib/systemd/boot/efi/systemd-bootx64.efi.signed \
/usr/lib/systemd/boot/efi/systemd-bootx64.efi
```
Enroll your custom keys:
```sh
sbctl enroll-keys
# Enroll and include Microsoft Keys
sbctl enroll-keys --microsoft
```
Sign EFI files:
```sh
sbctl sign -s /boot/EFI/Linux/arch-linux.efi
sbctl sign -s /boot/EFI/Linux/arch-linux-fallback.efi
sbctl sign -s /efi/EFI/systemd/systemd-bootx64.efi
sbctl sign -s /efi/EFI/Boot/bootx64.efi
```
Verify signature of EFI files:
```sh
sbctl verify
```
Resign everything:
```sh
sbctl sign-all
```

View file

@ -12,7 +12,6 @@ systemd is a suite of basic building blocks for a [Linux](../Linux.md) system. I
See also:
- [Systemd-Timers](Systemd-Timers.md)
- [systemd-boot](systemd-boot.md)
- [systemd-cryptenroll](systemd-cryptenroll.md)
## Using Units
Units commonly include, but are not limited to, services (_.service_), mount points (_.mount_), devices (_.device_) and sockets (_.socket_).

View file

@ -1,7 +1,6 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Systemd-boot
rev: 2024-12-17
---
# Systemd Boot
@ -21,8 +20,7 @@ bootctl update
```
## Configuration
The loader configuration is stored in the file `_esp_/loader/loader.conf`.
The loader configuration is stored in the file `_esp_/loader/loader.conf`
Example:
```
default arch.conf
@ -32,7 +30,7 @@ editor no
```
### Adding loaders
_systemd-boot_ will search for boot menu items in `_esp_/loader/entries/*.conf`.
_systemd-boot_ will search for boot menu items in `_esp_/loader/entries/*.conf`
Values:
- `title` : Name
@ -59,18 +57,4 @@ systemctl reboot --boot-loader-entry=arch-custom.conf
Firmware Setup:
```shell
systemctl reboot --firmware-setup
```
## Keybindings
While the menu is shown, the following keys are active:
| Key | Description |
| ------------- | ----------------------------------------------------------------------------------- |
| `Up` / `Down` | Select menu entry |
| `Enter` | Boot the selected entry |
| `d` | select the default entry to boot (stored in a non-volatile EFI variable) |
| `t` / `T` | adjust the timeout (stored in a non-volatile EFI variable) |
| `e` | edit the option line (kernel command line) for this bootup to pass to the EFI image |
| `Q` | quit |
| `v` | show the systemd-boot and UEFI version |
| `P` | print the current configuration to the console |
```

View file

@ -1,130 +0,0 @@
---
obj: application
arch-wiki: https://wiki.archlinux.org/title/Systemd-cryptenroll
rev: 2024-12-16
---
# systemd-cryptenroll
systemd-cryptenroll allows enrolling smartcards, FIDO2 tokens and Trusted Platform Module security chips into LUKS devices, as well as regular passphrases. These devices are later unlocked by `systemd-cryptsetup@.service`, using the enrolled tokens.
## Usage
### **List keyslots**
systemd-cryptenroll can list the keyslots in a LUKS device, similar to cryptsetup luksDump, but in a more user-friendly format.
```sh
$ systemd-cryptenroll /dev/disk
SLOT TYPE
0 password
1 tpm2
```
### **Erasing keyslots**
```sh
systemd-cryptenroll /dev/disk --wipe-slot=SLOT
```
Where `SLOT` can be:
- A single keyslot index
- A type of keyslot, which will erase all keyslots of that type. Valid types are `empty`, `password`, `recovery`, `pkcs11`, `fido2`, `tpm2`
- A combination of all of the above, separated by commas
- The string `all`, which erases all keyslots on the device. This option can only be used when enrolling another device or passphrase at the same time.
The `--wipe-slot` operation can be used in combination with all enrollment options, which is useful to update existing device enrollments:
```sh
systemd-cryptenroll /dev/disk --wipe-slot=fido2 --fido2-device=auto
```
### **Enrolling passphrases**
#### Regular password
This is equivalent to `cryptsetup luksAddKey`.
```sh
systemd-cryptenroll /dev/disk --password
```
#### Recovery key
Recovery keys are mostly identical to passphrases, but are computer-generated instead of being chosen by a human, and thus have a guaranteed high entropy. The key uses a character set that is easy to type in, and may be scanned off screen via a QR code.
A recovery key is designed to be used as a fallback if the hardware tokens are unavailable, and can be used in place of regular passphrases whenever they are required.
```sh
systemd-cryptenroll /dev/disk --recovery-key
```
### Enrolling hardware devices
The `--<type>-device` options must point to a valid device path of their respective type. A list of available devices can be obtained by passing `list` to such an option. Alternatively, if you only have a single device of the desired type connected, `auto` can be used to automatically select it.
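For example, the available devices of each type can be listed with:
```sh
systemd-cryptenroll --pkcs11-token-uri=list
systemd-cryptenroll --fido2-device=list
systemd-cryptenroll --tpm2-device=list
```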
#### PKCS#11 tokens or smartcards
The token or smartcard must contain a RSA key pair, which will be used to encrypt the generated key that will be used to unlock the volume.
```sh
systemd-cryptenroll /dev/disk --pkcs11-token-uri=device
```
#### FIDO2 tokens
Any FIDO2 token that supports the "hmac-secret" extension can be used with systemd-cryptenroll. The following example would enroll a FIDO2 token to an encrypted LUKS2 block device, requiring only user presence as authentication.
```sh
systemd-cryptenroll /dev/disk --fido2-device=device --fido2-with-client-pin=no
```
In addition, systemd-cryptenroll supports using the token's built-in user verification methods:
- `--fido2-with-user-presence` defines whether to verify the user presence (i.e. by tapping the token) before unlocking, defaults to `yes`
- `--fido2-with-user-verification` defines whether to require user verification before unlocking, defaults to `no`
By default, the cryptographic algorithm used when generating a FIDO2 credential is es256 which denotes Elliptic Curve Digital Signature Algorithm (ECDSA) over NIST P-256 with SHA-256. If desired and provided by the FIDO2 token, a different cryptographic algorithm can be specified during enrollment.
Suppose that a previous FIDO2 token has already been enrolled and the user wishes to enroll another. The following generates an `eddsa` credential, which denotes EdDSA over Curve25519 with SHA-512, and authenticates the device with a previously enrolled token instead of a password.
```sh
systemd-cryptenroll /dev/disk --fido2-device=device --fido2-credential-algorithm=eddsa --unlock-fido2-device=auto
```
#### Trusted Platform Module
systemd-cryptenroll has native support for enrolling LUKS keys in TPMs. It requires the following:
- `tpm2-tss` must be installed,
- A LUKS2 device (currently the default type used by cryptsetup),
- If you intend to use this method on your root partition, some tweaks need to be made to the initramfs
To begin, run the following command to list your installed TPMs and the driver in use:
```sh
systemd-cryptenroll --tpm2-device=list
```
> **Tip**: If your computer has multiple TPMs installed, specify the one you wish to use with `--tpm2-device=/path/to/tpm2_device` in the following steps.
A key may be enrolled in both the TPM and the LUKS volume using only one command. The following example generates a new random key, adds it to the volume so it can be used to unlock it in addition to the existing keys, and binds this new key to PCR 7 (Secure Boot state):
```sh
systemd-cryptenroll --tpm2-device=auto /dev/sdX
```
where `/dev/sdX` is the full path to the encrypted LUKS volume. Use `--unlock-key-file=/path/to/keyfile` if the LUKS volume is unlocked by a keyfile instead of a passphrase.
> **Note**: It is possible to require a PIN to be entered in addition to the TPM state being correct. Simply add the option `--tpm2-with-pin=yes` to the command above and enter the PIN when prompted.
To check that the new key was enrolled, dump the LUKS configuration and look for a systemd-tpm2 token entry, as well as an additional entry in the Keyslots section:
```sh
cryptsetup luksDump /dev/sdX
```
To test that the key works, run the following command while the LUKS volume is closed:
```sh
systemd-cryptsetup attach mapping_name /dev/sdX none tpm2-device=auto
```
where `mapping_name` is your chosen name for the volume once opened.
##### Modules
If your TPM requires a kernel module, edit `/etc/mkinitcpio.conf` and edit the `MODULES` line to add the module used by your TPM. For instance:
```sh
MODULES=(tpm_tis)
```