arch-wiki: https://wiki.archlinux.org/title/ZFS
website:   https://openzfs.org/wiki/Main_Page
repo:      https://github.com/openzfs/zfs
obj:       filesystem

ZFS

ZFS is an advanced filesystem.

ZPool

Create a pool:

zpool create -f -m <mount> <pool> [raidz|raidz2|raidz3|mirror] <ids>
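
A hypothetical example: a two-disk mirror named "tank" mounted at /mnt/tank (disk ids are placeholders):

zpool create -f -m /mnt/tank tank mirror ata-ST4000VN008-AAAA ata-ST4000VN008-BBBB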

Pool Status

zpool list
zpool status -v

Import Pool

zpool import -l -d /dev/disk/by-id <pool>  # Import pool by id, loading encryption keys (-l)
zpool import -a                            # Import all available pools
zpool import -a -f                         # Import all available pools, forcing if necessary
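
Running zpool import without arguments only lists the pools available for import without importing anything:

zpool import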

Export Pool

zpool export <pool>

Extend Pool

zpool add <pool> <device-id>
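
Note that this adds a new top-level vdev to the pool. A sketch with hypothetical names, growing "tank" by a two-disk mirror:

zpool add tank mirror ata-ST4000VN008-CCCC ata-ST4000VN008-DDDD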

Destroy pool / dataset

zpool destroy <pool>
zfs destroy <pool>/<dataset>

Rename a pool

zpool export oldname
zpool import oldname newname

Upgrade pool

Upgrading enables new feature flags on the pool. This is one-way: older ZFS implementations may no longer be able to import an upgraded pool.

zpool upgrade <pool>

Automount

Set the cachefile property so that zfs-import-cache.service picks up the pool at boot:

zpool set cachefile=/etc/zfs/zpool.cache <pool>
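
For example, for a hypothetical pool "tank", followed by enabling the import and mount units:

zpool set cachefile=/etc/zfs/zpool.cache tank
systemctl enable zfs-import-cache.service zfs-mount.service zfs.target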

Datasets

Dataset Management

Create Dataset

zfs create <nameofzpool>/<nameofdataset>

Create encrypted dataset:

openssl rand 32 > /path/to/key # Generate key

zfs create -o encryption=on -o keyformat=raw -o keylocation=file://$KEYLOCATION "$DATASET"
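
A complete sketch with hypothetical paths and names (the file:// keylocation must be an absolute path):

openssl rand 32 > /root/tank-secret.key       # Generate a 32-byte raw key
chmod 600 /root/tank-secret.key               # Restrict access to root
zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///root/tank-secret.key tank/secret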

Change encryption keys:

zfs change-key -l [-o keylocation=value] [-o keyformat=value] [-o pbkdf2iters=value] filesystem
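
For example, switching a hypothetical dataset from a keyfile to an interactive passphrase:

zfs change-key -o keylocation=prompt -o keyformat=passphrase tank/secret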

Load all encryption keys

zfs load-key -a

Load all keys at boot with systemd
/etc/systemd/system/zfs-load-key.service:

[Unit]
Description=Load encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/zfs load-key -a
StandardInput=tty-force

[Install]
WantedBy=zfs-mount.service
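
Enable the unit so keys are loaded before datasets are mounted:

systemctl enable zfs-load-key.service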

Destroy dataset:

# Destroy dataset
zfs destroy <pool>/<dataset>

# Destroy dataset recursively, including all child datasets
zfs destroy -r <pool>/<dataset>
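
A dry run (-n) with -v prints what would be destroyed without deleting anything, e.g. for a hypothetical dataset:

zfs destroy -nvr tank/data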

Mount dataset:

zfs mount <pool>/<dataset>
zfs unmount <pool>/<dataset>

Snapshots

Create snapshot:

zfs snapshot <pool>/<dataset>@<snapshot>
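
For example, a recursive, date-stamped snapshot of a hypothetical dataset and all its children:

zfs snapshot -r tank/home@$(date +%Y-%m-%d)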

Rollback snapshot:

# Rollback to snapshot and destroy any newer snapshots
zfs rollback -r <pool>/<dataset>@<snapshot>
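
Listing the available snapshots first helps pick the rollback target (dataset name hypothetical):

zfs list -t snapshot -r tank/data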

Show differences between snapshots:

zfs diff -F tank/test@before tank/test

The -F flag also shows the type of each file that has changed:

Symbol  Meaning
B       Block device
C       Character device
/       Directory
>       Door
|       Named pipe
@       Symbolic link
P       Event port
=       Socket
F       Regular file

Send and receive:

zfs send -v -w <pool>/<dataset>@<snapshot> | zfs recv <pool>/<newdataset>
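
A sketch of a raw (-w) full send followed by an incremental (-i) one, with hypothetical pool and snapshot names:

zfs send -w tank/data@monday | zfs recv backup/data
zfs send -w -i @monday tank/data@tuesday | zfs recv backup/data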

Properties

Get properties:

zfs get <property> <pool>/<dataset>

Set properties

zfs set <property>=<value> <pool>/<dataset>
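
For example, enabling lz4 compression on a hypothetical dataset and verifying the result:

zfs set compression=lz4 tank/data
zfs get compression tank/data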

Available Properties

PROPERTY              EDIT  INHERIT  VALUES
available             NO    NO       <size>
clones                NO    NO       <dataset>[,...]
compressratio         NO    NO       <1.00x or higher if compressed>
createtxg             NO    NO       <uint64>
creation              NO    NO       <date>
encryptionroot        NO    NO       <filesystem | volume>
filesystem_count      NO    NO       <count>
guid                  NO    NO       <uint64>
keystatus             NO    NO       none / unavailable / available
logicalreferenced     NO    NO       <size>
logicalused           NO    NO       <size>
mounted               NO    NO       yes / no
origin                NO    NO       <snapshot>
redact_snaps          NO    NO       <snapshot>[,...]
referenced            NO    NO       <size>
snapshot_count        NO    NO       <count>
type                  NO    NO       filesystem / volume / snapshot / bookmark
used                  NO    NO       <size>
usedbychildren        NO    NO       <size>
usedbydataset         NO    NO       <size>
usedbyrefreservation  NO    NO       <size>
usedbysnapshots       NO    NO       <size>
written               NO    NO       <size>
atime                 YES   YES      on / off
casesensitivity       NO    YES      sensitive / insensitive / mixed
checksum              YES   YES      on / off / fletcher2 / fletcher4 / sha256 / sha512 / skein / edonr / blake3
compression           YES   YES      on / off / lzjb / gzip / gzip-[1-9] / zle / lz4 / zstd / zstd-[1-19] / zstd-fast / zstd-fast-[1-10,20,30,40,50,60,70,80,90,100,500,1000]
copies                YES   YES      1 / 2 / 3
dedup                 YES   YES      on / off / verify / sha256[,verify] / sha512[,verify] / skein[,verify] / edonr,verify / blake3[,verify]
encryption            NO    YES      on / off / aes-128-ccm / aes-192-ccm / aes-256-ccm / aes-128-gcm / aes-192-gcm / aes-256-gcm
keyformat             NO    NO       none / raw / hex / passphrase
keylocation           YES   NO       prompt / <file URI> / <https URL> / <http URL>
mountpoint            YES   YES      <path> / legacy / none
pbkdf2iters           NO    NO       <iters>
quota                 YES   NO       <size> / none
readonly              YES   YES      on / off
relatime              YES   YES      on / off
reservation           YES   NO       <size> / none
snapdir               YES   YES      hidden / visible

Scrub

Like Btrfs, ZFS can detect and repair silent data corruption by scrubbing.

Start a scrub:

zpool scrub <pool>

Cancel a scrub:

zpool scrub -s <pool>
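
Scrub progress shows up in the pool status output; a running scrub can also be paused with -p and resumed by issuing scrub again:

zpool status -v <pool>
zpool scrub -p <pool>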

Autoscrubbing with systemd:

/etc/systemd/system/zfs-scrub@.timer

[Unit]
Description=Monthly zpool scrub on %i

[Timer]
OnCalendar=monthly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=multi-user.target

/etc/systemd/system/zfs-scrub@.service

[Unit]
Description=zpool scrub on %i

[Service]
Nice=19
IOSchedulingClass=idle
KillSignal=SIGINT
ExecStart=/usr/bin/zpool scrub %i

[Install]
WantedBy=multi-user.target
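
Enable the timer per pool, e.g. for a hypothetical pool named "tank":

systemctl enable --now zfs-scrub@tank.timer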