Initialise

Pool

zpool create pool /dev/sdd /dev/sde

The pool names mirror, raidz, draid, spare and log are reserved, as are names beginning with mirror, raidz, draid, and spare.
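
To confirm the layout and capacity of a newly created pool:

zpool status pool
zpool list pool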

Pool Size with Different Disk Sizes

When creating a ZFS pool with disks of different sizes, the total usable size of the pool depends on the RAID configuration and the size of the smallest disk in the pool. ZFS aligns the storage capacity of all disks in a vdev to the smallest disk in that vdev.

Example:

If you create a pool with the following disks:

  • /dev/sdd (2TB)
  • /dev/sde (1TB)

The total usable size will be based on the smallest disk (1TB). For example:

  • In a RAID1 (mirror) configuration, the pool size will be 1TB (mirrored across both disks).
  • In a RAID-Z1 configuration with three disks (e.g., 2TB, 1TB, 1TB), the pool size will be approximately 2TB (smallest disk size × (number of disks - 1)).
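
To check how this plays out in practice, zpool list -v breaks the capacity down per vdev and zfs list shows the space actually usable by datasets (assuming the pool is named pool as above):

zpool list -v pool
zfs list pool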

Volume

Without options, zfs create makes a filesystem dataset:

zfs create pool/volume1
zfs create pool/volume2

To create a block-device volume (a zvol) with a fixed size, use the -V option:

zfs create -V 10G pool/volume3

This creates a 10GB zvol named volume3 in the pool, exposed to the system as a block device.
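
Unlike a filesystem dataset, a zvol is exposed as a block device and can carry a foreign filesystem. On Linux the device node appears under /dev/zvol/; for example (the choice of ext4 and the mount point are arbitrary):

mkfs.ext4 /dev/zvol/pool/volume3   # format the zvol like any block device
mount /dev/zvol/pool/volume3 /mnt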

Set Quota

zfs set quota=10G pool/volume2
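
To read the limit back, use zfs get. The related refquota property limits only the space referenced by the dataset itself, excluding snapshots and descendants:

zfs get quota pool/volume2
zfs set refquota=10G pool/volume2   # limit that ignores snapshot/descendant usage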

Encryption

Create

dd if=/dev/random of=/root/key bs=32 count=1
zfs create -o encryption=aes-128-gcm -o keyformat=raw -o keylocation=file:///root/key pool/encryptedvolume
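
Note that keyformat=raw always expects exactly 32 bytes, regardless of the chosen cipher. To verify that encryption is active and the key is loaded:

zfs get encryption,keyformat,keystatus pool/encryptedvolume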

Lock

zfs unload-key pool/encryptedvolume
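
The key cannot be unloaded while the dataset is mounted, so the full lock sequence is:

zfs unmount pool/encryptedvolume
zfs unload-key pool/encryptedvolume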

Unlock

zfs load-key pool/encryptedvolume
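
Loading the key does not mount the dataset; mount it afterwards, or use zfs mount -l to do both steps at once:

zfs mount pool/encryptedvolume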

Change key

zfs change-key pool/encryptedvolume
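
Without options, change-key rewraps the data key using the existing key format. It can also switch formats, for example from the raw key file to an interactive passphrase:

zfs change-key -o keyformat=passphrase -o keylocation=prompt pool/encryptedvolume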

Encryption Algorithm Comparison

Encryption Algorithm Options:

  1. off: No encryption; data is stored in plaintext.
  2. on: Uses the default cipher, aes-256-gcm on current OpenZFS releases (older releases defaulted to aes-256-ccm).
  3. aes-128-ccm: AES in CCM mode (Counter with CBC-MAC) with a 128-bit key.
  4. aes-192-ccm: AES-CCM with a 192-bit key.
  5. aes-256-ccm: AES-CCM with a 256-bit key.
  6. aes-128-gcm: AES in GCM mode (Galois/Counter Mode) with a 128-bit key.
  7. aes-192-gcm: AES-GCM with a 192-bit key.
  8. aes-256-gcm: AES-GCM with a 256-bit key.

Comparison:

Encryption Algorithm | Key Size | Mode | Performance | Security
off                  | -        | -    | Fastest     | None
on (default)         | 256      | GCM  | Fast        | Best
aes-128-ccm          | 128      | CCM  | Medium      | Good
aes-192-ccm          | 192      | CCM  | Medium      | Better
aes-256-ccm          | 256      | CCM  | Medium      | Best
aes-128-gcm          | 128      | GCM  | Fast        | Good
aes-192-gcm          | 192      | GCM  | Fast        | Better
aes-256-gcm          | 256      | GCM  | Fast        | Best

Recommendations:

  • If you don’t need encryption, use off.
  • For most use cases, aes-256-gcm provides a good balance between security and performance.
  • If you prefer a more conservative approach, use aes-256-ccm for better security at the cost of slightly slower performance.
  • For high-performance environments, aes-128-gcm or aes-256-gcm may be suitable, as GCM is generally faster than CCM.

Keep in mind that the performance difference between CCM and GCM modes is usually negligible for most workloads. The choice of encryption algorithm and key size ultimately depends on your specific security requirements, performance needs, and organizational policies.
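
As a concrete starting point, a dataset using the recommended aes-256-gcm with an interactive passphrase might look like this (the dataset name is arbitrary):

zfs create -o encryption=aes-256-gcm -o keyformat=passphrase pool/securevolume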

Volume Configurations

Concat

zpool create pool /dev/sdd /dev/sde /dev/sdf

RAID0 (Striping)

ZFS has no stripe keyword; devices listed without a vdev type each become a top-level vdev, and data is striped across them dynamically (so this is the same command as Concat above):

zpool create pool /dev/sdd /dev/sde

RAID1 (Mirroring)

zpool create pool mirror /dev/sdd /dev/sde

RAID10 (Striped Mirrors)

zpool create pool mirror /dev/sdd /dev/sde mirror /dev/sdf /dev/sdg

RAID01 (Mirrored Stripes)

Note: ZFS does not support this layout. Vdev types cannot be nested, so a stripe of disks cannot be mirrored as a unit; only striped mirrors (RAID10, above) can be expressed. Mentioned for completeness.

RAID-Z1 (Single Parity)

To create a RAID-Z1 pool with three devices:

zpool create pool raidz /dev/sdd /dev/sde /dev/sdf

RAID-Z2 (Double Parity)

To create a RAID-Z2 pool with five devices:

zpool create pool raidz2 /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

RAID-Z3 (Triple Parity)

To create a RAID-Z3 pool with six devices:

zpool create pool raidz3 /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi

These configurations provide varying levels of redundancy and fault tolerance, with RAID-Z1 allowing one drive failure, RAID-Z2 allowing two, and RAID-Z3 allowing three.

RAID-Z with Mirrors

ZFS cannot build a RAID-Z vdev out of mirrors, because vdev types cannot be nested. What it can do is hold RAID-Z vdevs and mirror vdevs side by side in one pool, striping data across them (-f is required because mixing redundancy levels triggers a warning):

zpool create -f pool raidz /dev/sdd /dev/sde /dev/sdf mirror /dev/sdg /dev/sdh

In this configuration:

  • /dev/sdd, /dev/sde and /dev/sdf form a RAID-Z1 vdev.
  • /dev/sdg and /dev/sdh form a mirror vdev.
  • ZFS stripes data across the two vdevs.

This setup tolerates one disk failure per vdev, but mixing vdev types in one pool is generally discouraged because space efficiency and performance become uneven across the pool.

Concatenated RAID10

To create a ZFS pool with concatenated RAID10, where multiple mirrored vdevs are concatenated:

zpool create pool mirror /dev/sdd /dev/sde mirror /dev/sdf /dev/sdg mirror /dev/sdh /dev/sdi

In this setup:

  • Each pair of devices (/dev/sdd and /dev/sde, /dev/sdf and /dev/sdg, /dev/sdh and /dev/sdi) is mirrored.
  • ZFS dynamically stripes data across the mirrored vdevs (top-level vdevs are always striped rather than strictly concatenated).

This configuration provides redundancy (one disk failure per mirror) and improved read performance due to striping across mirrors.


RAID-Z1 with Concatenation

To create a ZFS pool that combines RAID-Z1 with concatenated vdevs:

zpool create pool raidz /dev/sdd /dev/sde /dev/sdf raidz /dev/sdg /dev/sdh /dev/sdi

In this setup:

  • The first three devices (/dev/sdd, /dev/sde, /dev/sdf) form a RAID-Z1 vdev.
  • The next three devices (/dev/sdg, /dev/sdh, /dev/sdi) form another RAID-Z1 vdev.
  • ZFS stripes data across these two RAID-Z1 vdevs to form the pool.

This configuration provides redundancy (one disk failure per RAID-Z1 vdev) and increased storage capacity compared to a single RAID-Z1 vdev.
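
Either layout can later be grown by adding another vdev of the same type; ZFS stripes new writes across the old and new vdevs (the device names here are hypothetical):

zpool add pool raidz /dev/sdj /dev/sdk /dev/sdl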

Examples

zpool create pool /dev/sdd
zpool add -f pool mirror /dev/sde /dev/sdf
zpool add -f pool raidz /dev/sdg /dev/sdh /dev/sdi
zpool add -f pool raidz2 /dev/sdj /dev/sdk /dev/sdl /dev/sdm

Size is /dev/sdd + min(/dev/sde, /dev/sdf) + 2 * min(/dev/sdg, /dev/sdh, /dev/sdi) + 2 * min(/dev/sdj, /dev/sdk, /dev/sdl, /dev/sdm). The -f flag is required because each zpool add here mixes redundancy levels, which ZFS warns about.

ZFS Pool with Another Pool as a Virtual Device (Nested Pools)

A pool name cannot be used directly as a vdev. To nest pools, carve a zvol out of each inner pool and build the outer pool on the zvol block devices (the 100G zvol size here is arbitrary). Note that nesting pools is generally discouraged; it adds overhead and can deadlock under memory pressure.

zpool create pool1 /dev/sdd
zpool create pool2 mirror /dev/sde /dev/sdf
zpool create pool3 raidz /dev/sdg /dev/sdh /dev/sdi
zpool create pool4 raidz2 /dev/sdj /dev/sdk /dev/sdl /dev/sdm
zfs create -V 100G pool1/vol
zfs create -V 100G pool2/vol
zfs create -V 100G pool3/vol
zfs create -V 100G pool4/vol
zpool create pool mirror /dev/zvol/pool1/vol /dev/zvol/pool2/vol /dev/zvol/pool3/vol /dev/zvol/pool4/vol

Size of pool1 is /dev/sdd
Size of pool2 is min(/dev/sde, /dev/sdf)
Size of pool3 is 2 * min(/dev/sdg, /dev/sdh, /dev/sdi)
Size of pool4 is 2 * min(/dev/sdj, /dev/sdk, /dev/sdl, /dev/sdm)
Size of pool is min(size of the zvols used as vdevs)

If each zvol is sized to fill its pool, that works out to:

min(/dev/sdd, min(/dev/sde, /dev/sdf), 2 * min(/dev/sdg, /dev/sdh, /dev/sdi), 2 * min(/dev/sdj, /dev/sdk, /dev/sdl, /dev/sdm))

Replace a device

To replace a failed device in a ZFS pool, use the following command:

zpool replace pool <old_device> <new_device>

Example:

If /dev/sdd has failed and you want to replace it with /dev/sdj:

zpool replace pool /dev/sdd /dev/sdj

After replacing the device, ZFS will begin resilvering the data onto the new device. You can monitor the progress using:

zpool status

Once the resilvering process is complete, the pool will be healthy again.
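
A fuller replacement workflow might look like this (device names as in the example above). If the new disk occupies the same physical slot as the failed one, zpool replace pool /dev/sdd with no second argument is sufficient:

zpool offline pool /dev/sdd            # optionally stop I/O to the failing disk first
zpool replace pool /dev/sdd /dev/sdj   # attach the replacement and start resilvering
zpool status -v pool                   # watch resilver progress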

Compression

zfs set compression=lz4 pool
zfs set compression=lz4 pool/volume
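
Compression only affects data written after the property is set; existing blocks are not rewritten. The achieved ratio can be read back per dataset:

zfs get compression,compressratio pool/volume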

Compression Algorithms

Compression Algorithm | Description | Compression Ratio | Compression Speed | Decompression Speed
off | No compression | 1:1 | - | -
lzjb | Simple, fast compression | 2:1 - 3:1 | Fast | Fast
gzip | Standard compression (alias for gzip-6) | 3:1 - 5:1 | Medium | Medium
gzip-1 | Low compression, high speed | 1.5:1 - 2:1 | Very Fast | Very Fast
gzip-2 | Low compression, high speed | 1.5:1 - 2:1 | Very Fast | Very Fast
gzip-3 | Low compression, medium speed | 2:1 - 3:1 | Fast | Fast
gzip-4 | Medium compression, medium speed | 2.5:1 - 3.5:1 | Medium | Medium
gzip-5 | Medium compression, medium speed | 3:1 - 4:1 | Medium | Medium
gzip-6 | Standard compression (gzip default) | 3:1 - 5:1 | Medium | Medium
gzip-7 | High compression, low speed | 4:1 - 6:1 | Slow | Slow
gzip-8 | High compression, low speed | 5:1 - 7:1 | Slow | Slow
gzip-9 | Very high compression, very low speed | 6:1 - 10:1 | Very Slow | Very Slow
zle | Zero-length encoding (compresses runs of zeros only) | 1:1 | - | -
lz4 | Fast compression, moderate ratio | 2:1 - 3:1 | Very Fast | Very Fast
zstd | Balanced compression, high ratio (alias for zstd-3) | 2:1 - 3:1 | Fast | Fast
zstd-1 | Low compression, high speed | 1.5:1 - 2:1 | Very Fast | Very Fast
zstd-2 | Low compression, high speed | 1.5:1 - 2:1 | Very Fast | Very Fast
zstd-3 | Low compression, medium speed (zstd default) | 2:1 - 3:1 | Fast | Fast
zstd-4 | Medium compression, medium speed | 2.5:1 - 3.5:1 | Medium | Medium
zstd-5 | Medium compression, medium speed | 3:1 - 4:1 | Medium | Medium
zstd-6 | Medium compression, medium speed | 3:1 - 5:1 | Medium | Medium
zstd-7 | High compression, low speed | 4:1 - 6:1 | Slow | Slow
zstd-8 | High compression, low speed | 5:1 - 7:1 | Slow | Slow
zstd-9 | Very high compression, low speed | 6:1 - 10:1 | Slow | Slow
zstd-10 | Very high compression, very low speed | 7:1 - 12:1 | Very Slow | Very Slow
zstd-11 | Very high compression, very low speed | 8:1 - 15:1 | Very Slow | Very Slow
zstd-12 | Very high compression, very low speed | 9:1 - 18:1 | Very Slow | Very Slow
zstd-13 | Very high compression, very low speed | 10:1 - 20:1 | Very Slow | Very Slow
zstd-14 | Very high compression, very low speed | 12:1 - 25:1 | Very Slow | Very Slow
zstd-15 | Very high compression, very low speed | 15:1 - 30:1 | Very Slow | Very Slow
zstd-16 | Very high compression, very low speed | 18:1 - 35:1 | Very Slow | Very Slow
zstd-17 | Very high compression, very low speed | 20:1 - 40:1 | Very Slow | Very Slow
zstd-18 | Very high compression, very low speed | 25:1 - 50:1 | Very Slow | Very Slow
zstd-19 | Very high compression, very low speed | 30:1 - 60:1 | Very Slow | Very Slow
zstd-fast | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast
zstd-fast-N (N = 1-10, 20-100 in steps of 10, 500, 1000) | Fast compression, low ratio; higher N trades ratio for speed | 1.5:1 - 2:1 | Very Fast | Very Fast

Note: The compression ratio and speed figures are approximate and depend heavily on the data being compressed.

In general:

  • lzjb and lz4 are fast and simple compression algorithms with moderate compression ratios.
  • gzip and zstd are more complex compression algorithms with higher compression ratios, but may be slower.
  • zstd-fast and zstd-fast-X are faster versions of zstd with lower compression ratios.
  • zle is not a general-purpose compression algorithm; it only compresses runs of zeros.
  • off means no compression is used.

When choosing a compression algorithm, consider the following factors:

  • Data type: Different compression algorithms may perform better on different types of data (e.g., text, images, video).
  • Compression ratio: Higher compression ratios can save more space, but may require more processing power and time.
  • Compression speed: Faster compression algorithms can reduce the time required for data transfer and processing.
  • Decompression speed: Faster decompression algorithms can improve the performance of applications that need to access compressed data.
  • System resources: More complex compression algorithms may require more CPU, memory, and disk resources.
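
One way to weigh these factors is to create scratch datasets with different algorithms, copy a representative sample of your data into each, and compare the resulting compressratio (the dataset names and sample path below are hypothetical):

zfs create -o compression=lz4 pool/test_lz4
zfs create -o compression=zstd pool/test_zstd
cp -a /data/sample/. /pool/test_lz4/    # default mountpoints are /<pool>/<dataset>
cp -a /data/sample/. /pool/test_zstd/
zfs get compressratio pool/test_lz4 pool/test_zstd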

Import ZFS pool

zpool import -f pool

The -f flag forces the import when the pool appears to be in use by (was last imported on) another system.

Encrypted pool

zpool import -f pool
zfs load-key pool
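
Alternatively, zpool import -l asks for the encryption keys as part of the import, combining both steps:

zpool import -l -f pool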

Snapshots

Create

zfs snapshot pool/volume@snapshot1
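
To snapshot a dataset and all of its descendants atomically, use -r (the snapshot name is arbitrary):

zfs snapshot -r pool@nightly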

List

zfs list

To display specific datasets:

zfs list pool/volume1

You can also use the -t option to filter the output by dataset type (e.g., filesystem, volume, snapshot):

zfs list -t snapshot

This command lists only snapshots.

Destroy

zfs destroy pool/volume@snapshot1
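
Several snapshots of the same dataset can be destroyed at once with a percent range; -n -v previews what would be removed without actually deleting anything (snapshot names are examples):

zfs destroy -nv pool/volume@snapshot1%snapshot3   # dry run
zfs destroy pool/volume@snapshot1%snapshot3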

Rollback

zfs rollback pool/volume@snapshot1
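
A rollback can only target the most recent snapshot. To roll back further, -r destroys the snapshots that were taken after the target:

zfs rollback -r pool/volume@snapshot1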

Clone

zfs clone pool/volume@snapshot1 pool/volume_clone

Promote

zfs promote pool/volume_clone
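
Promotion reverses the dependency between a clone and its origin, so the clone no longer depends on the origin snapshot. A common use is replacing a filesystem with a modified clone of itself (names here are illustrative):

zfs snapshot pool/volume@base
zfs clone pool/volume@base pool/volume_clone
zfs promote pool/volume_clone            # the clone becomes the origin
zfs rename pool/volume pool/volume_old
zfs rename pool/volume_clone pool/volume
zfs destroy pool/volume_old              # optional, once the result is verified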

Rename

zfs rename pool/volume1 pool/volume2

Backup and Restore

The zfs send and zfs receive commands enable efficient data transfer between ZFS datasets, either locally within the same system or over a network between different systems. They are commonly used for backup, replication, and migration.

send

The zfs send command generates a stream representation of a ZFS snapshot. This stream can be written to a file, piped to another command, or sent over the network to another system, where zfs receive applies it. A snapshot is a read-only copy of the dataset at a particular point in time, and it is this snapshot that you are effectively sending with zfs send.

Here’s a basic syntax of zfs send:

zfs send [-v] [-i snapshot_name] snapshot_name

  • -v increases verbosity.
  • -i specifies an incremental send, starting from a given snapshot. This means instead of sending the entire dataset, only the differences since the specified snapshot are sent.

Example:

zfs send pool/volume1@snap1 > /tmp/volume1-snap1.zfs

This command sends the @snap1 snapshot of the pool/volume1 dataset to a file named volume1-snap1.zfs.
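
Because the stream is an ordinary byte stream, it can be piped through other tools as well, for example to compress the backup file (gzip here is an arbitrary choice):

zfs send pool/volume1@snap1 | gzip > /tmp/volume1-snap1.zfs.gz
gunzip -c /tmp/volume1-snap1.zfs.gz | zfs receive -F pool/restore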

receive

The zfs receive command is used to receive a stream generated by zfs send and apply it to a ZFS dataset. This command can be used to restore a dataset from a previous snapshot, replicate datasets between systems, or clone a dataset on the same or different system.

The basic syntax of zfs receive is:

zfs receive [-vF] filesystem|volume

  • -v increases verbosity.
  • -F forces a rollback of the target dataset to its most recent snapshot before applying the stream, discarding any local changes made since that snapshot. Without it, the receive fails if the target dataset has been modified.

Example:

zfs receive -F pool/restore < /tmp/volume1-snap1.zfs

This command applies the snapshot stored in volume1-snap1.zfs to the pool/restore dataset, overwriting any existing data.

Sending and Receiving Over a Network

You can also use these commands in combination with network tools like ssh to transfer data between systems. For example:

zfs send pool/volume1@snap1 | ssh user@remotehost "zfs receive -F pool/restore"

This command sends the snapshot @snap1 of pool/volume1 over the network to remotehost, where it's received and applied to the pool/restore dataset on that host (remotehost is only the ssh target, not part of the dataset name).

Incremental Sends and Receives

zfs send and zfs receive also support incremental transfers, where only the changes since the last send are sent or received. This is particularly useful for periodic backups, reducing the amount of data that needs to be transferred.

zfs send -i pool/volume1@snap1 pool/volume1@snap2 | ssh user@remotehost "zfs receive -F pool/restore"

This command transfers only the changes between @snap1 and @snap2 of pool/volume1 to the remote host, where they are applied to pool/restore. The target dataset must already contain @snap1 for the incremental stream to apply.
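
A simple periodic backup can rotate snapshots around this pattern: take a new snapshot, send the delta, then drop the old base on the source once the transfer has succeeded (names and host are from the example above):

zfs snapshot pool/volume1@snap2                      # new point-in-time copy
zfs send -i pool/volume1@snap1 pool/volume1@snap2 | ssh user@remotehost "zfs receive -F pool/restore"
zfs destroy pool/volume1@snap1                       # @snap2 becomes the next incremental base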

These commands are powerful tools for data management in ZFS environments, allowing for efficient incremental backups and replication between systems.