Initialise
Pool
zpool create pool /dev/sdd /dev/sde
The pool names mirror, raidz, draid, spare and log are reserved, as are names beginning with mirror, raidz, draid, and spare.
Pool Size with Different Disk Sizes
When creating a ZFS pool with disks of different sizes, the total usable size of the pool depends on the RAID configuration and the size of the smallest disk in the pool. ZFS aligns the storage capacity of all disks in a vdev to the smallest disk in that vdev.
Example:
If you create a pool with the following disks:
- /dev/sdd (2TB)
- /dev/sde (1TB)
The total usable size will be based on the smallest disk (1TB). For example:
- In a RAID1 (mirror) configuration, the pool size will be 1TB (mirrored across both disks).
- In a RAID-Z1 configuration with three disks (e.g., 2TB, 1TB, 1TB), the pool size will be approximately 2TB (smallest disk size × (number of disks - 1)).
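One way to verify this behaviour without spare hardware is a throwaway pool built on sparse files (the file names and sizes below are made up for the demonstration); zpool list should report roughly 1T for the mirror:
truncate -s 2T /tmp/disk1.img
truncate -s 1T /tmp/disk2.img
zpool create testpool mirror /tmp/disk1.img /tmp/disk2.img
zpool list testpool
zpool destroy testpool
rm /tmp/disk1.img /tmp/disk2.img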
Volume
zfs create pool/volume1
zfs create pool/volume2
The commands above create regular filesystem datasets. To create a true block-device volume (zvol) with a fixed size, use the -V option:
zfs create -V 10G pool/volume3
This creates a ZFS volume named volume3 with a size of 10GB in pool, exposed as a block device (under /dev/zvol/pool/volume3 on Linux) rather than as a mounted filesystem.
Set Quota
zfs set quota=10G pool/volume2
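To confirm the quota and see how much of it is in use, query the relevant properties:
zfs get quota,used,available pool/volume2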
Encryption
Create
dd if=/dev/random of=/root/key bs=32 count=1
zfs create -o encryption=aes-128-gcm -o keyformat=raw -o keylocation=file:///root/key pool/encryptedvolume
Lock
zfs unload-key pool/encryptedvolume
Unlock
zfs load-key pool/encryptedvolume
Change key
zfs change-key pool/encryptedvolume
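Run without options, zfs change-key re-wraps the encryption key using the dataset's existing keyformat and keylocation. To rotate to a different key file, pass new key properties; a sketch assuming a freshly generated /root/newkey:
dd if=/dev/random of=/root/newkey bs=32 count=1
zfs change-key -o keylocation=file:///root/newkey -o keyformat=raw pool/encryptedvolume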
Encryption Algorithm Comparison
Encryption Algorithm Options:
- off: No encryption is used. Data is stored in plaintext.
- on: Encryption is enabled with the default algorithm, aes-256-gcm on current OpenZFS releases (older releases defaulted to aes-256-ccm).
- aes-128-ccm: AES-128-CCM (Counter with CBC-MAC) with a 128-bit key size.
- aes-192-ccm: AES-192-CCM (Counter with CBC-MAC) with a 192-bit key size.
- aes-256-ccm: AES-256-CCM (Counter with CBC-MAC) with a 256-bit key size.
- aes-128-gcm: AES-128-GCM (Galois/Counter Mode) with a 128-bit key size.
- aes-192-gcm: AES-192-GCM (Galois/Counter Mode) with a 192-bit key size.
- aes-256-gcm: AES-256-GCM (Galois/Counter Mode) with a 256-bit key size.
Comparison:
Encryption Algorithm | Key Size (bits) | Mode | Performance | Security |
---|---|---|---|---|
off | - | - | Fastest | None |
on (default) | 256 | GCM | Fast | Best |
aes-128-ccm | 128 | CCM | Medium | Good |
aes-192-ccm | 192 | CCM | Medium | Better |
aes-256-ccm | 256 | CCM | Medium | Best |
aes-128-gcm | 128 | GCM | Fast | Good |
aes-192-gcm | 192 | GCM | Fast | Better |
aes-256-gcm | 256 | GCM | Fast | Best |
Recommendations:
- If you don’t need encryption, use off.
- For most use cases, aes-256-gcm provides a good balance between security and performance.
- If you prefer CCM mode, aes-256-ccm offers the same 256-bit key strength at the cost of somewhat slower performance.
- For high-performance environments, aes-128-gcm or aes-256-gcm may be suitable, as GCM is generally faster than CCM.
Keep in mind that the performance difference between CCM and GCM modes is usually negligible for most workloads. The choice of encryption algorithm and key size ultimately depends on your specific security requirements, performance needs, and organizational policies.
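For example, to create a passphrase-protected dataset with the recommended cipher (the dataset name pool/secure is illustrative; keyformat=passphrase prompts for the passphrase interactively):
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase pool/secure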
Volume Configurations
Concat
zpool create pool /dev/sdd /dev/sde /dev/sdf
(ZFS dynamically stripes across top-level vdevs, so this behaves like RAID0 rather than a strict concatenation.)
RAID0 (Striping)
zpool create pool /dev/sdd /dev/sde
(There is no stripe keyword; listing plain devices creates a striped pool by default.)
RAID1 (Mirroring)
zpool create pool mirror /dev/sdd /dev/sde
RAID10 (Striped Mirrors)
zpool create pool mirror /dev/sdd /dev/sde mirror /dev/sdf /dev/sdg
RAID01 (Mirrored Stripes)
Note: ZFS does not support mirrored stripes (RAID01). The vdev grammar only allows striping across redundant vdevs (as in RAID10 above), never mirroring across stripes, so there is no valid zpool create invocation for this layout. Mentioned for completeness only.
RAID-Z1 (Single Parity)
To create a RAID-Z1 pool with three devices:
zpool create pool raidz /dev/sdd /dev/sde /dev/sdf
RAID-Z2 (Double Parity)
To create a RAID-Z2 pool with five devices:
zpool create pool raidz2 /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
RAID-Z3 (Triple Parity)
To create a RAID-Z3 pool with six devices:
zpool create pool raidz3 /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
These configurations provide varying levels of redundancy and fault tolerance, with RAID-Z1 allowing one drive failure, RAID-Z2 allowing two, and RAID-Z3 allowing three.
RAID-Z with Mirrors
To combine RAID-Z with mirrored vdevs, you might be tempted to try the following command:
zpool create pool raidz mirror /dev/sdd /dev/sde mirror /dev/sdf /dev/sdg mirror /dev/sdh /dev/sdi
However, ZFS does not allow one redundant vdev type to be nested inside another, so this is rejected with an invalid vdev specification error. To combine mirroring with striping, use the striped-mirror (RAID10) layout shown above; to combine parity redundancy with striping, put several RAID-Z vdevs in one pool, as shown in the following sections.
Concatenated RAID10
To create a ZFS pool with concatenated RAID10, where multiple mirrored vdevs are concatenated:
zpool create pool mirror /dev/sdd /dev/sde mirror /dev/sdf /dev/sdg mirror /dev/sdh /dev/sdi
In this setup:
- Each pair of devices (/dev/sdd and /dev/sde, /dev/sdf and /dev/sdg, /dev/sdh and /dev/sdi) is mirrored.
- The mirrored vdevs are combined to form the pool (ZFS stripes new writes across them rather than strictly concatenating).
This configuration provides redundancy (one disk failure per mirror) and improved read performance due to striping across mirrors.
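Because vdevs can be added to a pool at any time, this layout grows by whole mirrors; a hypothetical expansion with two more disks:
zpool add pool mirror /dev/sdj /dev/sdk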
RAID-Z1 with Concatenation
To create a ZFS pool that combines RAID-Z1 with concatenated vdevs:
zpool create pool raidz /dev/sdd /dev/sde /dev/sdf raidz /dev/sdg /dev/sdh /dev/sdi
In this setup:
- The first three devices (/dev/sdd, /dev/sde, /dev/sdf) form a RAID-Z1 vdev.
- The next three devices (/dev/sdg, /dev/sdh, /dev/sdi) form another RAID-Z1 vdev.
- These RAID-Z1 vdevs are combined to form the pool, with ZFS striping data across them.
This configuration provides redundancy (one disk failure per RAID-Z1 vdev) and increased storage capacity compared to a single RAID-Z1 vdev.
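After creating a multi-vdev pool like this, the per-vdev layout and capacities can be inspected with:
zpool list -v pool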
Examples
zpool create pool /dev/sdd
zpool add pool mirror /dev/sde /dev/sdf
zpool add pool raidz /dev/sdg /dev/sdh /dev/sdi
zpool add pool raidz2 /dev/sdj /dev/sdk /dev/sdl /dev/sdm
Size is /dev/sdd + min(/dev/sde, /dev/sdf) + 2 * min(/dev/sdg, /dev/sdh, /dev/sdi) + 2 * min(/dev/sdj, /dev/sdk, /dev/sdl, /dev/sdm)
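As a worked example with hypothetical sizes (sdd = 1TB; sde and sdf = 2TB each; sdg–sdi = 3TB each; sdj–sdm = 4TB each): 1 + 2 + (2 × 3) + (2 × 4) = 17TB of usable capacity, before metadata and padding overhead.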
ZFS Pool with Another Pool as a Virtual Device (Nested Pools)
zpool create pool1 /dev/sdd
zpool create pool2 mirror /dev/sde /dev/sdf
zpool create pool3 raidz /dev/sdg /dev/sdh /dev/sdi
zpool create pool4 raidz2 /dev/sdj /dev/sdk /dev/sdl /dev/sdm
A pool name cannot be used directly as a vdev, so each inner pool is exposed as a zvol and the zvol block devices are mirrored (the 100G size is illustrative; nesting pools like this is generally discouraged outside of experiments):
zfs create -V 100G pool1/vol
zfs create -V 100G pool2/vol
zfs create -V 100G pool3/vol
zfs create -V 100G pool4/vol
zpool create pool mirror /dev/zvol/pool1/vol /dev/zvol/pool2/vol /dev/zvol/pool3/vol /dev/zvol/pool4/vol
- Size of pool1 is /dev/sdd
- Size of pool2 is min(/dev/sde, /dev/sdf)
- Size of pool3 is 2 × min(/dev/sdg, /dev/sdh, /dev/sdi)
- Size of pool4 is 2 × min(/dev/sdj, /dev/sdk, /dev/sdl, /dev/sdm)
- Size of pool is min(pool1, pool2, pool3, pool4), i.e. min(/dev/sdd, min(/dev/sde, /dev/sdf), 2 × min(/dev/sdg, /dev/sdh, /dev/sdi), 2 × min(/dev/sdj, /dev/sdk, /dev/sdl, /dev/sdm)); with the zvol approach each contribution is additionally capped by the size of the zvol carved from that pool.
Replace a device
To replace a failed device in a ZFS pool, use the following command:
zpool replace pool _old_device_ _new_device_
Example: if /dev/sdd has failed and you want to replace it with /dev/sdj:
zpool replace pool /dev/sdd /dev/sdj
After replacing the device, ZFS will begin resilvering the data onto the new device. You can monitor the progress using:
zpool status
Once the resilvering process is complete, the pool will be healthy again.
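If the failing disk is still responsive, it can be taken offline before it is physically removed, using the device names from the example above:
zpool offline pool /dev/sdd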
Create ZFS volume
zfs create pool/volume1
zfs create pool/volume2
Compression
zfs set compression=lz4 pool
zfs set compression=lz4 pool/volume
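Compression only applies to blocks written after the property is set; existing data stays as it was. To see how effective compression is on a dataset, check the compressratio property:
zfs get compressratio pool/volume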
Compression Algorithms
Compression Algorithm | Description | Compression Ratio | Compression Speed | Decompression Speed |
---|---|---|---|---|
off | No compression | 1:1 | - | - |
lzjb | Simple, fast compression | 2:1 - 3:1 | Fast | Fast |
gzip | Standard compression (level 6) | 3:1 - 5:1 | Medium | Medium |
gzip-1 | Low compression, high speed | 1.5:1 - 2:1 | Very Fast | Very Fast |
gzip-2 | Low compression, high speed | 1.5:1 - 2:1 | Very Fast | Very Fast |
gzip-3 | Low compression, medium speed | 2:1 - 3:1 | Fast | Fast |
gzip-4 | Medium compression, medium speed | 2.5:1 - 3.5:1 | Medium | Medium |
gzip-5 | Medium compression, medium speed | 3:1 - 4:1 | Medium | Medium |
gzip-6 | Standard compression (default) | 3:1 - 5:1 | Medium | Medium |
gzip-7 | High compression, low speed | 4:1 - 6:1 | Slow | Slow |
gzip-8 | High compression, low speed | 5:1 - 7:1 | Slow | Slow |
gzip-9 | Very high compression, very low speed | 6:1 - 10:1 | Very Slow | Very Slow |
zle | Zero-length encoding (compresses runs of zeroes only) | 1:1 | - | - |
lz4 | Fast compression, moderate ratio | 2:1 - 3:1 | Very Fast | Very Fast |
zstd | Balanced compression, high ratio (alias for zstd-3) | 3:1 - 5:1 | Medium | Medium |
zstd-1 | Low compression, high speed | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-2 | Low compression, high speed | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-3 | Default zstd level, medium speed | 2:1 - 3:1 | Fast | Fast |
zstd-4 | Medium compression, medium speed | 2.5:1 - 3.5:1 | Medium | Medium |
zstd-5 | Medium compression, medium speed | 3:1 - 4:1 | Medium | Medium |
zstd-6 | Higher compression, medium speed | 3:1 - 5:1 | Medium | Medium |
zstd-7 | High compression, low speed | 4:1 - 6:1 | Slow | Slow |
zstd-8 | High compression, low speed | 5:1 - 7:1 | Slow | Slow |
zstd-9 | Very high compression, low speed | 6:1 - 10:1 | Slow | Slow |
zstd-10 | Very high compression, very low speed | 7:1 - 12:1 | Very Slow | Very Slow |
zstd-11 | Very high compression, very low speed | 8:1 - 15:1 | Very Slow | Very Slow |
zstd-12 | Very high compression, very low speed | 9:1 - 18:1 | Very Slow | Very Slow |
zstd-13 | Very high compression, very low speed | 10:1 - 20:1 | Very Slow | Very Slow |
zstd-14 | Very high compression, very low speed | 12:1 - 25:1 | Very Slow | Very Slow |
zstd-15 | Very high compression, very low speed | 15:1 - 30:1 | Very Slow | Very Slow |
zstd-16 | Very high compression, very low speed | 18:1 - 35:1 | Very Slow | Very Slow |
zstd-17 | Very high compression, very low speed | 20:1 - 40:1 | Very Slow | Very Slow |
zstd-18 | Very high compression, very low speed | 25:1 - 50:1 | Very Slow | Very Slow |
zstd-19 | Very high compression, very low speed | 30:1 - 60:1 | Very Slow | Very Slow |
zstd-fast | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-1 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-2 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-3 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-4 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-5 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-6 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-7 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-8 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-9 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-10 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-20 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-30 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-40 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-50 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-60 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-70 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-80 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-90 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-100 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-500 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
zstd-fast-1000 | Fast compression, low ratio | 1.5:1 - 2:1 | Very Fast | Very Fast |
Note: the compression ratios and speeds above are rough approximations; actual results vary widely with the data being compressed.
In general:
- lzjb and lz4 are fast and simple compression algorithms with moderate compression ratios.
- gzip and zstd are more complex compression algorithms with higher compression ratios, but may be slower.
- zstd-fast and zstd-fast-X are faster versions of zstd with lower compression ratios.
- zle only compresses runs of zeroes; it is nearly free but does nothing for other data.
- off means no compression is used.
When choosing a compression algorithm, consider the following factors:
- Data type: Different compression algorithms may perform better on different types of data (e.g., text, images, video).
- Compression ratio: Higher compression ratios can save more space, but may require more processing power and time.
- Compression speed: Faster compression algorithms can reduce the time required for data transfer and processing.
- Decompression speed: Faster decompression algorithms can improve the performance of applications that need to access compressed data.
- System resources: More complex compression algorithms may require more CPU, memory, and disk resources.
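One practical way to weigh these factors is to measure on a sample of your own data. A minimal sketch, assuming a pool named pool and a representative sample directory /srv/sample (both hypothetical); each test dataset mounts at /pool/ctest-<algo> by default:
for algo in lz4 zstd gzip; do
  zfs create -o compression=$algo pool/ctest-$algo
  cp -a /srv/sample/. /pool/ctest-$algo/
  sync
  zfs get -H -o name,value compressratio pool/ctest-$algo
done
Destroy the ctest datasets with zfs destroy once you have picked an algorithm.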
Import ZFS pool
zpool import -f pool
Encrypted pool
zpool import -f pool
zfs load-key pool
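The two steps can also be combined: the -l flag makes zpool import load encryption keys for its datasets as part of the import.
zpool import -l pool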
Snapshots
Create
zfs snapshot pool/volume@snapshot1
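To snapshot a dataset together with all of its descendants in one atomic step, add the -r flag:
zfs snapshot -r pool@snapshot1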
List
zfs list
To display specific datasets:
zfs list pool/volume1
You can also use the -t option to filter the output by dataset type (e.g., filesystem, volume, snapshot):
zfs list -t snapshot
This command lists only snapshots.
Destroy
zfs destroy pool/volume@snapshot1
Rollback
zfs rollback pool/volume@snapshot1
Clone
zfs clone pool/volume@snapshot1 pool/volume_clone
Promote
zfs promote pool/volume_clone
Rename
zfs rename pool/volume1 pool/volume2
Backup and Restore
The zfs send and zfs receive commands enable efficient data transfer between ZFS datasets, either locally within the same system or over a network between different systems. They are commonly used for data backup, replication, and migration.
send
The zfs send command generates a stream representation of a ZFS snapshot. The stream can be written to a file, piped to another command, or sent over the network to another system to be applied there by zfs receive. A snapshot is a read-only copy of a dataset at a particular point in time, and it is this snapshot that zfs send actually transmits.
Here is the basic syntax of zfs send:
zfs send [-v] [-i snapshot_name] snapshot_name
- -v increases verbosity.
- -i specifies an incremental send starting from the given snapshot: instead of the entire dataset, only the differences since that snapshot are sent.
Example:
zfs send pool/volume1@snap1 > /tmp/volume1-snap1.zfs
This command writes the @snap1 snapshot of the pool/volume1 dataset to a file named volume1-snap1.zfs.
receive
The zfs receive command receives a stream generated by zfs send and applies it to a ZFS dataset. It can be used to restore a dataset from a previous snapshot, replicate datasets between systems, or clone a dataset on the same or a different system.
The basic syntax of zfs receive is:
zfs receive [-vF] filesystem|volume
- -v increases verbosity.
- -F forces the target dataset to be rolled back to its most recent snapshot before the new stream is applied, discarding any local changes made since that snapshot. Without it, the receive fails if the target dataset has been modified.
Example:
zfs receive -F pool/restore < /tmp/volume1-snap1.zfs
This command applies the snapshot stored in volume1-snap1.zfs to the pool/restore dataset, overwriting any existing data.
Sending and Receiving Over a Network
You can also combine these commands with network tools like ssh to transfer data between systems. For example:
zfs send pool/volume1@snap1 | ssh user@remotehost "zfs receive -F remotehost/pool/restore"
This command sends the @snap1 snapshot of pool/volume1 over the network to remotehost, where it is received and applied to the remotehost/pool/restore dataset.
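On slow links the stream can be compressed in flight; any stream compressor works the same way, for example gzip on both ends:
zfs send pool/volume1@snap1 | gzip | ssh user@remotehost "gunzip | zfs receive -F remotehost/pool/restore"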
Incremental Sends and Receives
zfs send and zfs receive also support incremental transfers, where only the changes since the last send are transmitted. This is particularly useful for periodic backups, as it reduces the amount of data that needs to be transferred.
zfs send -i pool/volume1@snap1 pool/volume1@snap2 | ssh user@remotehost "zfs receive -F remotehost/pool/restore"
This command transfers only the changes between @snap1 and @snap2 of the dataset pool/volume1 to the remote host, where they are applied to remotehost/pool/restore.
These commands are powerful tools for data management in ZFS environments, enabling efficient incremental backups and replication between systems.
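Putting the pieces together, a minimal periodic-backup sketch (dataset, host, and state-file names are all hypothetical; a real script would also need error handling and old-snapshot pruning):
#!/bin/sh
# Take a timestamped snapshot, send the delta since the last run,
# then record the new snapshot as the next baseline.
DATASET=pool/volume1
REMOTE=user@remotehost
TARGET=remotehost/pool/restore
STATE=/var/lib/zfs-backup/last
PREV=$(cat "$STATE" 2>/dev/null)
NOW=$(date +%Y%m%d-%H%M%S)
zfs snapshot "$DATASET@$NOW"
if [ -n "$PREV" ]; then
  # incremental: only changes since the previous snapshot
  zfs send -i "$DATASET@$PREV" "$DATASET@$NOW" | ssh "$REMOTE" "zfs receive -F $TARGET"
else
  # first run: full send
  zfs send "$DATASET@$NOW" | ssh "$REMOTE" "zfs receive -F $TARGET"
fi
mkdir -p "$(dirname "$STATE")"
echo "$NOW" > "$STATE"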