quickreference:zfs
Last modified: 2025/03/21 23:39 by rodolico
==== Create a zpool ====
Now that we have ZFS running, we'll create a zpool, the basic container for all of our stuff. In this case, I want to use raidz2 for redundancy (two drives' worth of capacity are used for parity). Since I don't know the correct names for everything, I'll first find the drives on the system.
+ | |||
+ | **Warning**: | ||
+ | |||
+ | **Note**: I've shown three ways to find the drives on the system. The first three commands give redundant information. If you have smartctl on your system, that is probably the easiest, since it has a scan function built in, but geom is FreeBSD' | ||
<code bash>
# find the drives on the system. Choose ONE of the following
geom disk list | grep Geom | rev | cut -d' ' -f1 | rev | sort
egrep '
smartctl --scan | cut -d' ' -f1
# we want RAID-6, name it storage, and use /dev/da0 through 7
zpool create -f storage raidz2 /dev/da0 /dev/da1 /dev/da2 /dev/da3 \
    /dev/da4 /dev/da5 /dev/da6 /dev/da7
</code>
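Typing out eight device paths by hand is error-prone. In bash (not plain /bin/sh), brace expansion can generate the list for you; this is just a convenience sketch using the /dev/da0 through /dev/da7 names assumed above.

<code bash>
# brace expansion generates the eight device paths in order
echo /dev/da{0..7}
# prints: /dev/da0 /dev/da1 /dev/da2 /dev/da3 /dev/da4 /dev/da5 /dev/da6 /dev/da7
</code>

The same expansion can be used directly on the zpool create command line when run from bash.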
+ | |||
+ | You can add extra functionality by creating //intent//, //dedup// and //cache// vdev's at the same time. Following example shows adding a vdev (mirror) to a pool. | ||
<code bash>
zpool create -f -m /storage storage raidz2 da4 da5 da6 \
    da7 da8 da9 dedup mirror da2 da3
</code>
+ | |||
+ | This will create a pool named storage, mounted (-m) at /storage, forced to ignore most drive errors. The pool will be a raidz2 (aka RAID 6) with 6 drives (4-9), and have a dedup vdev consisting of a mirror from da2 and 3. | ||
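Once the pool exists, it is worth sanity-checking the layout before putting data on it. These are standard zpool subcommands; `storage` is the pool name from the example above.

<code bash>
# show the vdev tree: the raidz2 group and the dedup mirror should both appear
zpool status storage
# one-line summary of size, allocation, and health
zpool list storage
</code>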
+ | |||
+ | ==== Set dataset defaults ==== | ||
+ | |||
+ | Your new dataset may not have the default values you want. It is simple enough to set defaults for all child datasets at this point, then any datasets/ | ||
+ | |||
+ | This is the way I have most set up. Modify it for your own use. Pay particular attention to the dedup=on and compress=gzip-9. | ||
* dedup will use more memory and CPU, but will reduce the amount of disk space required if your data contains duplicates (think 10 Devuan Daedalus installations taking up the same amount of space as 1 for the common blocks)
* compress is fine, but gzip-9 will tear up your processor. I use gzip-9 for backup servers, but for servers actively serving all the time (like iSCSI or NFS), I use zstd or even lz4 to decrease server load. See the excellent article at https://
* volmode=full should really be the default (grumble). It exposes the zvol as a full block device, which is what you want for iSCSI exports.
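To get a feel for the compression-level trade-off, you can compare gzip levels on some repetitive data. This is a rough userland illustration with the ordinary gzip tool, not a ZFS benchmark, and the sample text is made up; but the shape of the result carries over: higher levels buy ratio with CPU time.

<code bash>
# compress 1 MB of repetitive text at the fastest and best gzip levels;
# level 9 produces output no larger than level 1, at a higher CPU cost
yes "sample log line for compression testing" | head -c 1000000 | gzip -1 | wc -c
yes "sample log line for compression testing" | head -c 1000000 | gzip -9 | wc -c
</code>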
+ | |||
+ | <code bash> | ||
zfs set atime=off dedup=on compress=gzip-9 volmode=full storage
</code>
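Properties set on the pool's root dataset are inherited by children, and any child can override them locally. A sketch, assuming the storage pool from above; the storage/backups dataset name is hypothetical.

<code bash>
# a new child dataset inherits atime, dedup, compress, and volmode from storage
zfs create storage/backups
# override one property locally, e.g. cheaper compression for a busy dataset
zfs set compression=lz4 storage/backups
# the SOURCE column shows which values are local and which are inherited
zfs get -r compression storage
</code>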
quickreference/zfs.1707023069.txt.gz · Last modified: 2024/02/03 23:04 by rodolico