This chapter walks through the basic installation of the ZFS modules in your Linux distribution of choice. Ubuntu allows for a quick install and setup, so we are going to use it as an example.
System Packages
Before going any further, you need to install some packages from the standard distribution repositories.
Virtual Machine
Before buying the hardware and running tests on bare metal, you may want to install and test ZFS within a virtual machine. It is a good idea and I encourage you to do so. In a very simple and efficient way, you can get used to administering ZFS pools, and you can also check which distribution works best for you. There are no specific requirements for the virtualization engine. You can use VirtualBox, VMware, KVM, Xen, or any other VM you feel comfortable with. Keep in mind that the tool you use should be able to provide your guest machine with virtual disks to play with. While you can create a pool on files created within the VM, I don't recommend that way of testing it.
Note
Bear in mind that virtual machines are not suitable for performance testing. Too many factors stand in the way of reliable results.
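As an illustration, if you happen to use VirtualBox, you can attach a few small virtual disks to an existing guest from the host's command line. This is only a sketch: the VM name ubuntuzfs, the controller name "SATA", and the disk file names are assumptions, and other hypervisors (for example, qemu-img plus virsh for KVM) have equivalent commands.

```shell
# Create three small (1 GB) virtual disks and attach them to a guest
# named "ubuntuzfs". VM name, controller name, and file names are
# examples -- adjust them to your own setup.
for i in 1 2 3; do
    VBoxManage createmedium disk --filename "zfsdisk${i}.vdi" --size 1024
    VBoxManage storageattach ubuntuzfs --storagectl "SATA" \
        --port $((i + 1)) --device 0 --type hdd --medium "zfsdisk${i}.vdi"
done
```

Inside the guest, the new disks will then show up as additional block devices (for example, /dev/sdb and onward) that you can hand to ZFS.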
Ubuntu Server
If, for some reason, you are running Ubuntu prior to 15.10, you will need to add a special PPA repository:
trochej@ubuntuzfs:~$ sudo add-apt-repository ppa:zfs-native/stable
[sudo] password for trochej:
The native ZFS filesystem for Linux. Install the ubuntu-zfs package.
Please join this Launchpad user group if you want to show support for ZoL:
https://launchpad.net/~zfs-native-users
Send feedback or requests for help to this email list:
http://list.zfsonlinux.org/mailman/listinfo/zfs-discuss
Report bugs at:
https://github.com/zfsonlinux/zfs/issues (for the driver itself)
https://github.com/zfsonlinux/pkg-zfs/issues (for the packaging)
The ZoL project home page is:
http://zfsonlinux.org/
More info: https://launchpad.net/~zfs-native/+archive/ubuntu/stable
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmp4_wvpmaf/secring.gpg' created
gpg: keyring `/tmp/tmp4_wvpmaf/pubring.gpg' created
gpg: requesting key F6B0FC61 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmp4_wvpmaf/trustdb.gpg: trustdb created
gpg: key F6B0FC61: public key "Launchpad PPA for Native ZFS for Linux" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK
With Ubuntu 15.10 and later, ZFS support packages are already included in the standard repository. You only need to install the following package:
trochej@ubuntuzfs:~$ sudo apt-get install zfsutils-linux
This will compile the appropriate kernel modules for you. You can later confirm that they were built and in fact loaded by running lsmod:
trochej@ubuntuzfs:~$ sudo lsmod | grep zfs
zfs 2252800 0
zunicode 331776 1 zfs
zcommon 53248 1 zfs
znvpair 90112 2 zfs,zcommon
spl 102400 3 zfs,zcommon,znvpair
zavl 16384 1 zfs
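If you want more detail than lsmod provides, you can also inspect the module itself. A small sketch:

```shell
# Show version and metadata for the installed zfs kernel module.
modinfo zfs | head -n 5
```

The version line lets you confirm which ZFS on Linux release your distribution actually shipped.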
You should now be able to create a pool:
trochej@ubuntuzfs:~$ sudo zpool create -f datapool \
    mirror /dev/sdb /dev/sdc \
    mirror /dev/sdd /dev/sde \
    mirror /dev/sdf /dev/sdg
trochej@ubuntuzfs:~$ sudo zpool status
pool: datapool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sdb ONLINE 0 0 0
sdc ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
mirror-2 ONLINE 0 0 0
sdf ONLINE 0 0 0
sdg ONLINE 0 0 0
errors: No known data errors
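To confirm that the pool is usable end to end, you can create a dataset on it and write a file. This is a minimal sketch; the dataset name datapool/test is only an example:

```shell
# Create a dataset on the new pool and verify it is mounted and writable.
# The dataset name "datapool/test" is an example.
sudo zfs create datapool/test
echo "hello zfs" | sudo tee /datapool/test/hello.txt
sudo zfs list datapool/test    # shows usage and the mountpoint /datapool/test
```

By default, the dataset is mounted automatically under the pool's mountpoint, so no /etc/fstab entry is needed.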
There is another package you will want to install:
trochej@ubuntuzfs:~$ sudo apt-get install zfs-zed
zed is the ZFS Event Daemon, a daemon service that listens to any ZFS-generated kernel event. It's explained in more detail in a later section.
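On systemd-based releases, you will usually want the daemon enabled at boot. A sketch; the unit name is an assumption and may be zfs-zed or simply zed, depending on the distribution:

```shell
# Enable and start the ZFS Event Daemon. The unit name may be
# "zfs-zed" or "zed" depending on the distribution.
sudo systemctl enable zfs-zed
sudo systemctl start zfs-zed
sudo systemctl status zfs-zed   # confirm it is active (running)
```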
CentOS
You will need a system information tool that is not installed by default for monitoring, troubleshooting, and testing your setup:
[root@localhost ~]# yum install sysstat
Unlike Ubuntu, CentOS doesn't have ZFS packages in its default repositories, in either its 6.7 or 7 release. Thus you need to follow the directions here: http://zfsonlinux.org/epel.html.
The commands below are for CentOS 6.7; the installation for CentOS 7 is exactly the same, except for the repository package names:
[root@CentosZFS ~]# yum localinstall --nogpgcheck https://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@CentosZFS ~]# yum localinstall --nogpgcheck http://archive.zfsonlinux.org/epel/zfs-release.el6.noarch.rpm
[root@CentosZFS ~]# yum install -y kernel-devel zfs
After some time, you should be ready to load and use the ZFS modules:
[root@CentosZFS ~]# modprobe zfs
[root@CentosZFS ~]# lsmod | grep zfs
zfs 2735595 0
zcommon 48128 1 zfs
znvpair 80220 2 zfs,zcommon
spl 90378 3 zfs,zcommon,znvpair
zavl 7215 1 zfs
zunicode 323046 1 zfs
You’re now ready to create a pool on your attached disks:
[root@CentosZFS ~]# zpool create -f datapool mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
[root@CentosZFS ~]# zpool status
pool: datapool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sdb ONLINE 0 0 0
sdc ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
sdd ONLINE 0 0 0
sde ONLINE 0 0 0
errors: No known data errors
These commands also installed the aforementioned ZED for you.
System Tools
You will need some system tools. Get used to them.
smartctl: The smartmontools package contains two utility programs (smartctl and smartd) to control and monitor storage systems. It uses the Self-Monitoring, Analysis, and Reporting Technology (SMART) system built into most modern ATA/SATA, SCSI/SAS, and NVMe disks.
lsblk: Tells you what block devices you have. It will assist you in identifying the drive names you will use while setting up your ZFS pool.
blkid: Helps you identify drives already used by other file systems. You may want to use mount and df for that purpose too.
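By way of illustration, here are a few typical invocations of these tools. The device name /dev/sdb is an example, and smartctl and blkid usually need root:

```shell
# List block devices with size and type -- useful for picking pool members.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Show existing filesystem signatures, so you don't reuse a disk in use.
sudo blkid /dev/sdb

# Print the SMART health summary for a disk (device name is an example).
sudo smartctl -H /dev/sdb
```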
ZED
As mentioned, zed is a daemon that listens to kernel events related to ZFS. Upon receiving an event, it will run any action defined in so-called ZEDLETs: scripts or programs that carry out whatever action they are supposed to do. ZED is a Linux-specific daemon. In illumos distributions, FMA is the layer responsible for carrying out corrective actions.
Writing ZEDLETs is a topic beyond this guide, but the daemon is essential for two important tasks: monitoring and reporting (via mail) pool health and replacing failed drives with hot spares.
Even though it is ZFS itself that is responsible for marking a drive as faulty, the replacement action needs to be carried out by a separate entity.
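For hot-spare replacement to have anything to work with, the pool needs a spare attached in the first place. A sketch, assuming a free disk /dev/sdh and the datapool pool created earlier:

```shell
# Attach a hot spare to the pool; /dev/sdh is an example device.
sudo zpool add datapool spare /dev/sdh

# The spare appears in its own "spares" section of the status output.
sudo zpool status datapool
```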
For those actions to work, after installing the daemon, open its configuration file. It's usually found in /etc/zfs/zed.d/zed.rc:
# zed.rc
# Absolute path to the debug output file.
# ZED_DEBUG_LOG="/tmp/zed.debug.log"
# Email address of the zpool administrator.
# Email will only be sent if ZED_EMAIL is defined.
ZED_EMAIL="[email protected]"
# Email verbosity.
# If set to 0, suppress email if the pool is healthy.
# If set to 1, send email regardless of pool health.
#ZED_EMAIL_VERBOSE=0
# Minimum number of seconds between emails sent for a similar event.
#ZED_EMAIL_INTERVAL_SECS="3600"
# Default directory for zed lock files.
#ZED_LOCKDIR="/var/lock"
# Default directory for zed state files.
#ZED_RUNDIR="/var/run"
# The syslog priority (eg, specified as a "facility.level" pair).
ZED_SYSLOG_PRIORITY="daemon.notice"
# The syslog tag for marking zed events.
ZED_SYSLOG_TAG="zed"
# Replace a device with a hot spare after N I/O errors are detected.
#ZED_SPARE_ON_IO_ERRORS=1
# Replace a device with a hot spare after N checksum errors are detected.
#ZED_SPARE_ON_CHECKSUM_ERRORS=10
Notice ZED_EMAIL, ZED_SPARE_ON_IO_ERRORS, and ZED_SPARE_ON_CHECKSUM_ERRORS. Uncomment them if you want this functionality.
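After editing zed.rc, restart the daemon so that it picks up the changes. A sketch; the sed edits simply uncomment the two spare-replacement settings, and the unit name zfs-zed is an assumption that may be just zed on some releases:

```shell
# Uncomment the hot-spare settings in zed.rc, then restart the daemon.
sudo sed -i \
    -e 's/^#ZED_SPARE_ON_IO_ERRORS/ZED_SPARE_ON_IO_ERRORS/' \
    -e 's/^#ZED_SPARE_ON_CHECKSUM_ERRORS/ZED_SPARE_ON_CHECKSUM_ERRORS/' \
    /etc/zfs/zed.d/zed.rc

# Unit name may be "zfs-zed" or "zed", depending on the distribution.
sudo systemctl restart zfs-zed
```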
You can view the kernel messages that zed will listen to by using zpool events with or without the -v switch. Without the switch, you will receive a list similar to this one:
trochej@ubuntuzfs:~$ sudo zpool events
TIME CLASS
Feb 15 2016 17:43:08.213103724 resource.fs.zfs.statechange
Feb 15 2016 17:43:08.221103592 resource.fs.zfs.statechange
Feb 15 2016 17:43:08.221103592 resource.fs.zfs.statechange
Feb 15 2016 17:43:08.661096327 ereport.fs.zfs.config.sync
Feb 15 2016 18:07:39.521832629 ereport.fs.zfs.zpool.destroy
These events are fairly self-explanatory; in this case, they relate directly to the creation, import, and destruction of a pool.
With the -v switch, the output is more verbose:
trochej@ubuntuzfs:~$ sudo zpool events -v
TIME CLASS
Feb 15 2016 17:43:08.213103724 resource.fs.zfs.statechange
version = 0x0
class = "resource.fs.zfs.statechange"
pool_guid = 0xa5c256340cb6bcbc
pool_context = 0x0
vdev_guid = 0xba85b9116783d317
vdev_state = 0x7
time = 0x56c2001c 0xcb3b46c
eid = 0xa
Feb 15 2016 17:43:08.213103724 resource.fs.zfs.statechange
version = 0x0
class = "resource.fs.zfs.statechange"
pool_guid = 0xa5c256340cb6bcbc
pool_context = 0x0
vdev_guid = 0xbcb660041118eb95
vdev_state = 0x7
time = 0x56c2001c 0xcb3b46c
eid = 0xb