© Damian Wojsław 2017

Damian Wojsław, Introducing ZFS on Linux, https://doi.org/10.1007/978-1-4842-3306-1_6

6. Sharing

Damian Wojsław

(1) ul. Duńska 27i/8, Szczecin, 71-795 Zachodniopomorskie, Poland

Once you have your storage set up and configured the way you like, it is time to start using it. One way is to use the space for local programs running on the same server as the ZFS pool. This is particularly useful if you intend to host services such as mail, web pages, or applications (an internal or external CRM, perhaps). On the other hand, you may need to provide common disk space to client machines, for example, workstations that will store data on the server or share documents for editing.

Your choice of connection method, known as the sharing protocol, is dictated by the way you are going to use the space.

Sharing Protocols

As with any storage array, there are two basic ways you can share the disk space: as a character device or as a block device. The difference is in how the device is used and relates to the two basic groups of devices in Linux: character devices and block devices. For our needs, the difference can be summed up this way: a character device will be, in our context, a file system that can be mounted and used directly to store and retrieve files; a block device is a pseudo-device that can only be used by treating it as if it were a hard drive itself.
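
To make the distinction concrete, here is a minimal sketch of both kinds of ZFS datasets; the names tank/documents and tank/blockvol are hypothetical:

# A regular ZFS file system: it gets a mountpoint and stores files directly
sudo zfs create tank/documents

# A ZVOL (block device): it appears under /dev/zvol/ and must be
# formatted or exported like a disk before it can hold files
sudo zfs create -V 10G tank/blockvol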

Given a DIY small storage array, character devices will be shared via one of the two popular network file system sharing protocols: NFS or CIFS. Block devices will most likely be shared via the iSCSI protocol. While you may decide to use the FC or FCoE protocols, I am not going to cover them here.

The original ZFS implementation allows for quick sharing through the NFS and CIFS protocols. The commands are tightly bound to ZFS itself and are represented as a file system or zvol property. At the time this guide is written, the native ZFS share commands either don't work on the Linux platform or work unreliably. As with ACLs, you need to use the Linux-native tools (the iSCSI target tools, Samba, and the NFS server) to provide this functionality.
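
For reference, this is what the native share properties look like; as noted above, on Linux they may not work or may work unreliably at the time of writing, so treat this only as a sketch:

sudo zfs set sharenfs=on tank/export      # native NFS sharing property
sudo zfs set sharesmb=on tank/export      # native SMB/CIFS sharing property
sudo zfs get sharenfs,sharesmb tank/export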

Note

Please be aware that describing complex NFS, Samba, or iSCSI configurations warrants separate books of their own; they are out of the scope of this simple guide. There are a number of books and a very large number of tutorials for each of them available on the Internet, in case you need to build something more complex.

NFS: Linux Server

NFS is a flexible and proven network storage sharing protocol. It was conceived by Sun Microsystems in 1984 as a networked file system for distributed environments. One quite common use in the Unix world is to host users' home directories on the NFS server and automount them on a given machine when the user logs in. Thus, the same home directory is always available in one central location (which is easy to back up and restore), but reachable from any workstation that's configured to use the NFS server.
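
As an illustration of the home directory use case, a minimal automount sketch could look like the following; it assumes the autofs package is installed and uses a hypothetical server name, nfsserver:

# /etc/auto.master: hand the /home directory over to the automounter
/home   /etc/auto.home

# /etc/auto.home: mount each user's home from the NFS server on demand
*   -fstype=nfs4   nfsserver:/home/&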

NFS is quite common in the Unix and Linux world and is a standard way of sharing disk space between server and client machines. If, on the other hand, you need to use the disk space from Windows systems, it will be beneficial to configure a Samba server.

There are two dominant versions of the NFS protocol: version 3 and version 4. If possible, use version 4, as it is now well supported by the major Linux distributions. Version 4 adds many performance and security improvements and makes strong security mandatory. I present the steps to install and configure NFSv4; the packages are the same for both versions, but some of the configuration differs. Before you start using NFS on the server and client machines, there are some steps you need to take. First, the packages need to be installed.

Installing Packages on Ubuntu

To install and configure NFS server on Ubuntu, run the following:

trochej@ubuntu:~$ sudo apt-get install nfs-kernel-server
[sudo] password for trochej:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
 keyutils libnfsidmap2 libpython-stdlib libpython2.7-minimal libpython2.7-stdlib
 libtirpc1 nfs-common python python-minimal python2.7 python2.7-minimal rpcbind
Suggested packages:
 watchdog python-doc python-tk python2.7-doc binutils binfmt-support
The following NEW packages will be installed:
 keyutils libnfsidmap2 libpython-stdlib libpython2.7-minimal libpython2.7-stdlib
 libtirpc1 nfs-common nfs-kernel-server python python-minimal python2.7
 python2.7-minimal rpcbind
0 upgraded, 13 newly installed, 0 to remove and 96 not upgraded.
Need to get 4,383 kB of archives.
After this operation, 18.5 MB of additional disk space will be used.
Do you want to continue? [Y/n]

After you press Y and confirm with Enter, the system will print a list of the packages it installs. Your output may vary from what's shown here, depending on what you have already installed. Assume that the pool tank and the file system tank/export exist on the server:

trochej@ubuntu:~$ sudo zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
tank           80K  1.92G    19K  /tank
tank/export    19K  1.92G    19K  /tank/export
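
If the file system does not exist yet, it can be created with a single command; a minimal sketch, assuming the pool tank is already in place:

sudo zfs create tank/export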

Edit the /etc/exports file (it is a listing of the directories exported via the NFS protocol and the various options applied to them) and add this line:

/tank/export       192.168.0.0/24(rw,fsid=0,sync)

This will make the /tank/export file system available to all hosts in the 192.168.0.0/24 network.

The fsid=0 option tells the NFS server that this directory is the root of the exported file systems. The rw option makes the export read-write. The sync option tells the server to confirm a write only once the buffer has been committed to the physical media.
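
In a slightly less permissive setup you would typically also restrict what a remote root user can do; a sketch using common exports(5) options:

/tank/export       192.168.0.0/24(rw,fsid=0,sync,root_squash,no_subtree_check)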

To make this export available over the network, the kernel server needs to be restarted:

sudo systemctl restart nfs-kernel-server
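
Alternatively, if the server is already running, you can re-read /etc/exports without a full restart:

sudo exportfs -ra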

The last thing to make sure of is to change the permissions on the exported file system so that the remote machines can write to it:

trochej@ubuntu:~$ sudo chmod -R a+rwX /tank/

You can confirm the export by running the exportfs command:

trochej@ubuntu:~$ sudo exportfs
/tank/export    192.168.0.0/24

Installing NFS Client on Ubuntu

To install and configure the NFS client on an Ubuntu machine, run:

sudo apt-get install nfs-common

Then test the mount by running this:

sudo mount -t nfs4 -o proto=tcp,port=2049 192.168.0.9:/ /mnt

This tells the mount command to mount the remote file system exported by the server at 192.168.0.9 onto the /mnt directory. Making it persistent across reboots requires you to add this line to the /etc/fstab file:

192.168.0.9:/   /mnt   nfs4    _netdev,auto  0  0

From now on, your ZFS pool is exported and available remotely to the client machine.
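
A quick way to confirm on the client that the remote file system is really mounted:

df -h /mnt
mount | grep nfs4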

Installing Packages on CentOS

To achieve the same on CentOS, run the following:

[root@centos ~]# yum install nfs-utils

Change permissions on the directory:

[root@centos ~]# chmod -R a+rwX /tank/export

Next, add the appropriate entry to /etc/exports:

[root@centos ~]# cat /etc/exports
/tank/export 192.168.0.0/24(rw,fsid=0,sync)

Finally, restart the NFS server:

[root@centos ~]# systemctl restart nfs-server

Mounting it on the client is similar.

[root@centos ~]# yum install nfs-utils
[root@centos ~]# mount -t nfs4 -o proto=tcp,port=2049 192.168.0.9:/ /mnt

As with Ubuntu, to make the mount automatic on every system boot, add the following line to your /etc/fstab file:

192.168.0.9:/   /mnt   nfs4    _netdev,auto  0  0
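
On CentOS you may also need to enable the NFS server at boot and open the firewall on the server side; a sketch assuming firewalld is in use:

[root@centos ~]# systemctl enable nfs-server
[root@centos ~]# firewall-cmd --permanent --add-service=nfs
[root@centos ~]# firewall-cmd --reload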

This setup is very crude. No security has been applied and absolutely no external user authentication method is in use. Usually, in a production environment, you will want to use some kind of central user database, such as LDAP or Active Directory.

Samba

Configuring Samba is more complex, even for the simplest setups. It requires editing the appropriate configuration file; on Ubuntu it is /etc/samba/smb.conf.

Below is the absolutely smallest smb.conf file I could come up with:

[global]
  workgroup = WORKGROUP
  server string = %h server (Samba, Ubuntu)
  dns proxy = no
  server role = standalone server
  passdb backend = tdbsam


[shared]
  comment = Shared ZFS Pool
  path = /tank/
  browseable = yes
  read only = no
  guest ok = yes
  writeable = yes

The configuration above is absolutely unfit for the real world. It offers no sensible logging, no security, and no password synchronization, just anonymous access to the exported pool. But it serves its purpose for a test.
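
This guide assumes the Samba server itself is already installed; if it is not, a minimal sketch for Ubuntu is to install it, validate the configuration, and restart the daemon:

sudo apt-get install samba
testparm /etc/samba/smb.conf
sudo systemctl restart smbd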

Mounting this on a Linux machine is simple:

sudo mount -t cifs //CIFSSERVER/shared /mnt

Here, CIFSSERVER is the IP address or resolvable network name of the Samba server. Note that once users get involved, the line above will have to change.
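
For example, once anonymous access is disabled, the mount will need the cifs-utils package and user credentials; a sketch with a hypothetical user name, youruser:

sudo apt-get install cifs-utils
sudo mount -t cifs -o username=youruser //CIFSSERVER/shared /mnt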

Mounting this share on a Windows machine is as simple as opening an Explorer window, navigating to CIFSSERVER in the network, and opening the share. Done.

As with NFS, you will most probably want to involve an additional directory service of some kind, such as LDAP. You absolutely must not use anonymous shares in the real world. Just don't.

As with NFS, the material to learn is a book of its own, and there is an abundance of sources on the Internet.

Other Sharing Protocols

ZFS allows for even more ways of sharing. Of special interest might be iSCSI or Fibre Channel. SCSI (Small Computer System Interface) is the de facto standard for connecting hard drives to servers in enterprise setups. Currently, Serial Attached SCSI (commonly known as SAS) is the technology to use. While the protocol was designed to connect many other peripherals to a computer, in server rooms it is dominant for connecting drives.

As noted, ZFS can create file systems that act like directories. It can also create block devices, called ZVOLs. They are treated like normal hard drives that can be partitioned and formatted. They can also be exported as physical drives by means of the iSCSI protocol.

iSCSI is an implementation of the SCSI protocol over TCP/IP networks. It allows you to send SCSI commands to storage devices over the network, as if they were directly attached to the system.

Two important SCSI (and hence iSCSI) terms are initiator and target. The target is the storage resource; in this scenario, it's made available over the network. The initiator is the iSCSI client. To use the storage, the initiator must log in to the target and initiate a session. If configured to do so, the target can force the client to authenticate to the server.

Using the iSCSI protocol on the Linux platform is pretty easy. You need to create ZVOLs and export each of them as a LUN (logical unit).
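
The examples below use the user-space SCSI target daemon; on Ubuntu it is provided by the tgt package, so install it first if it is not already present:

sudo apt-get install tgt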

First, let's create the ZVOLs to be used as virtual drives. They will be vol01, vol02, vol03, and vol04, living in the data pool.

sudo zfs create -V 5gb data/vol01
sudo zfs create -V 5gb data/vol02
sudo zfs create -V 5gb data/vol03
sudo zfs create -V 5gb data/vol04

The next step is to create the four iSCSI targets that will present the ZVOLs to the client machines:

sudo tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2016.temp:storage.lun01
sudo tgtadm --lld iscsi --op new --mode target --tid 2 -T iqn.2016.temp:storage.lun02
sudo tgtadm --lld iscsi --op new --mode target --tid 3 -T iqn.2016.temp:storage.lun03
sudo tgtadm --lld iscsi --op new --mode target --tid 4 -T iqn.2016.temp:storage.lun04

Once you’re done, the ZVOLs must be exported as LUNs via the previously configured targets:

sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/zvol/data/vol01
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /dev/zvol/data/vol02
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 3 --lun 1 -b /dev/zvol/data/vol03
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 4 --lun 1 -b /dev/zvol/data/vol04
sudo tgtadm --lld iscsi --mode target --op bind --tid 1 -I ALL
sudo tgtadm --lld iscsi --mode target --op bind --tid 2 -I ALL
sudo tgtadm --lld iscsi --mode target --op bind --tid 3 -I ALL
sudo tgtadm --lld iscsi --mode target --op bind --tid 4 -I ALL
sudo tgt-admin --dump | sudo tee /etc/tgt/targets.conf

You can confirm the configuration by running the tgtadm command. The following output has been cut for brevity:

trochej@hypervizor:~$ sudo tgtadm --mode tgt --op show
Target 1: iqn.2016.temp:storage.lun01
        System information:
                Driver: iscsi
                State: ready
        I_T nexus information:
        LUN information:
                LUN: 0
                        Type: controller
                        SCSI ID: IET     00010000
                        SCSI SN: beaf10
                        Size: 0 MB, Block size: 1
                        Online: Yes
                        Removable media: No
                        Prevent removal: No
                        Readonly: No
                        SWP: No
                        Thin-provisioning: No
                        Backing store type: null
                        Backing store path: None
                        Backing store flags:
                LUN: 1
                        Type: disk
                        SCSI ID: IET     00010001
                        SCSI SN: beaf11
                        Size: 5369 MB, Block size: 512
                        Online: Yes
                        Removable media: No
                        Prevent removal: No
                        Readonly: No
                        SWP: No
                        Thin-provisioning: No
                        Backing store type: rdwr
                        Backing store path: /dev/zvol/data/vol01
                        Backing store flags:
        Account information:
        ACL information:
                ALL
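
On the client side, the iscsiadm utility comes from the open-iscsi package on Ubuntu (iscsi-initiator-utils on CentOS); install it before running the commands that follow:

sudo apt-get install open-iscsi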

Connecting initiators to targets is done by using the iscsiadm command:

iscsiadm -m discovery -t sendtargets -p 192.168.0.9
192.168.0.9:3260,1 iqn.2016.temp:storage.lun01:target1

This command will print the targets configured on the server. To start using them, the client machine needs to log in and start a session:

iscsiadm -m node -T iqn.2016.temp:storage.lun01:target1 --login
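
You can list the active sessions and, when the storage is no longer needed, log out of the target again:

iscsiadm -m session
iscsiadm -m node -T iqn.2016.temp:storage.lun01:target1 --logout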

You can confirm that the disks appeared in the system by grepping dmesg:

root@madtower:/home/trochej# dmesg | grep "Attached SCSI disk"
[...]
        [ 3772.041014] sd 5:0:0:1: [sdc] Attached SCSI disk
        [ 3772.041016] sd 4:0:0:1: [sdb] Attached SCSI disk
        [ 3772.047183] sd 6:0:0:1: [sde] Attached SCSI disk
        [ 3772.050148] sd 7:0:0:1: [sdd] Attached SCSI disk
[...]

With four LUNs available in the system, the only step remaining is to use them as you would any other physical drive. You can create an LVM pool on them or even another ZFS pool:

root@madtower:/home/trochej# zpool create -f datapool mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
root@madtower:/home/trochej# zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
datapool  9.94G  68.5K  9.94G         -     0%     0%  1.00x  ONLINE  -
rpool      444G   133G   311G         -    29%    29%  1.00x  ONLINE  -


root@madtower:/home/trochej# zpool status datapool
  pool: datapool
 state: ONLINE
  scan: none requested
 config:


                NAME         STATE     READ WRITE CKSUM
                datapool    ONLINE        0     0     0
                  mirror-0  ONLINE        0     0     0
                       sdb  ONLINE        0     0     0
                       sdc  ONLINE        0     0     0
                  mirror-1  ONLINE        0     0     0
                       sdd  ONLINE        0     0     0
                       sde  ONLINE        0     0     0


errors: No known data errors

You have a lot of choices when exporting your pool for use by client machines. I've only covered three of them, as they seem to be the most popular.
