Exporting CephFS via Samba

The Samba project was originally created to allow clients and servers to communicate using Microsoft's SMB protocol, and it has since evolved to the point where it can act as a full Windows domain controller. Because Samba can act as a file server for clients speaking the SMB protocol, it can be used to export CephFS to Windows clients.

There is a separate project called CTDB that is used in conjunction with Samba to create a failover cluster providing highly available SMB shares. CTDB uses the concept of a recovery lock to detect and handle split-brain scenarios. Traditionally, CTDB has stored its recovery lock file on an area of a clustered filesystem; however, this approach does not work well with CephFS, because the timings of CTDB's recovery sequence conflict with the timings of OSD and CephFS MDS failovers. Hence, a RADOS-specific recovery lock helper was developed that allows CTDB to store the recovery lock directly in a RADOS object, which avoids these issues.

In this example, a two-node proxy cluster will be used to export a directory on CephFS as an SMB share that can be accessed from Windows clients, with CTDB providing failover functionality. The share will also make use of CephFS snapshots to enable the Previous Versions functionality in Windows File Explorer.

For this example, you will need two VMs that have functional networking and can reach your Ceph cluster. The VMs can either be manually created or deployed via Ansible; in a lab environment, the Samba software can even be installed directly on the Ceph monitors for testing.

Install the ceph, ctdb, and samba packages on both VMs using the following code:

sudo apt-get install ceph samba ctdb

Copy ceph.conf over from a Ceph monitor node using the following code:

scp mon1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf

Copy the Ceph keyring over from a monitor node using the following code:

scp mon1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring

Your Samba gateways should now be able to act as clients to your Ceph cluster. This can be confirmed by checking that you can query the Ceph cluster's status.
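For example, running the following command on each gateway should return the cluster's health summary rather than an error:

ceph -s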

As mentioned previously, CTDB has a Ceph plugin to store the recovery lock directly in a RADOS pool. In some Linux distributions, this plugin is not shipped along with the Samba and CTDB packages; in Debian-based distributions, it is certainly not currently included. To work around this and avoid having to compile it manually, we will borrow a precompiled version from another distribution.

Download the samba-ceph package from the SUSE repositories using the following code:

wget http://widehat.opensuse.org/opensuse/update/leap/42.3/oss/x86_64/samba-ceph-4.6.7+git.51.327af8d0a11-6.1.x86_64.rpm

Install a utility that will extract the contents of RPM packages using the following code:

apt-get install rpm2cpio

Use the rpm2cpio utility to extract the contents of the RPM package that has just been downloaded using the following code:

rpm2cpio samba-ceph-4.6.7+git.51.327af8d0a11-6.1.x86_64.rpm | cpio -i --make-directories

Finally, copy the CTDB RADOS helper into the bin folder on the VM using the following code:

cp usr/lib64/ctdb/ctdb_mutex_ceph_rados_helper /usr/local/bin/

Make sure all of the steps are carried out on both VMs. Now that all of the required software is installed, we can proceed with the configuration of Samba and CTDB. Both CTDB and Samba ship with example contents in their configuration files; for the purpose of this example, only the bare minimum contents will be shown, and it is left as an exercise for the reader to explore the full range of configuration options available:

nano /etc/samba/smb.conf
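As a guide, a minimal smb.conf might look like the following sketch. The share name (share), the root path, and the use of the vfs_ceph module are assumptions for this example; the ceph:user_id must match the keyring that was copied over earlier:

[global]
    clustering = yes

[share]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = admin
    writable = yes
    ; CephFS snapshots can additionally be exposed to the Windows
    ; Previous Versions feature by stacking the shadow_copy2 VFS
    ; module (configuration not shown here)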

nano /etc/ctdb/ctdbd.conf
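A minimal ctdbd.conf might resemble the following sketch. The recovery lock line uses the helper copied into /usr/local/bin earlier; its arguments are the Ceph cluster name, the cephx user, a RADOS pool, and an object name. The ctdb pool and ctdb_reclock object names are assumptions here, and the pool must already exist in the Ceph cluster (it could be created with ceph osd pool create ctdb 8, for instance):

CTDB_RECOVERY_LOCK="!/usr/local/bin/ctdb_mutex_ceph_rados_helper ceph client.admin ctdb ctdb_reclock"
CTDB_NODES=/etc/ctdb/nodes
CTDB_MANAGES_SAMBA=yes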

nano /etc/ctdb/nodes

On each line, enter the IP address of each node participating in the CTDB Samba cluster, as shown in the example below:
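For instance, if the two VMs sat on a hypothetical 192.168.1.0/24 network (substitute your own addresses), the file would contain:

192.168.1.1
192.168.1.2

The nodes file must be identical on both nodes.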

The last step is to create a Samba user that can be used to access the share. To do this, use the following code:

smbpasswd -a test
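Note that smbpasswd expects a matching Unix account to already exist on the system; if the test user is not present on the VMs, it can be created first with something like the following:

sudo useradd -M test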

Again, make sure this configuration is repeated across both Samba nodes. Once complete, the CTDB service can be started, which should hopefully form quorum and then launch Samba. You can start the CTDB service using the following code:

systemctl restart ctdb

After a few seconds, CTDB will start to mark the nodes as healthy; this can be confirmed by running the following code:

ctdb status

This should hopefully display a status that shows both nodes as OK.
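As an illustration only (your node IPs and generation number will differ), a healthy two-node cluster reports something like the following:

Number of nodes:2
pnn:0 192.168.1.1      OK (THIS NODE)
pnn:1 192.168.1.2      OK
Generation:1636031659
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:0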

It's normal for the status to be unhealthy for a short period after being started, but if the status stays in this state, check the CTDB logs located at /var/log/ctdb for a possible explanation as to what has gone wrong.

Once CTDB enters a healthy state, you should be able to access the CephFS share from any Windows client.
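For example, from a Windows command prompt, the share could be mapped using the placeholder node address and share name from above:

net use Z: \\192.168.1.1\share /user:test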

To provide true HA, you would need a mechanism to steer clients to the IP address of an active node, using something like a load balancer. This is outside the scope of this example.
