File

As the name indicates, file storage is backed by some form of filesystem that stores files and directories. In traditional storage deployments, file storage is normally provided by servers acting as file servers or by network-attached storage (NAS). File-based storage can be served over several protocols and can sit on several different types of filesystems.

The two most common file-access protocols are SMB and NFS, which are widely supported by many clients. SMB is traditionally seen as a Microsoft protocol, being the native file-sharing protocol in Windows, whereas NFS is seen as the protocol used on Unix-based infrastructures.

As we shall see later, both Ceph's RBDs and its own CephFS filesystem can be used as a basis for exporting file-based storage to clients. RBDs can be mounted on a proxy server, with a local filesystem then placed on top; from there, exporting via NFS or SMB is much the same as on any other server with local storage. When using CephFS, which is itself a filesystem, there are direct interfaces to both NFS and SMB server software, minimizing the number of layers in the stack.
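
A minimal sketch of the RBD approach on the proxy server might look like the following; the pool and image names, mount point, image size, client subnet, and export options are placeholders and should be adjusted to suit your environment:

    # Create and map an RBD image, place a local filesystem on top, and mount it
    rbd create rbd/nfs-share --size 102400
    rbd map rbd/nfs-share
    mkfs.xfs /dev/rbd/rbd/nfs-share
    mkdir -p /mnt/nfs-share
    mount /dev/rbd/rbd/nfs-share /mnt/nfs-share

    # Add the mount to /etc/exports and reload the kernel NFS server
    echo '/mnt/nfs-share 192.168.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra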

There are a number of advantages to exporting CephFS instead of a filesystem sitting on top of an RBD. These mainly center on reducing the number of layers that I/O has to pass through and the number of components in an HA setup. As was discussed earlier, most local filesystems can only be mounted on one server at a time, otherwise corruption will occur. Therefore, when designing an HA solution involving RBDs and local filesystems, care needs to be taken to ensure that the clustering solution won't try to mount the RBD and filesystem across multiple nodes. This is covered in more detail later in this chapter, in the section on clustering.

There is, however, one possible reason for wanting to export RBDs formatted with local filesystems: the RBD component of Ceph is much simpler in its operation and has been marked as stable for much longer than CephFS. While CephFS has proved to be very stable, thought should be given to the operational side of the solution, and you should ensure that the operator is happy managing CephFS.

To export CephFS via NFS, there are two possible solutions. One is to use the CephFS kernel client to mount the filesystem into the operating system and then use the kernel NFS server to export it to clients. Although this configuration should work perfectly fine, both the kernel NFS server and the CephFS kernel client will typically require the operator to run a fairly recent kernel to support the latest features.
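
As a rough sketch of this kernel-based approach, the following assumes a monitor at 192.168.0.1, the admin CephX user, and example mount point and client subnet values:

    # Mount CephFS with the kernel client
    mkdir -p /mnt/cephfs
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # Re-export it with the kernel NFS server; an fsid is needed because
    # CephFS has no local block device for the NFS server to derive one from
    echo '/mnt/cephfs 192.168.0.0/24(rw,sync,fsid=100,no_subtree_check)' >> /etc/exports
    exportfs -ra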

A much better idea is to use nfs-ganesha, which has support for communicating directly with CephFS filesystems. As Ganesha runs entirely in user space, there is no requirement for specific kernel versions, and the supported CephFS client functionality can keep up with the current state of the Ceph project. There are also several enhancements in Ganesha that the kernel NFS server doesn't support. Additionally, HA NFS should be easier to achieve with Ganesha than with the kernel server.
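
A minimal CephFS export in ganesha.conf might look something like the following sketch; the export ID, pseudo path, and CephX user shown here are example values:

    EXPORT
    {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        Squash = No_Root_Squash;
        Protocols = 4;
        Transports = TCP;

        # FSAL_CEPH talks to CephFS directly via libcephfs, so no kernel
        # mount of the filesystem is required
        FSAL {
            Name = CEPH;
            User_Id = "admin";
        }
    }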

Samba can be used to export CephFS as a Windows-compatible share. Like NFS, Samba also supports communicating directly with CephFS, so in most cases there should be no need to mount the CephFS filesystem into the OS first. A separate project, CTDB, can be used to provide HA for CephFS-backed Samba shares.
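
A sketch of such a share in smb.conf, using Samba's vfs_ceph module, might look like the following; the share name and the samba CephX user are example values:

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no
        # Kernel share modes rely on a locally mounted filesystem, which does
        # not apply here, so they must be disabled when using vfs_ceph
        kernel share modes = no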

Finally, it is worth noting that, although Linux clients can mount CephFS directly, it may still be preferable to export CephFS to them via NFS or SMB. This is because CephFS clients communicate directly with the Ceph cluster, which in some cases may not be desirable for security reasons. By re-exporting CephFS via NFS, clients can consume the storage without being directly exposed to the Ceph cluster.
