TripleO has built-in support for deploying a Ceph cluster and using it as the primary backing store for Cinder and Glance. In Chapter 1, RDO Installation, a ceph parameter and a storage environment file were passed to the overcloud deployment command; together, they set up Ceph as the backing store for your TripleO deployment. If you leave those two options out of the deployment command, TripleO defaults to LVM as the backing store for Cinder.
This demonstration starts from an LVM-configured backing store, so if you would like to follow along, you will need a deployment that has LVM configured rather than Ceph.
Conveniently enough, a simple GlusterFS installation is not complicated to set up. Assume three RPM-based Linux nodes named gluster1, gluster2, and gluster3, each with an sdb drive attached for use by a GlusterFS storage cluster. The XFS file system is recommended, although an ext4 file system will work fine in some cases. Research the pros and cons of each file system as they relate to GlusterFS before you deploy a production GlusterFS storage cluster. Create a partition and a file system on the sdb disk. We'll begin our demonstration by mounting the disk and then creating and starting the GlusterFS volume. Perform the following steps on each of the GlusterFS nodes:
# mkdir -p /export/sdb1 && mount /dev/sdb1 /export/sdb1
# echo "/dev/sdb1 /export/sdb1 ext4 defaults 0 0" >> /etc/fstab
# yum install -y glusterfs{,-server,-fuse,-geo-replication}
# service glusterd start

Then, from gluster1 only, probe the other two peers and create and start the replicated volume:

# gluster peer probe gluster2
# gluster peer probe gluster3
# gluster volume create openstack-cinder rep 3 transport tcp gluster1:/export/sdb1 gluster2:/export/sdb1 gluster3:/export/sdb1
# gluster volume start openstack-cinder
# gluster volume status
The last command should show you the Gluster volume you just created and the bricks being used to store the GlusterFS volume openstack-cinder. These commands set up a three-node Gluster installation in which each node is a replica; that means all of the data lives on all three nodes. Now that we have GlusterFS storage available, let's configure Cinder to know about it and present it to the end user as a backing storage option for Cinder volumes.
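As a quick sanity check before wiring Cinder up, the replicated volume can be mounted directly with the GlusterFS FUSE client. This is an optional sketch, not part of the original steps; /mnt is an arbitrary mount point:

```
# On any machine with glusterfs-fuse installed:
mount -t glusterfs gluster1:/openstack-cinder /mnt
echo probe > /mnt/probe.txt

# With "rep 3", the file should now appear in the brick
# directory on each of the three nodes:
ls /export/sdb1
umount /mnt
```

If the file shows up under every node's brick directory, replication is working and you can move on to the Cinder configuration.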
Edit the /etc/cinder/cinder.conf file on the control node; make sure that the enabled_backends option is defined with the following values and that the respective configuration sections are defined in the file:

enabled_backends=my_lvm,my_glusterfs

[my_lvm]
volume_group = cinder-volumes
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name = LVM

[my_glusterfs]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_sparsed_volumes = false
volume_backend_name = GLUSTER
The my_lvm section preserves the existing LVM setup that has already been used to create a Cinder volume. The my_glusterfs section defines the options for attaching to the GlusterFS storage we have just configured. You will also need to edit the /etc/cinder/shares.conf file, which defines the connection information for the GlusterFS nodes. Reference the first and second Gluster nodes in the shares.conf file; it contains the following line:

gluster1:/openstack-cinder -o backupvolfile-server=gluster2:/openstack-cinder
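Each line in shares.conf names one Gluster share, optionally followed by mount options. A sketch of the general format (the hostnames are from this example; backupvolfile-server tells the FUSE client which fallback node to fetch the volume file from if the first is unreachable):

```
# <gluster-host>:/<volume-name> [-o <mount-options>]
gluster1:/openstack-cinder -o backupvolfile-server=gluster2:/openstack-cinder
```

This is a config fragment only; adjust the hostnames and volume name to match your cluster.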
Once these changes have been made, restart the Cinder scheduler and volume services on the control node to pick up the new cinder.conf settings, then verify that the Gluster shares have been mounted:

control# service openstack-cinder-scheduler restart
control# service openstack-cinder-volume restart
control# mount | grep cinder
gluster1:openstack-cinder on /var/lib/cinder/... type fuse.glusterfs
gluster2:openstack-cinder on /var/lib/cinder/... type fuse.glusterfs
The mount command shown here verifies that Cinder has automatically mounted the Gluster shares defined in shares.conf. If you don't see them mounted, something has gone wrong; check the Cinder logs and the Gluster logs for errors to troubleshoot why Cinder couldn't mount the Gluster volume. At this point, the backing stores have been defined, but no end user configuration has been exposed. To present the end user with this new configuration, Cinder type definitions must be created through the API:

control# cinder type-create lvm
control# cinder type-key lvm set volume_backend_name=LVM
control# cinder type-create glusterfs
control# cinder type-key glusterfs set volume_backend_name=GLUSTER
control# cinder type-list
control# cinder create --volume-type glusterfs 1
control# cinder list
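As an additional admin-side check (a sketch; the volume ID is a placeholder you take from cinder list), cinder show reports which backend is hosting the volume via the os-vol-host-attr:host attribute:

```
control# cinder show <volume-id> | grep os-vol-host-attr:host
# For a volume of type glusterfs, the host string should include the
# backend section name from cinder.conf, for example ...@my_glusterfs
```

If the host string instead references my_lvm, the type-to-backend mapping in the type-key commands above needs to be rechecked.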