Ceph setup

Triple-O has built-in support to deploy a Ceph cluster and use it as the primary backing store for Cinder and for Glance. In Chapter 1, RDO Installation, there was a ceph parameter and a storage environment file that were passed to the overcloud deployment command. They set up Ceph as the backing store for your Triple-O deployment. If you left those two options out of the deployment command, Triple-O would default back to LVM as the backing store for Cinder.
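
For reference, the deployment command with Ceph enabled looks roughly like the following. Treat this as an illustrative sketch rather than the exact command from Chapter 1: the storage-environment.yaml path and the --ceph-storage-scale flag are the usual TripleO names, but verify them against the templates and tripleoclient version installed on your undercloud.

    $ openstack overcloud deploy --templates \
      -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
      --ceph-storage-scale 1  # flag and path may differ between TripleO releases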

GlusterFS setup

This demonstration starts from an LVM-configured backing store. If you would like to follow along, you will need to start with a deployment that has LVM configured rather than Ceph.
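
A quick way to confirm which backing store your deployment is currently using is to check the driver settings on the controller node. This is just an illustrative check; it assumes the stock configuration file location:

    control# grep -E 'enabled_backends|volume_driver' /etc/cinder/cinder.conf  # stock path assumed

On an LVM-backed deployment, this should show only the LVM driver and no GlusterFS entries yet.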

Conveniently, a simple GlusterFS installation is not complicated to set up. Assume three RPM-based Linux nodes named gluster1, gluster2, and gluster3, each with an sdb drive attached for use by the GlusterFS storage cluster. XFS is the recommended file system, although ext4 will work fine in some cases; research the pros and cons of each file system with respect to GlusterFS before you deploy a production GlusterFS storage cluster. Create a partition and a file system on the sdb disk. We'll begin the demonstration for this book by mounting the disk, then creating and starting the GlusterFS volume. Step 1 should be performed on each of the GlusterFS nodes:
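
Partitioning and formatting the sdb disk is assumed in the steps that follow. As a minimal sketch, assuming the data disk appears as /dev/sdb on every node, it could be done as follows (the -i size=512 inode size is commonly recommended for GlusterFS bricks so that extended attributes fit in the inode):

    # parted -s /dev/sdb mklabel gpt  # assumes the data disk is /dev/sdb
    # parted -s /dev/sdb mkpart primary xfs 1MiB 100%
    # mkfs.xfs -i size=512 /dev/sdb1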

  1. Start by preparing the host, installing GlusterFS, and starting the glusterd service:
    # mkdir -p /export/sdb1 && mount /dev/sdb1 /export/sdb1
    # echo "/dev/sdb1 /export/sdb1 xfs defaults 0 0" >> /etc/fstab
    # yum install -y glusterfs{,-server,-fuse,-geo-replication}
    # service glusterd start
    
  2. The following commands should be run on only one node, as they propagate across the GlusterFS storage cluster via the Gluster services:
    # gluster peer probe gluster2
    # gluster peer probe gluster3
    # gluster volume create openstack-cinder replica 3 transport tcp gluster1:/export/sdb1 gluster2:/export/sdb1 gluster3:/export/sdb1
    # gluster volume start openstack-cinder
    # gluster volume status
    

    The last command should show you the Gluster volume you just created and the bricks that are being used to store the openstack-cinder volume. These commands set up a three-node Gluster installation in which each node holds a replica, meaning that all of the data lives on all three nodes. Now that we have GlusterFS storage available, let's configure Cinder to know about it and present it to the end user as a backing storage option for Cinder volumes.

  3. Now that GlusterFS is set up, we need to tell Cinder about it. Let's configure Cinder to use GlusterFS as a second backing store. Start by editing /etc/cinder/cinder.conf; make sure that the enabled_backends option is defined with the following values and that the respective configuration sections are defined in the file:
    enabled_backends=my_lvm,my_glusterfs
    [my_lvm]
    volume_group = cinder-volumes
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    volume_backend_name = LVM
    [my_glusterfs]
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/shares.conf
    glusterfs_sparsed_volumes = false
    volume_backend_name = GLUSTER
    

    The my_lvm definition preserves the existing LVM setup that has already been used to create a Cinder volume. The my_glusterfs section defines the options used to attach to the GlusterFS storage we have just configured. You will also need to edit the /etc/cinder/shares.conf file, which defines the connection information for the GlusterFS nodes. Reference the first and second Gluster nodes in shares.conf; it contains the following line:

    gluster1:/openstack-cinder -o backupvolfile-server=gluster2:/openstack-cinder
    
  4. Next, you'll need to restart the Cinder services so that they pick up the new configuration added to the cinder.conf file:
    control# service openstack-cinder-scheduler restart
    control# service openstack-cinder-volume restart
    control# mount | grep cinder
    gluster1:/openstack-cinder on /var/lib/cinder/... type fuse.glusterfs
    
  5. The mount command shown here just verifies that Cinder has automatically mounted the GlusterFS share defined in shares.conf. If you don't see it mounted, then something has gone wrong; in that case, check the Cinder and Gluster logs for errors to troubleshoot why Cinder couldn't mount the Gluster volume. At this point, the backing stores have been defined, but nothing has been exposed to the end user yet. To present the end user with this new configuration, Cinder type definitions must be created through the API:
    control# cinder type-create lvm
    control# cinder type-key lvm set volume_backend_name=LVM
    control# cinder type-create glusterfs
    control# cinder type-key glusterfs set volume_backend_name=GLUSTER
    control# cinder type-list
    
  6. Now there are two types available that can be specified when a new volume is created. Further, when you list the volumes in Cinder, each will show the volume type that corresponds to the backing store it uses (a quick verification sketch follows this list):
    control# cinder create --volume-type glusterfs 1
    control# cinder list
    
  7. The next time you create a new volume in the web interface, the two types will be available for selection on the volume creation dialog.
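
After creating the volume in step 6, an administrator can confirm which backing store it actually landed on by inspecting the volume's host attribute. This check is a sketch rather than part of the original steps; the host value takes the hostname@backend form, for example something like control@my_glusterfs#GLUSTER:

    control# cinder show <volume-id> | grep os-vol-host-attr:host  # use the volume ID reported by cinder list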