In this section, we will deploy a single-node test OpenStack environment that we will integrate with Ceph later in this chapter. For OpenStack deployment, we will use RDO, Red Hat's open source community distribution of OpenStack. For more information on RDO OpenStack, visit http://openstack.redhat.com/.
To perform a single-node OpenStack installation, we will create a new virtual machine named os-node1 and install an OS on it. In our case, we will install CentOS 6.4; you can also choose any other RHEL-based OS instead of CentOS if you like. Proceed with the following steps:
# VBoxManage createvm --name os-node1 --ostype RedHat_64 --register
# VBoxManage modifyvm os-node1 --memory 4096 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet1
# VBoxManage storagectl os-node1 --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on --bootable on
# VBoxManage storageattach os-node1 --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium CentOS-6.4-x86_64-bin-DVD1.iso
# VBoxManage storagectl os-node1 --name "SATA Controller" --add sata --controller IntelAHCI --hostiocache on --bootable on
# VBoxManage createhd --filename OS-os-node1.vdi --size 10240
# VBoxManage storageattach os-node1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium OS-os-node1.vdi
# VBoxManage startvm os-node1 --type gui
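Before starting the OS installation, you can optionally confirm that the VM was registered and has started; this quick check is not part of the required steps:
# VBoxManage list vms
# VBoxManage list runningvms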
Once the OS installation on os-node1 is complete, log in to the VM and edit the /etc/sysconfig/network-scripts/ifcfg-eth2 file to add:
ONBOOT=yes
BOOTPROTO=dhcp
Edit the /etc/sysconfig/network-scripts/ifcfg-eth3 file and add:
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.57.201
NETMASK=255.255.255.0
Edit the /etc/hosts file and add:
192.168.57.101 ceph-node1
192.168.57.102 ceph-node2
192.168.57.103 ceph-node3
192.168.57.200 ceph-client1
192.168.57.201 os-node1
Test network connectivity to the Ceph nodes:
# ping ceph-node1
# ping ceph-node2
# ping ceph-node3
Set SELinux to permissive mode and stop the iptables service to avoid complexity:
# setenforce 0
# service iptables stop
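Note that setenforce 0 and stopping iptables do not persist across reboots. If you want these settings to survive a reboot of os-node1, you can additionally run the following, which is a common approach on RHEL-based systems:
# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# chkconfig iptables off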
In this section, we will give you step-by-step instructions to install the Icehouse release of OpenStack RDO. If you are curious to learn about RDO OpenStack installation, visit http://openstack.redhat.com/Quickstart.
Update the node:
# yum update -y
Install the RDO repository:
# yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
Install OpenStack packstack:
# yum install -y openstack-packstack
Install OpenStack using packstack, which performs a completely hands-free installation of OpenStack:
# packstack --allinone
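If you would rather review or customize the deployment parameters before installing, packstack can also generate an answer file that you edit and then apply; this is an optional alternative to the all-in-one run:
# packstack --gen-answer-file=answers.txt
# packstack --answer-file=answers.txt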
Once packstack completes the installation, it will display some additional information, including the OpenStack Horizon dashboard URL and the credentials that will be used to operate OpenStack. The admin credentials are also saved in the /root/keystonerc_admin file.
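To verify that the OpenStack services came up correctly, you can optionally run the openstack-status utility, which RDO ships in the openstack-utils package:
# openstack-status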
OpenStack is a modular system that has unique components for specific sets of tasks. Several of these components require a reliable storage backend, such as Ceph, and extend full integration to it, as shown in the following diagram. Each of these components uses Ceph in its own way to store block devices and objects. A majority of cloud deployments based on OpenStack and Ceph use the Cinder, Glance, and Swift integrations with Ceph. Keystone integration is used when you need S3-compatible object storage on a Ceph backend. Nova integration brings boot-from-Ceph-volume capabilities to your OpenStack cloud.
OpenStack nodes should be Ceph clients in order to access the Ceph cluster. For this, install the Ceph packages on the OpenStack nodes and make sure they can reach the Ceph cluster:
Run the following commands from your Ceph admin node, ceph-node1:
# ceph-deploy install os-node1
# ceph-deploy admin os-node1
The preceding commands push the Ceph configuration file and the admin keyring to /etc/ceph of the OpenStack node. Try connecting to the cluster:
# ceph -s
At this point, your OpenStack node, os-node1, can connect to your Ceph cluster. We will next configure Ceph for OpenStack. To do this, execute the following commands from os-node1, unless otherwise specified:
Create dedicated Ceph pools for Cinder volumes and Glance images:
# ceph osd pool create volumes 128
# ceph osd pool create images 128
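To confirm that the pools were created, you can list them:
# ceph osd lspools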
Set up client authentication by creating new users for Cinder and Glance:
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
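You can optionally inspect the capabilities that were just granted:
# ceph auth get client.cinder
# ceph auth get client.glance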
Add the keyrings for client.cinder and client.glance to os-node1 and change their ownership:
# ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring
# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
# chown glance:glance /etc/ceph/ceph.client.glance.keyring
Save a temporary copy of the client.cinder secret key, which libvirt will need to access the Ceph cluster:
# ceph auth get-key client.cinder | tee /tmp/client.cinder.key
Generate a UUID that libvirt will use to identify the secret:
# uuidgen
Create a secret definition file, replacing the UUID with the one you just generated:
# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>63b033bb-3305-479d-854b-cf3d0cb6a50c</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
# virsh secret-define --file secret.xml
# virsh secret-set-value --secret 63b033bb-3305-479d-854b-cf3d0cb6a50c --base64 $(cat /tmp/client.cinder.key) && rm /tmp/client.cinder.key secret.xml
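To verify that libvirt stored the secret, you can list the defined secrets and read the value back (the UUID shown is from our environment; yours will differ):
# virsh secret-list
# virsh secret-get-value 63b033bb-3305-479d-854b-cf3d0cb6a50c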
At this point, we have covered configuration from Ceph's point of view. Next, we will configure the OpenStack components Cinder, Glance, and Nova to use Ceph. Cinder supports multiple backends. To configure Cinder to use Ceph, edit the Cinder configuration file and define the RBD driver that OpenStack Cinder should use.
To do this, you must also specify the name of the pool that we created for Cinder volumes earlier. On your OpenStack node, edit the /etc/cinder/cinder.conf file and perform the following changes:
Navigate to the Options defined in cinder.volume.manager section of the /etc/cinder/cinder.conf file and add the RBD driver for Cinder:
volume_driver=cinder.volume.drivers.rbd.RBDDriver
Navigate to the Options defined in cinder.volume.drivers.rbd section of the /etc/cinder/cinder.conf file and add the following (replace the secret UUID with your environment's value):
rbd_pool=volumes
rbd_user=cinder
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_secret_uuid=63b033bb-3305-479d-854b-cf3d0cb6a50c
rbd_max_clone_depth=5
Navigate to the Options defined in cinder.common.config section of the /etc/cinder/cinder.conf file and add:
glance_api_version=2
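As mentioned earlier, Cinder supports multiple backends and can even serve several of them at once. For reference, a minimal sketch of a multi-backend layout using the enabled_backends option is shown below; the section name ceph and the backend name are our own choices, and the simpler single-backend layout above is all this chapter requires:
enabled_backends=ceph
[ceph]
volume_backend_name=ceph
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=cinder
rbd_secret_uuid=63b033bb-3305-479d-854b-cf3d0cb6a50c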
To boot OpenStack instances directly from Ceph, that is, to use the boot-from-volume feature, you must configure an ephemeral backend for Nova. To achieve this, edit /etc/nova/nova.conf:
Navigate to the Options defined in nova.virt.libvirt.imagebackend section and add:
images_type=rbd
images_rbd_pool=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
Navigate to the Options defined in nova.virt.libvirt.volume section and add the following (replace the secret UUID with your environment's value):
rbd_user=cinder
rbd_secret_uuid=63b033bb-3305-479d-854b-cf3d0cb6a50c
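Optionally, you can enable RBD client-side caching, which the Ceph documentation recommends for better virtual machine I/O. To do this, add the following to the [client] section of /etc/ceph/ceph.conf on os-node1:
[client]
rbd cache = true
rbd cache writethrough until flush = true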
OpenStack Glance is capable of supporting multiple storage backends. In this section, we will learn to configure OpenStack Glance to use Ceph to store Glance images:
Edit the /etc/glance/glance-api.conf file and add the default_store=rbd statement to the default section of the file:
default_store=rbd
Navigate to the RBD Store Options section of the glance-api.conf file and add:
rbd_store_user=glance
rbd_store_pool=images
Finally, to allow copy-on-write cloning of images, add the following to the default section:
show_image_direct_url=True
To bring all the changes into effect, you must restart OpenStack services using the following commands:
# service openstack-glance-api restart
# service openstack-nova-compute restart
# service openstack-cinder-volume restart
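If a service fails to come back up, the respective log files are the first place to look; for example:
# tail /var/log/cinder/volume.log
# tail /var/log/glance/api.log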
You can operate Cinder from either the CLI or the GUI. We will now test Cinder from each of these interfaces.
To interact with OpenStack from the CLI, you first need to source the keystonerc_admin file, which is autocreated post OpenStack installation:
# source /root/keystonerc_admin
Create your first Cinder volume of 10 GB on Ceph:
# cinder create --display-name ceph-volume01 --display-description "Cinder volume on CEPH storage" 10
While the volume is being created, you can monitor your Ceph cluster with the # ceph -s command, where you will observe cluster write operations. Once the volume is created, list it to check its status:
# cinder list
You can also create and manage your Cinder volumes from the OpenStack Horizon dashboard. Open the web interface of OpenStack Horizon and navigate to the volume section:
If you list the contents of the volumes pool using # rados -p volumes ls, you will find an object, rbd_id.volume-00a90cd9-c2ea-4154-b045-6a837ac343da, for the Cinder volume named ceph-volume01 that has the ID 00a90cd9-c2ea-4154-b045-6a837ac343da, in the Ceph volumes pool.
You can use OpenStack Glance to store operating system images for instances. These images will eventually be stored on storage backed by Ceph.
Perform the following steps to test OpenStack Glance:
Check whether the images pool already contains any objects:
# rados -p images ls
Download an Ubuntu cloud image that we will add to Glance:
# wget http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
Create a Glance image from the downloaded file:
# glance image-create --name="ubuntu-precise-image" --is-public=True --disk-format=qcow2 --container-format=ovf < precise-server-cloudimg-amd64-disk1.img
List the Glance images and verify that the image object landed in the Ceph images pool:
# glance image-list
# rados -p images ls | grep -i 249cc4be-474d-4137-80f6-dc03f77b3d49
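Note that QCOW2 images work, but for Ceph's copy-on-write cloning of images into volumes to be efficient, the Ceph documentation recommends storing images in RAW format. If you want to follow that recommendation, you can convert the cloud image before uploading it; the output filename here is our own choice:
# qemu-img convert -f qcow2 -O raw precise-server-cloudimg-amd64-disk1.img precise-server-cloudimg-amd64-disk1.raw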