Configuring RBD mirroring

To use the RBD mirroring functionality, we require two Ceph clusters. We could deploy two copies of the cluster we have been using previously, but the number of VMs involved would likely exceed what most people's personal machines can run. Therefore, we will modify our Vagrant and Ansible configuration files to deploy two separate Ceph clusters, each with a single monitor and a single OSD node.

The required Vagrantfile is very similar to the one used in Chapter 2, Deploying Ceph, to deploy your initial test cluster; the nodes section at the top should now look like this:

nodes = [
{ :hostname => 'ansible', :ip => '192.168.0.40', :box => 'xenial64' },
{ :hostname => 'site1-mon1', :ip => '192.168.0.41', :box => 'xenial64' },
{ :hostname => 'site2-mon1', :ip => '192.168.0.42', :box => 'xenial64' },
{ :hostname => 'site1-osd1', :ip => '192.168.0.51', :box => 'xenial64', :ram => 1024, :osd => 'yes' },
{ :hostname => 'site2-osd1', :ip => '192.168.0.52', :box => 'xenial64', :ram => 1024, :osd => 'yes' }
]

For the Ansible configuration, we will maintain two separate ceph-ansible instances so that each cluster can be deployed separately. Each instance will have its own hosts file, which we will specify when we run the playbook. To do this, we will not copy the ceph-ansible files into /etc/ansible, but instead keep them in the home directory:

git clone https://github.com/ceph/ceph-ansible.git

cp -a ceph-ansible ~/ceph-ansible2

Create the same two files, called all and ceph, in the group_vars directory, as we did in Chapter 2, Deploying Ceph. This needs to be done in both copies of ceph-ansible.
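As a reminder, a minimal group_vars/all might look something like the following; the values are purely illustrative (they assume a Jewel-era ceph-ansible), so reuse whatever settings worked for you in Chapter 2:

# group_vars/all -- illustrative values only; match your Chapter 2 settings
ceph_origin: 'upstream'
ceph_stable: true
ceph_stable_release: jewel
public_network: 192.168.0.0/24
journal_size: 1024
monitor_interface: enp0s8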

  1. Create a hosts file in each ceph-ansible directory, placing that site's monitor and OSD hosts in each; a sketch of the two files follows below:

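The exact layout depends on the inventory group names you used in Chapter 2, but a minimal sketch of the two files, matching the hostnames in the Vagrantfile above, would be:

# ~/ceph-ansible/hosts (site 1)
[mons]
site1-mon1

[osds]
site1-osd1

# ~/ceph-ansible2/hosts (site 2)
[mons]
site2-mon1

[osds]
site2-osd1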

  2. Then, run the site.yml playbook from each ceph-ansible instance to deploy our two Ceph clusters:
ansible-playbook -K -i hosts site.yml
  3. Before we can continue with the configuration of RBD mirroring, we need to reduce the replication level of the default pools to 1, as each of our clusters has only one OSD. Run these commands on both clusters (a sketch follows):
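Assuming the default rbd pool is the only pool present (repeat for any other pools your deployment created), something like the following will do; min_size must also be lowered so that the placement groups stay active with a single replica:

sudo ceph osd pool set rbd size 1
sudo ceph osd pool set rbd min_size 1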

  4. Now, install the RBD mirroring daemon on both clusters:
sudo apt-get install rbd-mirror

  5. In order for the rbd-mirror daemon to be able to communicate with both clusters, we need to copy ceph.conf and the admin keyring from each cluster to the other:
  6. Copy ceph.conf from site1-mon1 to site2-mon1 and call it remote.conf.
  7. Copy ceph.client.admin.keyring from site1-mon1 to site2-mon1 and call it remote.client.admin.keyring.
  8. Repeat these two steps, but this time copy the files from site2-mon1 to site1-mon1; a sketch of the copy commands is shown after this step.
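One possible way to perform the copies (a sketch only; it assumes SSH access between the monitor VMs with passwordless sudo, as the Vagrant boxes provide, and that sudo is needed to read the keyring):

# Run on site1-mon1: push the config and keyring to site2-mon1
ssh site2-mon1 'sudo tee /etc/ceph/remote.conf > /dev/null' < /etc/ceph/ceph.conf
sudo cat /etc/ceph/ceph.client.admin.keyring | ssh site2-mon1 'sudo tee /etc/ceph/remote.client.admin.keyring > /dev/null'
# Then run the equivalent two commands on site2-mon1, targeting site1-mon1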
  9. Remember to make sure the keyrings are owned by ceph:ceph on both clusters:
sudo chown ceph:ceph /etc/ceph/remote.client.admin.keyring
  10. Now, we need to tell Ceph that the pool called rbd should have the mirroring function enabled. We use image mode here, which mirrors only those RBDs that are explicitly enabled for mirroring, rather than every image in the pool:
sudo rbd --cluster ceph mirror pool enable rbd image
  11. Repeat this for the target cluster:
sudo rbd --cluster remote mirror pool enable rbd image
  12. Add the target cluster as a peer of the pool mirroring configuration:
sudo rbd --cluster ceph mirror pool peer add rbd client.admin@remote
  13. Run the same command locally on the second Ceph cluster as well; it is identical because each cluster refers to itself as ceph and to its peer as remote:
sudo rbd --cluster ceph mirror pool peer add rbd client.admin@remote
  14. Back on the first cluster, let's create a test RBD to use with our mirroring lab:
sudo rbd create mirror_test --size=1G
  15. Enable the journaling feature on the RBD image (journaling depends on the exclusive-lock feature, which is enabled by default on newly created images):
sudo rbd feature enable rbd/mirror_test journaling
  16. Finally, enable mirroring for the RBD:
sudo rbd mirror image enable rbd/mirror_test

It's important to note that RBD mirroring works via a pull system. The rbd-mirror daemon needs to run on the cluster that you wish to mirror the RBDs to; it then connects to the source cluster and pulls the RBDs across. If you intend to implement two-way replication, where each Ceph cluster replicates to the other, then you would run the rbd-mirror daemon on both clusters. With this in mind, let's enable and start the systemd service for rbd-mirror on the target host:

sudo systemctl enable ceph-rbd-mirror@admin
sudo systemctl start ceph-rbd-mirror@admin
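
To confirm the daemon came up cleanly, you can check the unit's status (a standard systemctl invocation against the unit we just started):

sudo systemctl status ceph-rbd-mirror@admin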

The rbd-mirror daemon will now start processing all the RBD images configured for mirroring on your primary cluster.

We can confirm that everything is working as expected by running the following command on the target cluster:

sudo rbd --cluster remote mirror pool status rbd --verbose

In the output of this command, we can see that our mirror_test RBD is in an up+replaying state; this means that mirroring is in progress, and the entries_behind_master counter shows that it is currently up to date (a value of 0 means there are no journal entries left to replay).

Also note the difference in the output of the rbd info command on the two clusters. On the source cluster, the mirroring primary status is true, which lets you determine on which cluster the RBD is the master and can therefore be used by clients. This also confirms that although we only created the RBD on the primary cluster, it has been replicated to the secondary one.

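To compare the two, run rbd info against the image on each cluster, using the same --cluster convention as before:

sudo rbd --cluster ceph info mirror_test    # source cluster: mirroring primary: true
sudo rbd --cluster remote info mirror_test  # target cluster: mirroring primary: false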
