Demonstration

This feature requires the Kraken release or newer of Ceph. If you have deployed your test cluster with Ansible and the configuration provided, you will be running the Ceph Jewel release. The following steps show how to use Ansible to perform a rolling upgrade of your cluster to the Kraken release. We will also enable experimental options such as BlueStore and support for partial overwrites on erasure-coded pools.

Edit your group_vars/ceph variable file and change the release version from Jewel to Kraken.
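In ceph-ansible this is typically controlled by the ceph_stable_release variable; a minimal sketch of the change (variable name assumed from ceph-ansible defaults, check your version of the playbooks) is:

    # group_vars/ceph - select the Ceph release for ceph-ansible to deploy
    ceph_stable_release: kraken   # previously: jewel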

Also, add the following:

    ceph_conf_overrides:
      global:
        enable_experimental_unrecoverable_data_corrupting_features:
          "debug_white_box_testing_ec_overwrites bluestore"

To correct a small bug when using Ansible to deploy Ceph Kraken, also add debian_ceph_packages to the bottom of the file:

    debian_ceph_packages:
      - ceph
      - ceph-common
      - ceph-fuse

Then run the following Ansible playbook:

    ansible-playbook -K infrastructure-playbooks/rolling_update.yml

The preceding command gives the following output:

Ansible will prompt you to make sure that you want to carry out the upgrade. Once you confirm by entering yes, the upgrade process will begin.

Once Ansible has finished, all the stages should be successful, as shown in the following screenshot:

Your cluster has now been upgraded to Kraken; this can be confirmed by running ceph -v on one of your VMs running Ceph:
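The exact build string will vary, but the reported version should be an 11.2.x (Kraken) release, along the lines of:

    # ceph -v
    ceph version 11.2.0 (...)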

As a result of enabling the experimental options in the configuration file, every time you now run a Ceph command, you will be presented with the following warning:

This is designed as a safety warning to stop you running these options in a live environment, as they may cause irreversible data loss. As we are doing this on a test cluster, it is fine to ignore, but it should be a stark warning not to run this anywhere near live data. 

The next step is to enable the experimental flag that allows partial overwrites on erasure-coded pools:

    ceph osd pool set ecpool debug_white_box_testing_ec_overwrites true

Do not run this on production clusters.

Double-check that you still have your erasure-coded pool called ecpool and the default rbd pool:

    # ceph osd lspools
    0 rbd,1 ecpool,

Now, create the RBD image. Notice that the actual RBD header object still has to live on a replicated pool, but by providing an additional parameter, we can tell Ceph to store the data for this RBD image on an erasure-coded pool:

    rbd create Test_On_EC --data-pool=ecpool --size=1G

The command should return without error, and you now have an RBD image whose data is backed by an erasure-coded pool. You should be able to use this image with any librbd application.
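If you want to confirm where the image data will be placed, rbd info prints the image metadata, and on releases that support a separate data pool it should list ecpool as the data pool; indicative, abbreviated output:

    # rbd info Test_On_EC
    rbd image 'Test_On_EC':
            ...
            data_pool: ecpool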

Partial overwrites on erasure-coded pools require BlueStore to operate efficiently. Whilst filestore will work, performance will be extremely poor.
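If you are unsure which object store your OSDs are using, the OSD metadata held by the cluster includes the object store type; a quick check (field name as reported by Jewel and later releases) looks like this:

    # ceph osd metadata 0 | grep osd_objectstore
        "osd_objectstore": "bluestore",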