Creating an erasure-coded pool

Let's bring our test cluster up again and switch into superuser mode in Linux, so we don't have to keep prepending sudo to our commands.

Erasure-coded pools are controlled by the use of erasure profiles; these control how many shards each object is broken up into, including the split between data and erasure shards. The profiles also determine which erasure code plugin is used to calculate the erasure shards.

The following plugins are available to use:

  • Jerasure
  • ISA
  • LRC
  • SHEC
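
Jerasure is the default plugin and the one used throughout this chapter. As a purely illustrative sketch (the profile name and parameter values below are arbitrary and not taken from our cluster), a profile based on one of the other plugins is created simply by naming it; for the LRC plugin, the extra l parameter sets the locality group size, and k+m must be a multiple of l:

    # ceph osd erasure-code-profile set lrc_profile plugin=lrc k=4 m=2 l=3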

To see a list of the erasure profiles, run the following command:

    # ceph osd erasure-code-profile ls

You can see that a fresh installation of Ceph contains a single default profile.
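The listing prints one profile name per line, so on a fresh cluster it should contain just one entry:

    default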

Let's see what configuration options it contains using the following command:

    # ceph osd erasure-code-profile get default

The default profile specifies that it will use the Jerasure plugin with the Reed-Solomon error-correcting code, splitting each object into 2 data shards and 1 erasure shard.
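For reference, the output lists the settings just described; the exact set of keys varies between releases (newer ones also show items such as the CRUSH failure domain), but it should look something like this:

    k=2
    m=1
    plugin=jerasure
    technique=reed_sol_van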

This is almost perfect for our test cluster; however, for the purpose of this exercise, we will create a new profile using the following commands:

    # ceph osd erasure-code-profile set example_profile k=2 m=1 plugin=jerasure technique=reed_sol_van
    # ceph osd erasure-code-profile ls

You can see that our new example_profile has been created.
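Listing the profiles again should now show both entries, plus any other profiles that already exist on your cluster:

    default
    example_profile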

Now, let's create our erasure-coded pool with this profile:

    # ceph osd pool create ecpool 128 128 erasure example_profile

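Assuming everything succeeds, the command simply acknowledges the new pool (the exact wording can vary slightly between releases):

    pool 'ecpool' created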

The preceding command instructs Ceph to create a new pool called ecpool with 128 PGs (the two numbers are pg_num and pgp_num), that it should be an erasure-coded pool, and that it should use the example_profile we created previously.
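As a quick sanity check, you can query the profile associated with the pool; on most releases the erasure_code_profile getter is available and the output should name example_profile:

    # ceph osd pool get ecpool erasure_code_profile
    erasure_code_profile: example_profile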

Let's create an object with a small text string inside it and then prove the data has been stored by reading it back:

    # echo "I am test data for a test object" | rados --pool
ecpool put Test1 –
# rados --pool ecpool get Test1 -
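If the pool is working, the get command should simply print the test string straight back to the terminal:

    I am test data for a test object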

That proves that the erasure-coded pool is working, but it's hardly the most exciting of discoveries. Let's have a look at what's happening at a lower level.

First, find out what PG is holding the object we just created:

    # ceph osd map ecpool Test1

The result of the preceding command tells us that the object is stored in PG 1.40 on OSDs 1, 2, and 0 in this example Ceph cluster. That's pretty obvious, as we only have three OSDs, but in larger clusters that is a very useful piece of information.
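The exact format of the output varies slightly between Ceph releases, but it will look something like the following (the epoch and the full PG hash shown here are illustrative):

    osdmap e53 pool 'ecpool' (1) object 'Test1' -> pg 1.7ba93a40 (1.40) -> up ([1,2,0], p1) acting ([1,2,0], p1)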

The PGs will likely be different on your test cluster, so make sure the PG folder structure matches the output of the preceding ceph osd map command.

We can now look at the folder structure of the OSDs and see how the object has been split across them, using the following commands:

    # ls -l /var/lib/ceph/osd/ceph-2/current/1.40s0_head/
    # ls -l /var/lib/ceph/osd/ceph-1/current/1.40s1_head/
    # ls -l /var/lib/ceph/osd/ceph-0/current/1.40s2_head/

Notice how the PG directory names have been appended with the shard number; replicated pools simply use the PG number as their directory name. If you examine the contents of the object files, you will see the text string that we entered into the object when we created it. However, due to the small size of the text string, Ceph has padded out the second shard with null characters, and the erasure shard will therefore contain the same data as the first shard. You can repeat this example with a new object containing a larger amount of text to see how Ceph splits the text into shards and calculates the erasure code.
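
As a rough sketch of that experiment (the object name, temporary file path, and sizes below are arbitrary), you could store a 4 MB object instead; with k=2 and m=1, each of the three shards should then be roughly 2 MB:

    # dd if=/dev/urandom of=/tmp/Test2 bs=1M count=4
    # rados --pool ecpool put Test2 /tmp/Test2
    # ceph osd map ecpool Test2

The ceph osd map output tells you which PG and OSDs to look at; re-running the ls -l commands against that PG's shard directories should show each shard file at around 2 MB rather than a few bytes.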
