Ceph pools

The concept of a pool is not new to storage systems. Enterprise storage systems are often managed by carving out several pools; Ceph likewise provides easy storage management by means of storage pools. A Ceph pool is a logical partition for storing objects. Each pool in Ceph holds a number of placement groups (PGs), which in turn hold a number of objects that are mapped to OSDs across the cluster. Hence, every pool is distributed across cluster nodes, which provides resilience. The initial Ceph deployment creates a default pool; it is recommended that you create your own pools rather than relying on the default one.
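To see how this object-to-PG-to-OSD mapping works, you can ask the cluster where a given object would be placed. The following is a minimal sketch; it assumes a pool named rbd exists (many deployments create one by default) and uses a hypothetical object name, test-object. The output shows the placement group the object hashes to and the set of OSDs responsible for it:

    # ceph osd map rbd test-object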

A pool ensures data availability by creating the desired number of object copies, that is, replicas, or by using erasure codes. The erasure coding (EC) feature was added to Ceph with the Firefly release. Erasure coding is a method of data protection in which data is broken into fragments, encoded, and then stored in a distributed manner. Ceph, being distributed in nature, makes use of EC remarkably well.
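Erasure-coded pools are driven by erasure code profiles, which define parameters such as the number of data chunks (k) and coding chunks (m). The following is a quick sketch for a Firefly or later cluster; the default profile ships with Ceph, while webarchive-profile is a hypothetical custom profile created here for illustration (a k=3, m=2 profile needs at least five failure domains, typically hosts, to place its chunks):

    # ceph osd erasure-code-profile ls
    # ceph osd erasure-code-profile get default
    # ceph osd erasure-code-profile set webarchive-profile k=3 m=2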

At the time of pool creation, we can define the replica size; the default replica size is 3 as of the Firefly release (2 in earlier releases). The pool replication level is very flexible; we can change it at any point in time. We can also define an erasure code ruleset at the time of pool creation, which provides a comparable level of reliability while consuming less space than the replication method.
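As a sketch of the erasure option, the following creates an erasure-coded pool with 128 PGs; the pool name ec-pool and the webarchive-profile profile from the earlier sketch are assumptions for illustration:

    # ceph osd pool create ec-pool 128 128 erasure webarchive-profile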

Note

A pool can be created with either replication or erasure coding, but not both at the same time.

A Ceph pool is mapped to a CRUSH ruleset; when data is written to a pool, the CRUSH ruleset determines the placement of the objects and their replicas inside the cluster. The CRUSH ruleset gives Ceph pools additional capabilities. For example, we can create a faster pool, also known as a cache pool, out of SSD disk drives, or a hybrid pool out of SSD and SAS or SATA disk drives.
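The following sketch shows how a pool can be pointed at a particular CRUSH ruleset. It assumes a pool named web-services (created later in this section) and a ruleset ID of 1 corresponding to an SSD-backed rule in your CRUSH map; note that on recent Ceph releases the setting is named crush_rule rather than crush_ruleset:

    # ceph osd crush rule list
    # ceph osd pool set web-services crush_ruleset 1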

A Ceph pool also supports snapshots. We can use the ceph osd pool mksnap command to take a snapshot of a particular pool, and we can restore it when necessary. In addition to this, a Ceph pool allows us to set ownership and access to objects. A user ID can be assigned as the owner of a pool. This is very useful in scenarios where we need to provide restricted access to a pool.
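As a minimal sketch of both features, the following takes a pool snapshot and then creates a CephX user whose capabilities are limited to a single pool; the snapshot name pool-snap01, the client name client.webapp, and the pool name web-services are assumptions for illustration:

    # ceph osd pool mksnap web-services pool-snap01
    # ceph auth get-or-create client.webapp mon 'allow r' osd 'allow rw pool=web-services'
    # ceph auth list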

Pool operations

Performing Ceph pool operations is one of the day-to-day jobs of a Ceph administrator. Ceph provides rich CLI tools for pool creation and management. We will learn about Ceph pool operations in the following section.

Creating and listing pools

Creating a Ceph pool requires a pool name, PG and PGP numbers, and a pool type, which is either replicated or erasure; the default is replicated. Let's start creating a pool:

  1. Create a pool, web-services, with 128 PG and PGP numbers. This will create a replicated pool as it's the default option.
    # ceph osd pool create web-services 128 128
    
  2. Pools can be listed in a couple of ways, as the first two commands show; the output of the third command provides more information, such as the pool ID, replication size, CRUSH ruleset, and PG and PGP numbers. A sketch after this list also shows how to query individual pool settings with ceph osd pool get:
    # ceph osd lspools
    # rados lspools
    # ceph osd dump | grep -i pool
    
  3. The default replication size for a Ceph pool created with Ceph Emperor or an earlier release is 2; we can change the replication size using the following commands:
    # ceph osd pool set web-services size 3
    # ceph osd dump | grep -i pool
    

    Note

    For Ceph Emperor and earlier releases, the default replication size for a pool was 2; this default replication size has been changed to 3 starting from Ceph Firefly.

  4. Rename a pool, as follows:
    # ceph osd pool rename web-services frontend-services
    # ceph osd lspools
    
  5. Ceph pools support snapshots; we can restore objects from a snapshot in the event of a failure. In the following example, we will create an object in a pool and then take a pool snapshot. After this, we will intentionally remove the object from the pool and try to restore it from its snapshot:
    # rados -p frontend-services put object1 /etc/hosts
    # rados -p frontend-services ls
    # rados mksnap snapshot01 -p frontend-services
    # rados lssnap -p frontend-services
    # rados -p frontend-services rm object1
    # rados -p frontend-services listsnaps object1
    # rados rollback -p frontend-services object1 snapshot01
    # rados -p frontend-services ls
    
  6. Removing a pool will also remove all of its snapshots. After removing a pool, you should delete its CRUSH rulesets if you created them manually. If you created users with permissions strictly for a pool that no longer exists, you should consider deleting those users too:
    # ceph osd pool delete frontend-services frontend-services --yes-i-really-really-mean-it
    
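As referenced in step 2, individual pool settings can also be queried one at a time with ceph osd pool get. This is a minimal sketch, assuming the web-services pool from step 1 still exists (that is, before it was renamed and removed in the later steps):

    # ceph osd pool get web-services size
    # ceph osd pool get web-services pg_num
    # ceph osd pool get web-services pgp_num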