The concept of a pool is not new to storage systems. Enterprise storage systems are often managed by creating several pools, and Ceph likewise provides easy storage management by means of storage pools. A Ceph pool is a logical partition for storing objects. Each pool in Ceph holds a number of placement groups, which in turn hold a number of objects that are mapped to OSDs across the cluster. Every pool is therefore distributed across cluster nodes, which provides resilience. The initial Ceph deployment creates a default pool; it is recommended that you create your own pools rather than relying on the default one.
A pool ensures data availability by creating the desired number of object copies, that is, replicas or erasure codes. The erasure coding (EC) feature was added to Ceph starting with the Firefly release. Erasure coding is a method of data protection in which data is broken into fragments, encoded, and then stored in a distributed manner. Ceph, being distributed by nature, is well suited to erasure coding.
At the time of pool creation, we can define the replica size; the default replica size is 2. The pool replication level is very flexible and can be changed at any point in time. We can also define an erasure code ruleset at the time of pool creation, which provides the same level of reliability while consuming less space than the replication method.
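To make the space comparison concrete, here is a quick back-of-the-envelope calculation. The 4 TB data set, the 3-way replication, and the k=4/m=2 erasure code profile are illustrative assumptions, not Ceph defaults:

```shell
# Storing 4 TB of user data (illustrative figure).
DATA_TB=4

# Three-way replication stores every object three times.
REPL_RAW=$((DATA_TB * 3))

# An erasure-coded pool with k=4 data chunks and m=2 coding chunks
# stores (k + m) / k times the data, yet still survives two failures.
K=4; M=2
EC_RAW=$((DATA_TB * (K + M) / K))

echo "replication: ${REPL_RAW} TB raw, erasure coding: ${EC_RAW} TB raw"
```

Both layouts here tolerate the loss of two copies or chunks, but the erasure-coded pool does so with half the raw capacity of 3-way replication, at the cost of extra CPU for encoding and recovery.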
A Ceph pool is mapped to a CRUSH ruleset; when data is written to the pool, the CRUSH ruleset determines the placement of objects and their replicas inside the cluster. CRUSH rulesets give pools additional capabilities. For example, we can create a faster pool, also known as a cache pool, out of SSD drives, or a hybrid pool out of SSD and SAS or SATA drives.
A Ceph pool also supports snapshots. We can use the ceph osd pool mksnap command to take a snapshot of a particular pool and restore it when necessary. In addition, a Ceph pool allows us to set ownership and access on objects. A user ID can be assigned as the owner of a pool, which is very useful in scenarios where we need to provide restrictive access to a pool.
Performing pool operations is one of the day-to-day jobs of a Ceph administrator. Ceph provides rich CLI tools for pool creation and management. We will learn about Ceph pool operations in the following section.
Creating a Ceph pool requires a pool name, PG and PGP numbers, and a pool type, which is either replicated or erasure; the default is replicated. Let's start by creating a pool named web-services, with 128 PG and PGP numbers. This will create a replicated pool, as that is the default option:
# ceph osd pool create web-services 128 128
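The 128 used above is not arbitrary. A widely used rule of thumb sizes the total PG count at roughly (number of OSDs x 100) / replica size, rounded up to the next power of two; this is a sizing heuristic, not an exact formula, and the 9-OSD cluster below is an assumed example:

```shell
# Assumed cluster: 9 OSDs, replica size 3 (illustrative numbers).
OSDS=9
SIZE=3

# Rule of thumb: total PGs across the cluster ~= (OSDs * 100) / replica size.
RAW=$((OSDS * 100 / SIZE))

# Round up to the next power of two.
PG=1
while [ "$PG" -lt "$RAW" ]; do PG=$((PG * 2)); done

echo "suggested pg_num: $PG"
```

The cluster-wide total is then divided among the pools you expect to create; too few PGs hurt data distribution, while far too many increase per-OSD resource usage.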
List the pools of your cluster in any of the following ways:
# ceph osd lspools
# rados lspools
# ceph osd dump | grep -i pool
A replicated pool is created with a default replica size of 2; we can change the replication size using the following commands:
# ceph osd pool set web-services size 3
# ceph osd dump | grep -i pool
Rename the pool and verify the change:
# ceph osd pool rename web-services frontend-services
# ceph osd lspools
Put an object in the pool, take a pool snapshot, and then use it to recover a deleted object:
# rados -p frontend-services put object1 /etc/hosts
# rados -p frontend-services ls
# rados mksnap snapshot01 -p frontend-services
# rados lssnap -p frontend-services
# rados -p frontend-services rm object1
# rados -p frontend-services listsnaps object1
# rados rollback -p frontend-services object1 snapshot01
# rados -p frontend-services ls
Finally, remove the pool. Pool deletion is irreversible, so Ceph requires the pool name to be supplied twice, followed by a confirmation flag:
# ceph osd pool delete frontend-services frontend-services --yes-i-really-really-mean-it