Managing a Ceph pool using Proxmox GUI

All Ceph pool-related tasks can be performed through the Datacenter | node | Ceph | Pools menu. The pool interface shows information about existing pools, such as the name, replica number, PG number, and per-pool percentage used. Once a pool is created, it cannot be modified in any way from the Proxmox GUI, but it can be edited through the CLI. If you are going to use the Proxmox GUI exclusively for all Ceph-related tasks, then a new pool needs to be created whenever an existing pool's configuration needs to change, such as changing the replica size or increasing the PG number. When the Ceph cluster is created, a default pool named rbd is created with a replica size of 3 and a total of 64 PGs. This PG count is too low for the rbd pool to store any significant amount of data, so we can either create a new pool or modify this pool through the CLI. When an existing pool already holds a lot of data, changing its configuration through the CLI is the way to go; otherwise, all data would need to be moved to a new pool, which can take a very long time depending on the amount of data being stored.
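
Before making any changes, it can help to inspect the current configuration of a pool from the CLI. A minimal check might look like the following, using our example pool rbd (substitute your own pool name as needed):

# ceph osd lspools
# ceph osd pool get rbd size
# ceph osd pool get rbd pg_num
# ceph df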

Replica size is the second most important configuration for a Ceph pool. Basically, the replica size defines how many copies of the data are kept, distributed among OSDs on different nodes. Keep in mind that a higher replica size consumes more network bandwidth and more disk space due to the increased replication. For a smaller cluster, a replica size of 2 is best suited from a performance standpoint. However, in a large Ceph cluster with many drives and nodes, a replica size of 3 is recommended.
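
As a rough rule of thumb for sizing, usable capacity is approximately the raw OSD capacity divided by the replica size. For example, assuming a hypothetical cluster with 6 TB of raw capacity, a replica size of 2 leaves roughly 3 TB usable, while a replica size of 3 leaves roughly 2 TB; the MAX AVAIL column of ceph df reports this with replication already factored in.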

For the pool rbd in our example Ceph cluster, we are going to change the default replica size from 3 to 2 using the following command:

# ceph osd pool set rbd size 2
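
We can confirm the change and watch the cluster adjust with a quick check such as the following; the first command should now report size: 2, and ceph -s shows the cluster health while the extra replicas are cleaned up:

# ceph osd pool get rbd size
# ceph -s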

We are also going to change the minimum size, or min_size, value of the pool. The minimum replica size defines how many replicas of the data must remain available for the pool to keep serving I/O. For example, in the default pool rbd, the minimum size is 2. So if multiple HDD failures occur and the set of OSDs holding two of the data replicas goes down, the pool will stop serving requests. But if the minimum size is 1, then as long as the Ceph cluster can see one data replica anywhere in the cluster, even in the case of multiple OSD failures, the pool will continue to operate. In other words, a minimum size of 1 keeps the pool usable as long as at least one copy of the data is reachable. We can change the minimum size of a pool using the following command format:

# ceph osd pool set rbd min_size 1
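
The new value can be verified the same way; for example, the following commands should show min_size: 1 for the rbd pool:

# ceph osd pool get rbd min_size
# ceph osd dump | grep rbd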

We are going to increase the PG number of the default pool rbd in order to make it usable to store virtual machine data.

Refer to the Ceph PG calculator at the following link to calculate the number of PGs you need for your Ceph cluster:

 http://ceph.com/pgcalc/
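
The calculator is built around a commonly used rule of thumb: total PGs ≈ (number of OSDs × 100) / replica size, rounded to a nearby power of two. For example, assuming a hypothetical cluster with 6 OSDs and a replica size of 2, this gives (6 × 100) / 2 = 300, which rounds to 256 or 512 PGs depending on how much growth is expected and how many pools share the cluster.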

There are two values that need to be set for the PG number of a pool: the actual PG number, defined with the option pg_num, and the effective PG number used for placement, defined with the option pgp_num. The pgp_num value must be equal to or less than pg_num. We are going to increase the PG number to 256 for our default pool rbd using the following commands:

# ceph osd pool set rbd pg_num 256
# ceph osd pool set rbd pgp_num 256
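
Once both commands have been issued, the new values can be verified and the resulting data movement monitored; a minimal check could be:

# ceph osd pool get rbd pg_num
# ceph osd pool get rbd pgp_num
# ceph -w

The ceph -w command streams cluster events, which is useful for following the rebalancing progress.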

When changing PG values, it is very important to keep in mind that this is a very intensive process. The Ceph cluster will be under load while data is redistributed into the new placement groups. When increasing the PG value, it is wise to do it in steps, raising the number incrementally rather than in one large jump, as sketched below. This is not a problem for a brand new Ceph cluster that is not serving any users yet, but on an established Ceph cluster with many active users, the performance impact will be noticeable and may cause service interruption.
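
As a sketch of this stepwise approach, assuming the pool starts at 64 PGs and the target is 256, the increase could be split into two stages, checking ceph health and proceeding only once the cluster reports HEALTH_OK:

# ceph osd pool set rbd pg_num 128
# ceph osd pool set rbd pgp_num 128
# ceph health
# ceph osd pool set rbd pg_num 256
# ceph osd pool set rbd pgp_num 256
# ceph health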

The replica size, minimum replica size, and PG value are the most important values for a Ceph pool. Changes in these values have the most impact on overall cluster performance and reliability. So to recap, let's run these commands for a hypothetical pool named vm_store. We are going to change the replica size to 3, minimum replica size to 1, PG number to 1024, and effective PG number to 1024 using the following commands:

# ceph osd pool set vm_store size 3
# ceph osd pool set vm_store min_size 1
# ceph osd pool set vm_store pg_num 1024
# ceph osd pool set vm_store pgp_num 1024
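
To confirm that all four settings were applied to the hypothetical vm_store pool, its configuration can be dumped from the CLI, for example:

# ceph osd dump | grep vm_store
# ceph osd pool get vm_store size
# ceph osd pool get vm_store min_size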

The following screenshot shows the pool status for our default pool rbd in our example cluster after making the necessary changes through the CLI:
