Bringing an OSD out and down from a Ceph cluster

Before reducing the cluster's size or scaling it down, make sure the cluster has enough free space to accommodate all the data present on the node you are moving out. The cluster should not be at or near its near-full ratio (85% of raw capacity by default).
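As a quick sanity check, the arithmetic can be sketched in shell. The capacity numbers below are hypothetical examples, not taken from this environment; on a real cluster you would read them from the output of ceph df and ceph osd df:

```shell
# Hypothetical example numbers -- substitute values from 'ceph df':
total_kb=41943040        # 40 GiB raw capacity (assumed)
used_kb=12582912         # 12 GiB currently used (assumed)
node_kb=4194304          # 4 GiB of capacity on the node being removed (assumed)
nearfull_pct=85          # Ceph's default near-full ratio is 0.85

# Projected usage percentage once the node's capacity is gone and its
# data has been rebalanced onto the remaining OSDs.
projected=$(( used_kb * 100 / (total_kb - node_kb) ))
echo "projected usage after removal: ${projected}%"
if [ "$projected" -ge "$nearfull_pct" ]; then
  echo "WARNING: cluster would exceed the near-full ratio"
fi
```

With these example numbers the projected usage is 33%, comfortably below the near-full threshold, so the scale-down is safe to proceed.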

From the ceph-client1 node, generate some load on the Ceph cluster. This is an optional step to demonstrate on-the-fly scale-down operations on a Ceph cluster. Make sure the host running the VirtualBox environment has adequate disk space, since we will be writing data to the Ceph cluster.

# dd if=/dev/zero of=/mnt/ceph-vol1/file1 count=3000 bs=1M

As we need to scale down the cluster, we will remove ceph-node4 and all of its associated OSDs out of the cluster. Ceph OSDs should be set out so that Ceph can perform data recovery. From any of the Ceph nodes, take the OSDs out of the cluster:

# ceph osd out osd.9
# ceph osd out osd.10
# ceph osd out osd.11
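When a node hosts several OSDs, the same commands can be generated with a loop. A small sketch, assuming OSD IDs 9 through 11 as in this example; the echo prints each command as a dry run, and removing it would execute them for real:

```shell
# Dry run: print the 'ceph osd out' command for each of ceph-node4's OSDs
# (IDs 9-11 assumed from this example). Remove 'echo' to actually run them.
for id in 9 10 11; do
  echo "ceph osd out osd.$id"
done
```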

As soon as you mark the OSDs out of the cluster, Ceph will start rebalancing by migrating placement groups from the OSDs that were marked out to other OSDs inside the cluster. Your cluster state will become unhealthy for some time, but it will still be able to serve data to clients. Depending on the number of OSDs removed, there might be some drop in cluster performance until recovery completes. Once the cluster is healthy again, it should perform as usual. Have a look at the following screenshot:

[Screenshot: cluster status output showing recovery in progress while client I/O continues]

In the preceding screenshot, you can see that the cluster is under a recovery mode while also serving data to clients at the same time. You can observe the recovery process using the following command:

# ceph -w
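If you would rather block until recovery finishes than watch the log scroll by, you can poll the cluster health in a loop. A hedged sketch: check_health is a hypothetical wrapper around the real ceph health command, with a stub fallback so the snippet is self-contained when no cluster is reachable:

```shell
# Hypothetical wrapper: runs 'ceph health' on a live cluster; the fallback
# echo is only a stub so this sketch runs without a cluster present.
check_health() {
  ceph health 2>/dev/null || echo "HEALTH_OK"
}

# Poll every 10 seconds until the cluster reports HEALTH_OK again.
until [ "$(check_health)" = "HEALTH_OK" ]; do
  sleep 10
done
echo "recovery complete, cluster is healthy"
```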

Since we marked osd.9, osd.10, and osd.11 out of the cluster, they are no longer members of the cluster, but their services are still running. Next, log in to the ceph-node4 machine and stop the OSD services:

# service ceph stop osd.9
# service ceph stop osd.10
# service ceph stop osd.11
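The service commands above assume the SysV init wrapper used by older Ceph releases. On systemd-based releases, OSD daemons are managed as ceph-osd@&lt;id&gt; units instead. A dry-run sketch of the equivalent commands (the echo prints them; remove it to execute):

```shell
# Print the systemd equivalents of the three 'service ceph stop' commands
# (OSD IDs 9-11 assumed from this example).
for id in 9 10 11; do
  echo "systemctl stop ceph-osd@$id"
done
```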

Once the OSDs are down, check the OSD tree. You will observe that the OSDs are now both down and out, as shown in the following screenshot:

# ceph osd tree

[Screenshot: ceph osd tree output showing osd.9, osd.10, and osd.11 as down and out]