Adding OSD nodes to a Ceph cluster

Adding an OSD node to a Ceph cluster is an online process. To demonstrate this, we require a new virtual machine named ceph-node4 with three disks; we will add this node to our existing Ceph cluster.

Create a new node ceph-node4 with three disks (OSDs). You can follow the process of creating a new virtual machine with disks, OS configuration, and Ceph installation as mentioned in Chapter 2, Ceph Instant Deployment, and Chapter 5, Deploying Ceph – the Way You Should Know.

Once you have the new node ready for addition to a Ceph cluster, check the current Ceph OSD details:

# ceph osd tree

This is what you will get once this command is run:

[Screenshot: ceph osd tree output before adding ceph-node4]

Expanding a Ceph cluster is an online process, and to demonstrate this, we will perform some client operations on our Ceph cluster while expanding it in parallel. In Chapter 5, Deploying Ceph – the Way You Should Know, we deployed a Ceph RADOS block device on the ceph-client1 machine. We will use the same machine to generate traffic to our Ceph cluster. Make sure that ceph-client1 has the RBD mounted:

# df -h /mnt/ceph-vol1
[Screenshot: df -h output showing /mnt/ceph-vol1 mounted]
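If the volume is not mounted, map and mount it again before proceeding. The following is a minimal sketch, assuming the image is named ceph-vol1 in the default rbd pool and gets mapped to /dev/rbd0; the image name and device may differ in your setup:

# rbd map rbd/ceph-vol1
# mount /dev/rbd0 /mnt/ceph-vol1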

Log in to ceph-node1 from a separate CLI terminal and list the disks on ceph-node4 that are available to add as OSDs. The ceph-node4 machine should have Ceph installed and the ceph.conf file copied to it. You will notice three disks, sdb, sdc, and sdd, listed when you execute the following command:

# ceph-deploy disk list ceph-node4
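If ceph-node4 does not yet have the Ceph packages or the cluster configuration, you can usually push both from the admin node with ceph-deploy before listing its disks. A quick sketch, assuming ceph-node1 is the admin node you run ceph-deploy from:

# ceph-deploy install ceph-node4
# ceph-deploy admin ceph-node4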

As mentioned earlier, scaling up a Ceph cluster is a seamless, online process. To demonstrate this, we will generate some load on the cluster and perform the scaling-up operation at the same time. Note that this is an optional step.

Make sure the host running the VirtualBox environment has adequate disk space, as we will write data to the Ceph cluster. Open the ceph-client1 CLI terminal and generate some write traffic to the Ceph cluster. As soon as you start generating traffic, start expanding the cluster by performing the next steps.

# dd if=/dev/zero of=/mnt/ceph-vol1/file1 count=10240 bs=1M
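This command writes a single 10 GB file (10,240 blocks of 1 MB each). If you want the traffic to last through the entire expansion, you can write several such files in a loop; a small sketch with illustrative file names:

# for i in 1 2 3; do dd if=/dev/zero of=/mnt/ceph-vol1/file$i count=10240 bs=1M; done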

Switch to the ceph-node1 CLI terminal and expand the Ceph cluster by adding the ceph-node4 disks as new Ceph OSDs:

# ceph-deploy disk zap ceph-node4:sdb ceph-node4:sdc ceph-node4:sdd
# ceph-deploy osd create ceph-node4:sdb ceph-node4:sdc ceph-node4:sdd
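If you prefer to stagger the resulting data movement, nothing forces you to add all three disks in one go; the same commands accept a single disk at a time, for example:

# ceph-deploy disk zap ceph-node4:sdb
# ceph-deploy osd create ceph-node4:sdb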

While the OSD addition is in progress, you should monitor your Ceph cluster status from a separate terminal window. You will notice that the Ceph cluster keeps serving the write operation while simultaneously scaling out its capacity:

# watch ceph status
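Alternatively, ceph -w prints the cluster status once and then follows the cluster log, which lets you watch the recovery and backfill messages as the new OSDs join:

# ceph -w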

Finally, once the ceph-node4 disk addition is complete, you can check your Ceph cluster status using the preceding command. The following is what you will see:

[Screenshot: ceph status output after the expansion]
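For a one-line summary instead of the full status, ceph osd stat reports how many OSDs the cluster knows about and how many of them are up and in:

# ceph osd stat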

At this point, listing all the OSDs will give you a better understanding of the cluster layout:

# ceph osd tree

This command outputs valuable information about each OSD, such as its weight, the Ceph node that hosts it, its status (up or down), and whether it is IN or OUT of the cluster (represented by 1 or 0). Have a look at the following screenshot:

[Screenshot: ceph osd tree output with ceph-node4 and its OSDs]
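To quickly confirm that the new host and its OSDs appear in the tree, you can filter the output; a small sketch, assuming the host bucket is named ceph-node4 and carries three OSDs on the lines that follow it:

# ceph osd tree | grep -A 3 ceph-node4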