Scaling out your Ceph cluster

From the ground up, Ceph has been designed to grow from a few nodes to several hundred, and it can scale on the fly without any downtime. In this recipe, we will dive deep into Ceph's scale-out capability by adding MON, OSD, MDS, and RGW nodes.

Adding the Ceph OSD

Adding an OSD node to the Ceph cluster is an online process. To demonstrate this, we will need a new virtual machine named ceph-node4 with three disks that will act as OSDs. This new node will then be added to our existing Ceph cluster.

How to do it…

Run the following commands from ceph-node1 unless otherwise specified:

  1. Create a new node, ceph-node4, with three disks (OSDs). You can follow the process of creating a new virtual machine with disks and configuring the OS, as described in the Setting up virtual infrastructure recipe in Chapter 1, Ceph – Introduction and Beyond, and make sure ceph-node1 can ssh into ceph-node4.

    Before adding the new node to the Ceph cluster, let's check the current OSD tree. At this point, the cluster has three nodes and a total of nine OSDs:

    # ceph osd tree
    
  2. Make sure that the new node has the Ceph packages installed. It's a recommended practice to keep all cluster nodes on the same Ceph version. From ceph-node1, install the Ceph packages on ceph-node4:
    # ceph-deploy install ceph-node4 --release giant
    

    Note

    We are intentionally installing the Ceph Giant release here so that one can learn how to upgrade the Ceph cluster from Giant to Hammer later in this chapter.

  3. List the disks of ceph-node4:
    # ceph-deploy disk list ceph-node4
    
  4. Let's add disks from ceph-node4 to our existing Ceph cluster:
    # ceph-deploy disk zap ceph-node4:sdb ceph-node4:sdc ceph-node4:sdd
    # ceph-deploy osd create ceph-node4:sdb ceph-node4:sdc ceph-node4:sdd
    
  5. As soon as you add new OSDs to the Ceph cluster, you will notice that the Ceph cluster starts rebalancing existing data to the new OSDs. You can monitor rebalancing using the following command; after a while, you will notice that your Ceph cluster becomes stable:
    # watch ceph -s
    
  6. Finally, once the disks of ceph-node4 have been added, you will notice the cluster's new storage capacity:
    # rados df
    
  7. Check the OSD tree; it will give you a better understanding of your cluster. You should notice the recently added OSDs under ceph-node4:
    # ceph osd tree
    

This command outputs valuable information, such as the OSD weight, which Ceph node hosts which OSD, the UP/DOWN status of each OSD, and its IN/OUT status, represented by 1 or 0.
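
If you just want a quick summary rather than the full tree, the ceph osd stat command reports how many OSDs the cluster map contains and how many of them are currently up and in:

    # ceph osd stat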

We have just learned how to add a new node to an existing Ceph cluster. This is a good time to understand that, as the number of OSDs increases, choosing the right PG count becomes more important, because it has a significant influence on the behavior of the cluster. Increasing the PG count on a large cluster can be an expensive operation, so I encourage you to take a look at http://docs.ceph.com/docs/master/rados/operations/placement-groups/#choosing-the-number-of-placement-groups for up-to-date guidance on Placement Groups (PGs).
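
As a rough illustration of the rule of thumb from the Ceph documentation linked above (do verify the current guidance there before applying it), the suggested total PG count is approximately (number of OSDs x 100) / replica count, rounded up to the nearest power of two. With our 12 OSDs and a replica count of 3, that works out to (12 x 100) / 3 = 400, which rounds up to 512. Assuming a pool named rbd, and keeping in mind that pg_num can only be increased in these releases, the commands would look like this:

    # ceph osd pool set rbd pg_num 512
    # ceph osd pool set rbd pgp_num 512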

Adding the Ceph MON

In an environment where you have deployed a large Ceph cluster, you might want to increase your monitor count. As with OSDs, adding new monitors to the Ceph cluster is an online process. In this recipe, we will configure ceph-node4 as a monitor node.

Since this is a test Ceph cluster, we will add ceph-node4 as the fourth monitor node; however, in a production setup, you should always have an odd number of monitor nodes in your Ceph cluster, as this improves resiliency.
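
Whatever the monitor count, you can confirm at any time that the monitors have formed a quorum; the quorum_status command lists the quorum members and the elected leader:

    # ceph quorum_status --format json-pretty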

How to do it...

  1. To configure ceph-node4 as a monitor node, execute the following command from ceph-node1:
    # ceph-deploy mon create ceph-node4
    
  2. Once ceph-node4 is configured as a monitor node, check the cluster status and notice that ceph-node4 is now listed as a monitor node (see the commands after this list).
  3. Check the Ceph monitor status and notice that ceph-node4 appears as a new Ceph monitor.
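
The cluster and monitor status checks referred to in steps 2 and 3 can be performed with the standard commands:

    # ceph -s
    # ceph mon stat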

Adding the Ceph RGW

For an object storage use case, you have to deploy the Ceph RGW component, and to make your object storage service highly available and performant, you should deploy more than one instance of the Ceph RGW. A Ceph object storage service can easily scale from one to several RGW nodes, with multiple RGW instances deployed in parallel to provide a highly available (HA) object storage service.


Scaling out RGW simply means adding additional RGW nodes; please refer to the Installing Rados Gateway recipe from Chapter 3, Working with Ceph Object Storage, to add more RGW nodes to your Ceph environment.
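
As a quick sketch, and assuming a ceph-deploy release recent enough to support the rgw subcommand (otherwise, follow the manual procedure from Chapter 3), bringing up an additional RGW instance on ceph-node4 could look like this:

    # ceph-deploy rgw create ceph-node4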
