From the roots, Ceph has been designed to grow from a few nodes to several hundreds, and it's supposed to scale on the fly without any downtime. In this recipe, we will dive deep into the Ceph scale-out feature by adding MON, OSD, MDS, and RGW nodes.
Adding an OSD node to the Ceph cluster is an online process. To demonstrate this, we will require a new virtual machine named ceph-node4 with three disks that will act as OSDs. This new node will then be added to our existing Ceph cluster. Run the following commands from ceph-node1, until otherwise specified:
Create a new virtual machine, ceph-node4, with three disks (OSDs). You can follow the process of creating a new virtual machine with disks and the OS configuration, as mentioned in the Setting up virtual infrastructure recipe in Chapter 1, Ceph – Introduction and Beyond, and make sure ceph-node1 can ssh into ceph-node4.
Before adding the new node to the Ceph cluster, let's check the current OSD tree. As shown in the following screenshot, the cluster has three nodes and a total of nine OSDs:
# ceph osd tree
From ceph-node1, install the Ceph packages on ceph-node4:
# ceph-deploy install ceph-node4 --release giant
List the disks on ceph-node4:
# ceph-deploy disk list ceph-node4
Add the disks of ceph-node4 to our existing Ceph cluster:
# ceph-deploy disk zap ceph-node4:sdb ceph-node4:sdc ceph-node4:sdd
# ceph-deploy osd create ceph-node4:sdb ceph-node4:sdc ceph-node4:sdd
While the new OSDs are being added, monitor the cluster status:
# watch ceph -s
Once the addition of the ceph-node4 disks is complete, you will notice the cluster's new storage capacity:
# rados df
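Before comparing the rados df output, it can help to estimate what the new capacity should roughly be. The following is a back-of-the-envelope sketch; the disk size used here is an assumption for illustration, not a value from this recipe:

```shell
#!/bin/sh
# Rough capacity math for the expanded cluster.
# DISK_GB is an assumed per-disk size, purely for illustration.
DISK_GB=20            # assumed size of each OSD disk, in GB
OLD_OSDS=9            # OSDs before adding ceph-node4
NEW_OSDS=12           # OSDs after adding its three disks
REPLICAS=3            # assumed pool replication factor

echo "raw capacity before:  $(( OLD_OSDS * DISK_GB )) GB"
echo "raw capacity after:   $(( NEW_OSDS * DISK_GB )) GB"
echo "usable after (size=3): $(( NEW_OSDS * DISK_GB / REPLICAS )) GB"
```

Note that rados df reports raw capacity; usable capacity is lower because each object is stored as many times as the pool's replication factor.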
Check the OSD tree and notice the new OSDs on ceph-node4, which have been recently added:
# ceph osd tree
This command outputs some valuable information, such as the OSD weight, which Ceph node hosts which OSD, the UP/DOWN status of each OSD, and the OSD IN/OUT status, represented by 1 or 0.
We have just learned how to add a new node to an existing Ceph cluster. It's a good time to understand that as the number of OSDs increases, choosing the right PG count becomes more important, because it has a significant influence on the behavior of the cluster. Increasing the PG count on a large cluster can be an expensive operation. I encourage you to take a look at http://docs.ceph.com/docs/master/rados/operations/placement-groups/#choosing-the-number-of-placement-groups for any updated information on Placement Groups (PGs).
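To make the PG sizing concrete, the commonly cited rule of thumb is (number of OSDs × 100) / replica count, rounded up to the nearest power of two. The following sketch applies that heuristic to our expanded cluster; always check the Ceph documentation linked above for current guidance:

```shell
#!/bin/sh
# Heuristic pg_num estimate: (OSDs * 100) / replicas, rounded up
# to the next power of two. Values match the cluster in this recipe.
OSDS=12          # total OSDs after adding ceph-node4 (9 + 3)
REPLICAS=3       # assumed pool replication factor

raw=$(( (OSDS * 100) / REPLICAS ))   # 400
pg=1
while [ "$pg" -lt "$raw" ]; do       # round up to a power of two
    pg=$(( pg * 2 ))
done
echo "suggested pg_num: $pg"         # -> 512
```

This is a starting point, not a rule: the heuristic targets roughly 100 PGs per OSD across all pools combined, so clusters with many pools should divide this budget among them.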
In an environment where you have deployed a large Ceph cluster, you might want to increase your monitor count. As with OSDs, adding new monitors to the Ceph cluster is an online process. In this recipe, we will configure ceph-node4 as a monitor node.
Since this is a test Ceph cluster, we will add ceph-node4 as the fourth monitor node. However, in a production setup, you should always have an odd number of monitor nodes in your Ceph cluster, as this improves resiliency.
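The arithmetic behind the odd-number advice is worth spelling out: a monitor quorum requires a strict majority, floor(n/2) + 1, so an even-numbered monitor adds a node without adding any failure tolerance. A minimal sketch of the math:

```shell
#!/bin/sh
# Quorum majority is floor(n/2) + 1; tolerated failures are the rest.
# Note how 3 and 4 monitors tolerate the same single failure.
for n in 3 4 5; do
    majority=$(( n / 2 + 1 ))
    tolerated=$(( n - majority ))
    echo "monitors=$n quorum=$majority tolerated_failures=$tolerated"
done
```

Three monitors and four monitors both survive only one monitor failure, so the fourth monitor here is purely for demonstration.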
To add ceph-node4 as a monitor node, execute the following command from ceph-node1:
# ceph-deploy mon create ceph-node4
Once ceph-node4 is configured as a monitor node, check the Ceph status to see the cluster state. Notice that ceph-node4 is listed as your new Ceph monitor.
For an object storage use case, you have to deploy the Ceph RGW component, and to make your object storage service highly available and performant, you should deploy more than one instance of Ceph RGW. A Ceph object storage service can easily scale from one to several RGW nodes. The following diagram shows how multiple RGW instances can be deployed and scaled to provide an HA object storage service:
Scaling RGW is the same as adding additional RGW nodes; please refer to the Installing Rados Gateway recipe from Chapter 3, Working with Ceph Object Storage, to add more RGW nodes to your Ceph environment.