Scaling up your cluster

Scaling up the Ceph cluster is one of the most important tasks for a Ceph administrator. This includes adding more monitor and OSD nodes to your cluster. We recommend that you use an odd number of monitor nodes for high availability and quorum maintenance; however, this is not mandatory. Scaling monitor and OSD nodes up or down is an entirely online operation and does not incur downtime. In our test deployment, we have a single node, ceph-node1, which acts as both a monitor and an OSD node. Let's now add two more monitors to our Ceph cluster.
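Before adding new monitors, it can be useful to record the current cluster and quorum state so that you can compare it after each addition. A minimal check, run from any node that holds an admin keyring, looks like this:

    # ceph -s
    # ceph mon stat
    # ceph quorum_status --format json-pretty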

Adding monitors

Proceed with the following steps:

  1. Log in to ceph-node2 and create directories:
    # mkdir -p /var/lib/ceph/mon/ceph-ceph-node2 /tmp/ceph-node2
    
  2. Edit the /etc/ceph/ceph.conf file and add the new monitor information under [mon] section:
    [mon.ceph-node2]
    mon_addr = 192.168.57.102:6789
    host = ceph-node2
    
  3. Extract keyring information from the Ceph cluster:
    # ceph auth get mon. -o /tmp/ceph-node2/monkeyring
    
  4. Retrieve the monitor map from the Ceph cluster:
    # ceph mon getmap -o /tmp/ceph-node2/monmap
    
  5. Build a fresh monitor filesystem using the keyring and the existing monmap:
    # ceph-mon -i ceph-node2 --mkfs --monmap /tmp/ceph-node2/monmap --keyring /tmp/ceph-node2/monkeyring
    
  6. Add the new monitor to the cluster:
    # ceph mon add ceph-node2 192.168.57.102:6789
    
  7. Once the monitor is added, check the cluster status, as shown after this list; you will notice that we now have two monitors in our Ceph cluster. You can ignore the clock skew warning for now, or you can configure NTP on all your nodes so that their clocks are synchronized. We have already discussed NTP configuration in the Scaling up your Ceph cluster – monitor and OSD addition section of Chapter 2, Ceph Instant Deployment.
  8. Repeat the same steps to add ceph-node3 as your third monitor. Once you have added it, check your cluster status again and you will notice the third monitor in the Ceph cluster.
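
The status checks referenced in steps 7 and 8 use the same commands shown at the beginning of this section. Assuming the monitor daemons on the new nodes have been started (on sysvinit-based deployments, something like service ceph start mon.ceph-node2 starts the daemon; the exact command varies by release and init system), a quick verification is:

    # ceph -s
    # ceph mon stat

The monitor map and quorum in the output should now include ceph-node2, and later ceph-node3.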

Adding OSDs

It is easy to scale up your cluster by adding OSDs on the fly. Earlier in this chapter, we learned how to create OSDs; the process of scaling up the cluster by adding more OSDs is similar. Log in to the node whose disks need to be added to the cluster. Step 3 below requires the cluster UUID (fsid); a quick way to look it up is shown next, followed by the steps themselves.
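Assuming the default configuration file location, either of the following will display the cluster UUID:

    # ceph fsid
    # grep fsid /etc/ceph/ceph.conf

With the fsid noted, proceed with the following steps: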

  1. List the available disks:
    # ceph-disk list
    
  2. Label the disks with GPT:
    # parted /dev/sdb mklabel GPT
    # parted /dev/sdc mklabel GPT
    # parted /dev/sdd mklabel GPT
    
  3. Prepare the disks with the required filesystem and instruct them to join the cluster by providing the cluster UUID (fsid):
    # ceph-disk prepare --cluster ceph --cluster-uuid 07a92ca3-347e-43db-87ee-e0a0a9f89e97 --fs-type xfs /dev/sdb
    # ceph-disk prepare --cluster ceph --cluster-uuid 07a92ca3-347e-43db-87ee-e0a0a9f89e97 --fs-type xfs /dev/sdc
    # ceph-disk prepare --cluster ceph --cluster-uuid 07a92ca3-347e-43db-87ee-e0a0a9f89e97 --fs-type xfs /dev/sdd
    
  4. Activate the disks so that Ceph can start the OSD services and the OSDs can join the cluster:
    # ceph-disk activate /dev/sdb1
    # ceph-disk activate /dev/sdc1
    # ceph-disk activate /dev/sdd1
    
  5. Repeat these steps on all other nodes whose disks you want to add to the cluster. Finally, check your cluster status, as shown after this list; you will notice that the new OSDs are in the up and in states.
  6. Check your cluster's OSD tree; this shows each OSD and the physical node it resides on (see the example after this list).
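
As referenced in steps 5 and 6, the cluster status and OSD tree can be checked with the standard commands:

    # ceph -s
    # ceph osd tree

In the ceph osd tree output, each OSD is listed under the host it resides on, and the newly added OSDs should show a status of up.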