Scaling up your Ceph cluster – monitor and OSD addition

We now have a single-node Ceph cluster. We should scale it up to make it a distributed, reliable storage cluster. To scale up the cluster, we need to add more monitor nodes and OSDs. As per our plan, we will now configure the ceph-node2 and ceph-node3 machines as both monitor and OSD nodes.

Adding the Ceph monitor

A Ceph storage cluster requires at least one monitor to run. For high availability, a Ceph storage cluster relies on an odd number of monitors greater than one, for example, 3 or 5, to form a quorum. The monitors use the Paxos algorithm to maintain a majority agreement on the current cluster state. Since we already have one monitor running on ceph-node1, let's create two more monitors for our Ceph cluster:
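A quorum requires a strict majority of the monitors, that is, floor(N/2) + 1 of N; with three monitors the cluster can tolerate the failure of one monitor, and with five it can tolerate two. Once the additional monitors have been deployed in the following steps, you can inspect the quorum from any node that holds the admin keyring. The ceph quorum_status and ceph mon stat commands are standard Ceph CLI commands, although the exact output fields can vary between Ceph releases:

# ceph quorum_status --format json-pretty
# ceph mon stat

The quorum_names list in the first command's output should name all three monitors once they have joined the quorum.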

  1. The firewall rules should not block communication between Ceph monitor nodes. If they do, you need to adjust the firewall rules so that the monitors can form a quorum. Since this is our test setup, let's disable the firewall on all three nodes. We will run these commands from the ceph-node1 machine, unless otherwise specified:
    # service iptables stop
    # chkconfig iptables off
    # ssh ceph-node2 service iptables stop
    # ssh ceph-node2 chkconfig iptables off
    # ssh ceph-node3 service iptables stop
    # ssh ceph-node3 chkconfig iptables off
    
  2. Deploy a monitor on ceph-node2 and ceph-node3:
    # ceph-deploy mon create ceph-node2
    # ceph-deploy mon create ceph-node3
    
  3. The deploy operation should be successful; you can then check your newly added monitors in the output of the ceph status command.
  4. You might encounter warning messages related to clock skew on the new monitor nodes. To resolve this, we need to set up Network Time Protocol (NTP) on the new monitor nodes; a verification sketch follows these commands:
    # chkconfig ntpd on
    # ssh ceph-node2  chkconfig ntpd on
    # ssh ceph-node3  chkconfig ntpd on
    # ntpdate pool.ntp.org
    # ssh ceph-node2 ntpdate pool.ntp.org
    # ssh ceph-node3 ntpdate pool.ntp.org
    # /etc/init.d/ntpd start
    # ssh ceph-node2 /etc/init.d/ntpd start
    # ssh ceph-node3 /etc/init.d/ntpd start
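
After NTP is running on all three nodes, you can confirm that the clocks are synchronized and that the clock skew warning clears. This is a quick check, assuming the ntpq utility was installed along with the ntp package on each node:

# ntpq -p
# ssh ceph-node2 ntpq -p
# ssh ceph-node3 ntpq -p
# ceph health

The ntpq -p command lists the time sources each node is peering with, and ceph health should return to a healthy state once the clocks converge; it can take a few minutes for the skew warning to disappear.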
    

Adding the Ceph OSD

At this point, we have a running Ceph cluster with three monitors and three OSDs. Now we will scale out the cluster by adding more OSDs. To accomplish this, we will run the following commands from the ceph-node1 machine, unless otherwise specified.

We will follow the same method for OSD addition that we used earlier in this chapter:

# ceph-deploy disk list ceph-node2 ceph-node3
# ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
# ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd

# ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
# ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
# ceph status

Check the cluster status for the new OSDs. At this stage, your cluster will be healthy with nine OSDs in and up.
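
For a per-node breakdown rather than just the summary counts, the standard ceph osd tree and ceph osd stat commands show where each OSD lives and whether it is up and in; the exact output layout differs slightly between Ceph releases:

# ceph osd tree
# ceph osd stat

The ceph osd tree output groups OSDs under their host buckets, so you should see three OSDs under each of ceph-node1, ceph-node2, and ceph-node3, while ceph osd stat should report nine OSDs up and in.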
