From zero to Ceph – deploying your first Ceph cluster

To deploy our first Ceph cluster, we will use the ceph-deploy tool to install and configure Ceph on all three virtual machines. The ceph-deploy tool is part of the Ceph software-defined storage suite and is used for easier deployment and management of a Ceph storage cluster.

Since we created three virtual machines that run CentOS 6.4 and have connectivity to the Internet as well as private network connections, we will configure these machines as a Ceph storage cluster, as shown in the following diagram:

(Figure: From zero to Ceph – deploying your first Ceph cluster)
  1. Configure passwordless SSH login from ceph-node1 to the other nodes. Execute the following commands from ceph-node1:
    • While configuring SSH, leave the passphrase empty and proceed with the default settings:
      # ssh-keygen
      
    • Copy the SSH keys to ceph-node2 and ceph-node3 by providing their root passwords. After this, you should be able to log in to these nodes without a password:
      # ssh-copy-id ceph-node2
      # ssh-copy-id ceph-node3
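
    To confirm that key-based login works, you can run a simple remote command from ceph-node1; any command (hostname is used here purely as an example) should now execute without a password prompt:
      # ssh ceph-node2 hostname
      # ssh ceph-node3 hostname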
      
  2. Install and configure EPEL on all Ceph nodes:
    1. Install EPEL, the repository that provides extra packages for your Linux system, by executing the following command on all Ceph nodes:
      # rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
      
    2. Make sure the baseurl parameter is enabled in the /etc/yum.repos.d/epel.repo file. The baseurl parameter defines the URL for the extra Linux packages. Also make sure the mirrorlist parameter is disabled (commented out) in this file; problems have been observed during installation when the mirrorlist parameter is enabled in the epel.repo file. Perform this step on all three nodes; one way to make these edits is shown in the sketch after this step.
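
      The following one-liner is a minimal sketch of both edits; it assumes the stock epel.repo layout, where the baseurl lines are commented out and the mirrorlist lines are active, so review the file afterwards to confirm the result:
      # sed -i -e 's/^#baseurl=/baseurl=/' -e 's/^mirrorlist=/#mirrorlist=/' /etc/yum.repos.d/epel.repo
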
  3. Install ceph-deploy on the ceph-node1 machine by executing the following command from ceph-node1:
    # yum install ceph-deploy
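
    Once the package is installed, you can optionally confirm that the ceph-deploy command is on the path; the --version flag simply prints the installed ceph-deploy version, which will depend on the repository used:
    # ceph-deploy --version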
    
  4. Next, we will create a Ceph cluster using ceph-deploy by executing the following commands from ceph-node1. Since ceph-deploy generates its files in the current working directory, create a directory for Ceph and run the new subcommand from inside it:
    ## Create a directory for ceph
    # mkdir /etc/ceph
    # cd /etc/ceph
    # ceph-deploy new ceph-node1
    

    The new subcommand of ceph-deploy deploys a new cluster with ceph, the default, as the cluster name; it generates the cluster configuration and keyring files. List the present working directory; you will find the ceph.conf and ceph.mon.keyring files.
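
    For example, a quick listing of the directory should show the newly generated files (the exact set of files, such as any ceph-deploy log, may vary slightly):
    # ls -l /etc/ceph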

    Note

    For this setup, we will intentionally install the Emperor release (v0.72) of the Ceph software, which is not the latest release. Later in this book, we will demonstrate the upgrade from the Emperor release to the Firefly release of Ceph.

  5. To install the Ceph software binaries on all the machines using ceph-deploy, execute the following command from ceph-node1:
    # ceph-deploy install --release emperor ceph-node1 ceph-node2 ceph-node3
    

    The ceph-deploy tool will first install all the dependencies followed by the Ceph Emperor binaries. Once the command completes successfully, check the Ceph version and Ceph health on all the nodes, as follows:

    # ceph -v
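
    Since passwordless SSH to the other nodes was set up in step 1, the same check can be run on ceph-node2 and ceph-node3 from ceph-node1 with a simple loop; this is just a convenience sketch:
    # for node in ceph-node2 ceph-node3; do ssh $node ceph -v; done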
    
  6. Create your first monitor on ceph-node1:
    # ceph-deploy mon create-initial
    

    Once monitor creation is successful, check your cluster status. Your cluster will not be healthy at this stage:

    # ceph status
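
    To look specifically at the monitor that was just created, the ceph mon stat subcommand prints a one-line summary of the monitor map; at this stage it should list only the monitor on ceph-node1:
    # ceph mon stat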
    
  7. Create an object storage device (OSD) on the ceph-node1 machine, and add it to the Ceph cluster by executing the following steps:
    1. List the disks on VM:
      # ceph-deploy disk list ceph-node1
      

      From the output, carefully identify the disks (other than the OS-partition disks) on which we should create the Ceph OSDs. In our case, the disk names should be sdb, sdc, and sdd; you can cross-check the device names as shown in the sketch after this step.
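
      To cross-check the device names before touching any disk, the standard fdisk -l command lists every disk along with its partition table, which makes it easy to tell the OS disk (with existing partitions) apart from the empty data disks:
      # fdisk -l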

    2. The disk zap subcommand will destroy the existing partition table and the data on the disk. Before running the following command, make sure you specify the correct disk device names.
      # ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
      
    3. The osd create subcommand will first prepare the disk, that is, partition it and create a filesystem (xfs by default) on it. Then, it will activate the disk's first partition as the data partition and the second partition as the journal:
      # ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
      
    4. Check the cluster status for new OSD entries:
      # ceph status
      

      At this stage, your cluster will not be healthy. We need to add a few more nodes to the Ceph cluster so that it can set up distributed, replicated object storage and hence become healthy.
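
      In addition to ceph status, the ceph osd tree subcommand gives a more focused view of the OSDs that were just added; it prints the CRUSH hierarchy with one entry per OSD and its up/down status:
      # ceph osd tree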
