Configuring the Ceph client

Any regular Linux host (RHEL- or Debian-based) can act as a Ceph client. The client interacts with the Ceph storage cluster over the network to store or retrieve user data. Ceph RBD support was merged into the mainline Linux kernel in version 2.6.34.

How to do it…

As we have done earlier, we will set up a Ceph client machine using Vagrant and VirtualBox. We will use the same Vagrantfile that we cloned in the last chapter. Vagrant will then launch an Ubuntu 14.04 virtual machine that we will configure as a Ceph client:

  1. From the directory where we cloned the ceph-cookbook git repository, launch the client virtual machine using Vagrant:
    $ vagrant status client-node1
    $ vagrant up client-node1
    
  2. Log in to client-node1:
    $ vagrant ssh client-node1
    

    Note

    Vagrant configures virtual machines with the username and password vagrant; the vagrant user has sudo rights. The default password for the root user is also vagrant.

  3. Check OS and kernel release (this is optional):
    $ lsb_release -a
    $ uname -r
    
  4. Check for RBD support in the kernel:
    $ sudo modprobe rbd
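    As noted in the introduction, mainline RBD support requires kernel 2.6.34 or newer. As an optional sanity check (our own addition, not part of the recipe), the running kernel can be compared against that minimum with sort -V:

    ```shell
    # Compare the running kernel release against the first mainline
    # release that shipped the rbd module (2.6.34). sort -V orders
    # version strings numerically, so the older version sorts first.
    min=2.6.34
    cur=$(uname -r)
    if [ "$(printf '%s\n' "$min" "$cur" | sort -V | head -n1)" = "$min" ]; then
        echo "kernel $cur has RBD support"
    else
        echo "kernel $cur is too old for mainline RBD"
    fi
    ```

    If modprobe rbd returned without an error, the module is present regardless; this check is only a quick way to confirm the kernel is new enough before troubleshooting further.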
    
  5. Allow the ceph-node1 monitor machine to access client-node1 over SSH. To do this, copy the root SSH keys from ceph-node1 to the vagrant user on client-node1. Execute the following commands from the ceph-node1 machine unless otherwise specified:
    ## Login to ceph-node1 machine
    $ vagrant ssh ceph-node1
    $ sudo su -
    # ssh-copy-id vagrant@client-node1
    

    Provide the vagrant user's password (vagrant) for client-node1 when prompted. Once the SSH keys are copied from ceph-node1 to client-node1, you should be able to log in to client-node1 without a password.

  6. Use the ceph-deploy utility from ceph-node1 to install the Ceph binaries on client-node1:
    # cd /etc/ceph
    # ceph-deploy --username vagrant install client-node1
    
  7. Copy the Ceph configuration file (ceph.conf) to client-node1:
    # ceph-deploy --username vagrant config push client-node1
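    The config push subcommand copies /etc/ceph/ceph.conf from the admin node into /etc/ceph/ on the client. It is a plain INI file; a minimal sketch of its general shape is shown below (the fsid and monitor address are placeholders, and your generated file will differ):

    ```
    [global]
    fsid = <your-cluster-uuid>
    mon_initial_members = ceph-node1
    mon_host = <monitor-ip>
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    ```

    The mon_host entry is what lets client tools find a monitor and bootstrap their view of the cluster.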
    
  8. The client machine will require Ceph keys to access the Ceph cluster. Ceph creates a default user, client.admin, which has full access to the Ceph cluster. It is not recommended to share the client.admin keys with client nodes. A better approach is to create a new Ceph user with its own key and allow access only to specific Ceph pools.

    In our case, we will create a Ceph user, client.rbd, with access to the rbd pool. By default, Ceph block devices are created on the rbd pool:

    # ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=rbd'
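    In this capability string, mon 'allow r' grants read access to the monitors, and the osd capability grants read/write/execute restricted to the rbd pool. On success, ceph prints a keyring entry for the new user; the key below is a placeholder, and your generated key will differ:

    ```
    [client.rbd]
            key = <generated-base64-key>
    ```
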
    
  9. Add the key to client-node1 machine for client.rbd user:
    # ceph auth get-or-create client.rbd | ssh vagrant@client-node1 sudo tee /etc/ceph/ceph.client.rbd.keyring
    
  10. At this point, client-node1 should be ready to act as a Ceph client. To check the cluster status from the client-node1 machine, provide the username and secret key:
    $ vagrant ssh client-node1
    $ sudo su -
    # cat /etc/ceph/ceph.client.rbd.keyring >> /etc/ceph/keyring
    ### Since we are not using the default user client.admin we need to supply username that will connect to Ceph cluster.
    # ceph -s --name client.rbd
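    Typing --name client.rbd on every command gets tedious. One optional convenience (our own addition, not a step in this recipe) is the CEPH_ARGS environment variable, which the ceph and rbd tools append to their command line:

    ```shell
    # Export default CLI arguments so ceph/rbd commands run as client.rbd
    # without repeating --name and --keyring each time (session-local).
    export CEPH_ARGS='--name client.rbd --keyring /etc/ceph/ceph.client.rbd.keyring'
    echo "$CEPH_ARGS"
    ```

    With this set, ceph -s behaves like ceph -s --name client.rbd for the current shell session.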
    