To deploy our first Ceph cluster, we will use the ceph-deploy tool to install and configure Ceph on all three virtual machines. The ceph-deploy tool is part of the Ceph software-defined storage suite and simplifies the deployment and management of a Ceph storage cluster.
Since we created three virtual machines that run CentOS 6.4 and have both Internet connectivity and private network connections, we will configure these machines as a Ceph storage cluster, as shown in the following diagram:
Set up passwordless SSH from ceph-node1 to the other nodes so that ceph-deploy can connect to them without prompting for a password:
# ssh-keygen
# ssh-copy-id ceph-node2
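The preceding command copies the key to ceph-node2 only; assuming the same node names used later in this chapter, repeat it for the third machine as well:
# ssh-copy-id ceph-node3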
# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
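To confirm that yum now knows about the EPEL repository (an optional check, not part of the original steps), list the configured repositories:
# yum repolist | grep -i epel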
Make sure the baseurl parameter is enabled in the /etc/yum.repos.d/epel.repo file; the baseurl parameter defines the URL for the extra Linux packages. Also make sure the mirrorlist parameter is disabled (commented out) in this file; problems have been observed during installation when the mirrorlist parameter is enabled in epel.repo. Perform this step on all three nodes.
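If you prefer not to edit the file by hand, here is a minimal sketch, assuming the stock epel.repo layout where baseurl is commented out and mirrorlist is active; it enables the former and comments out the latter:
# sed -i -e 's/^#baseurl/baseurl/' -e 's/^mirrorlist/#mirrorlist/' /etc/yum.repos.d/epel.repo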
Next, install ceph-deploy:
# yum install ceph-deploy
## Create a directory for ceph
# mkdir /etc/ceph
# cd /etc/ceph
# ceph-deploy new ceph-node1
The new subcommand of ceph-deploy deploys a new cluster with ceph as the cluster name, which is the default; it generates the cluster configuration and keyring files. List the present working directory; you will find the ceph.conf and ceph.mon.keyring files.
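For reference, the generated ceph.conf is very small at this stage; a rough illustration of its contents (the actual fsid and monitor address will be specific to your cluster) is:
[global]
fsid = <UUID generated by ceph-deploy>
mon_initial_members = ceph-node1
mon_host = <IP address of ceph-node1>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx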
# ceph-deploy install --release emperor ceph-node1 ceph-node2 ceph-node3
The ceph-deploy tool will first install all the dependencies followed by the Ceph Emperor binaries. Once the command completes successfully, check the Ceph version and Ceph health on all the nodes, as follows:
# ceph -v
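If the installation succeeded, every node should report an Emperor release; the exact point release and build identifier will vary, but the output looks roughly like this:
ceph version 0.72.2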
Next, create the first monitor on ceph-node1:
# ceph-deploy mon create-initial
Once monitor creation is successful, check your cluster status. Your cluster will not be healthy at this stage:
# ceph status
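The output at this point typically reports HEALTH_ERR, since the cluster has a single monitor and no OSDs yet; an illustrative, trimmed example (your identifiers will differ):
    health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
    monmap e1: 1 mons at {ceph-node1=<IP>:6789/0}
    osdmap e1: 0 osds: 0 up, 0 in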
List the disks available on ceph-node1:
# ceph-deploy disk list ceph-node1
From the output, carefully identify the disks (other than OS-partition disks) on which we should create Ceph OSDs. In our case, the disk names will ideally be sdb, sdc, and sdd.
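If the ceph-deploy listing is hard to read, you can cross-check the device names directly on the node (an optional step, not part of the original procedure):
# fdisk -l | grep -i '^Disk /dev'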
The disk zap subcommand will destroy the existing partition table and content of the disk. Before running the following command, make sure you use the correct disk device names:
# ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
The osd create subcommand will first prepare the disk, that is, erase the disk and create a filesystem on it, which is xfs by default. It will then activate the disk's first partition as the data partition and the second partition as the journal:
# ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
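Once osd create completes, a quick additional check (not part of the original steps) is to confirm that the three OSDs have registered with the cluster:
# ceph osd tree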
# ceph status
At this stage, your cluster will not be healthy. We need to add a few more nodes to the Ceph cluster so that it can set up distributed, replicated object storage and hence become healthy.
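When you later add ceph-node2 and ceph-node3, the same pattern applies; as a sketch, assuming those nodes carry identical sdb, sdc, and sdd disks:
# ceph-deploy disk zap ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
# ceph-deploy disk zap ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd
# ceph-deploy osd create ceph-node2:sdb ceph-node2:sdc ceph-node2:sdd
# ceph-deploy osd create ceph-node3:sdb ceph-node3:sdc ceph-node3:sdd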