We can deploy Ceph for testing in a sandbox environment using Oracle VirtualBox virtual machines. This virtual setup helps us discover and experiment with Ceph storage clusters as if we were working in a real environment. Since Ceph is open source software-defined storage that is deployed on top of commodity hardware in production environments, we can imitate a fully functioning Ceph environment on virtual machines, instead of real commodity hardware, for our testing purposes.
Oracle VirtualBox is free software available at http://www.virtualbox.org for Windows, Mac OS X, and Linux. We must fulfil the system requirements for the VirtualBox software so that it can function properly during our testing. The Ceph test environment that we create on VirtualBox virtual machines will be used for the rest of the chapters in this book. We assume that your host operating system is a Unix variant; on Microsoft Windows host machines, use the absolute path to run the VBoxManage command, which is by default C:\Program Files\Oracle\VirtualBox\VBoxManage.exe.
The system requirements for VirtualBox depend upon the number and configuration of the virtual machines running on top of it. Your VirtualBox host requires an x86-type processor (Intel or AMD), a few gigabytes of memory (enough to run three Ceph virtual machines), and a couple of gigabytes of hard drive space. To begin with, download VirtualBox from http://www.virtualbox.org/ and follow the installation procedure. We will also need to download the CentOS 6.4 Server ISO image from http://vault.centos.org/6.4/isos/.
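Once VirtualBox is installed, a quick way to confirm that its command-line tools work is to query the version; this sanity check is our own suggestion rather than part of the original procedure (on Windows, remember to use the full path to VBoxManage.exe):
# VBoxManage --version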
To set up our sandbox environment, we will create a minimum of three virtual machines; you can create more machines for your Ceph cluster, depending on the hardware configuration of your host machine. We will first create a single VM and install the OS on it; after this, we will clone that VM twice. This will save us a lot of time and increase our productivity. Let's begin by performing the following steps to create the first virtual machine:
The VirtualBox host machine used throughout this demonstration is a Mac OS X machine, which is a UNIX-type host. If you are performing these steps on a non-UNIX machine, that is, on a Windows-based host, keep in mind that the VirtualBox host-only adapter name will be something like VirtualBox Host-Only Ethernet Adapter #<adapter number>. Please run these commands with the correct adapter names. On Windows-based hosts, you can check the VirtualBox networking options in Oracle VM VirtualBox Manager by navigating to File | VirtualBox Settings | Network | Host-only Networks.
For UNIX-based VirtualBox hosts:
# VBoxManage hostonlyif remove vboxnet1
# VBoxManage hostonlyif create
# VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.57.1 --netmask 255.255.255.0
For Windows-based VirtualBox hosts:
# VBoxManage.exe hostonlyif remove "VirtualBox Host-Only Ethernet Adapter"
# VBoxManage.exe hostonlyif create
# VBoxManage.exe hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip 192.168.57.1 --netmask 255.255.255.0
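To confirm that the host-only adapter was created and configured with the 192.168.57.1 address, you can list the host-only interfaces; this verification step is an assumption on our part, not part of the original procedure:
# VBoxManage list hostonlyifs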
The following is the step-by-step process to create virtual machines using CLI commands:
# VBoxManage createvm --name ceph-node1 --ostype RedHat_64 --register
# VBoxManage modifyvm ceph-node1 --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet1
For Windows VirtualBox hosts:
# VBoxManage.exe modifyvm ceph-node1 --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 "VirtualBox Host-Only Ethernet Adapter"
# VBoxManage storagectl ceph-node1 --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on --bootable on # VBoxManage storageattach ceph-node1 --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium CentOS-6.4-x86_64-bin-DVD1.iso
# VBoxManage storagectl ceph-node1 --name "SATA Controller" --add sata --controller IntelAHCI --hostiocache on --bootable on # VBoxManage createhd --filename OS-ceph-node1.vdi --size 10240 # VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium OS-ceph-node1.vdi
# VBoxManage createhd --filename ceph-node1-osd1.vdi --size 10240
# VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium ceph-node1-osd1.vdi
# VBoxManage createhd --filename ceph-node1-osd2.vdi --size 10240
# VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium ceph-node1-osd2.vdi
# VBoxManage createhd --filename ceph-node1-osd3.vdi --size 10240
# VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium ceph-node1-osd3.vdi
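Before booting the VM, it is worth verifying that both controllers and all four disks are attached as expected. Inspecting the VM configuration is a simple, assumed way to do this; the grep pattern merely filters the storage-related lines from the output:
# VBoxManage showvminfo ceph-node1 | grep "Controller"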
# VBoxManage startvm ceph-node1 --type gui
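Once started, the VM should boot from the CentOS ISO attached earlier. You can confirm from the host that the machine is running; this quick check is an assumption on our part:
# VBoxManage list runningvms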
Once the VM boots from the CentOS ISO, install the operating system. After the installation, log in to the VM and edit the /etc/sysconfig/network file to change the hostname parameter:
HOSTNAME=ceph-node1
Edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and add:
ONBOOT=yes
BOOTPROTO=dhcp
Edit the /etc/sysconfig/network-scripts/ifcfg-eth1 file and add:
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.57.101
NETMASK=255.255.255.0
Edit the /etc/hosts file and add:
192.168.57.101 ceph-node1
192.168.57.102 ceph-node2
192.168.57.103 ceph-node3
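For these settings to take effect, restart the network service on the VM and confirm that eth1 holds the static address. Pinging the host-only gateway address that we assigned to the VirtualBox host is a quick sanity check we suggest here, not a step from the original procedure:
# service network restart
# ip addr show eth1
# ping -c 2 192.168.57.1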
You should now be able to log in to the VM from your host machine over the host-only network:
# ssh [email protected]
# VBoxManage clonevm ceph-node1 --name ceph-node2 --register
# VBoxManage clonevm ceph-node1 --name ceph-node3 --register
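If the cloning succeeds, both new machines should appear in the list of registered VMs; this quick check is an assumed verification step on our part:
# VBoxManage list vms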
# VBoxManage startvm ceph-node1
# VBoxManage startvm ceph-node2
# VBoxManage startvm ceph-node3
Edit the /etc/sysconfig/network file and change the hostname parameter:
HOSTNAME=ceph-node2
Edit the /etc/sysconfig/network-scripts/ifcfg-<first interface name> file and add:
DEVICE=<correct device name of your first network interface; check ifconfig -a>
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=<correct MAC address of your first network interface; check ifconfig -a>
Edit the /etc/sysconfig/network-scripts/ifcfg-<second interface name> file and add:
DEVICE=<correct device name of your second network interface; check ifconfig -a>
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.57.102
NETMASK=255.255.255.0
HWADDR=<correct MAC address of your second network interface; check ifconfig -a>
Edit the /etc/hosts file and add:
192.168.57.101 ceph-node1
192.168.57.102 ceph-node2
192.168.57.103 ceph-node3
After performing these changes, you should restart your virtual machine to bring the new hostname into effect. The restart will also update your network configurations.
Edit the /etc/sysconfig/network file and change the hostname parameter:
HOSTNAME=ceph-node3
Edit the /etc/sysconfig/network-scripts/ifcfg-<first interface name> file and add:
DEVICE=<correct device name of your first network interface; check ifconfig -a>
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=<correct MAC address of your first network interface; check ifconfig -a>
Edit the /etc/sysconfig/network-scripts/ifcfg-<second interface name> file and add:
DEVICE=<correct device name of your second network interface; check ifconfig -a>
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.57.103
NETMASK=255.255.255.0
HWADDR=<correct MAC address of your second network interface; check ifconfig -a>
Edit the /etc/hosts file and add:
192.168.57.101 ceph-node1
192.168.57.102 ceph-node2
192.168.57.103 ceph-node3
After performing these changes, you should restart your virtual machine to bring the new hostname into effect; the restart will also update your network configurations.
At this point, we have prepared three virtual machines; make sure that each VM can communicate with the others. They should also have access to the Internet so that they can fetch Ceph packages.
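A quick way to confirm both requirements is to log in to ceph-node1 and ping the other two nodes by hostname, as well as an external host. This assumes the /etc/hosts entries shown earlier are in place; the Internet target used here is only illustrative:
# ping -c 2 ceph-node2
# ping -c 2 ceph-node3
# ping -c 2 ceph.com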