In the last recipe, we made all the preparations required for deploying VSM. In this recipe, we will learn how to automatically deploy VSM on all the nodes.
Log in to the vsm-controller node as cephuser and get VSM:
$ wget https://github.com/01org/virtual-storage-manager/releases/download/v2.0.0/2.0.0-216_centos7.tar.gz
VSM is also available for the Ubuntu OS and can be downloaded from https://github.com/01org/virtual-storage-manager.
Extract the VSM package:
$ tar -xvf 2.0.0-216_centos7.tar.gz
$ cd 2.0.0-216
$ ls -la
Make the following changes to the installrc file:
AGENT_ADDRESS_LIST="192.168.123.101 192.168.123.102 192.168.123.103"
CONTROLLER_ADDRESS="192.168.123.100"
Finally, verify the installrc file:
$ cat installrc | egrep -v "#|^$"
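To see what the verification step should print, here is a self-contained sketch that runs the same egrep filter over a throwaway copy of the file; only the two settings from this recipe survive the filter, since comment lines and blank lines are dropped:

```shell
# Sketch: reproduce the recipe's verification filter against a throwaway
# installrc copy containing the values used in this recipe.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
# VSM install configuration (comment lines and blanks are filtered out)
AGENT_ADDRESS_LIST="192.168.123.101 192.168.123.102 192.168.123.103"

CONTROLLER_ADDRESS="192.168.123.100"
EOF
filtered=$(egrep -v "#|^$" "$tmp")
echo "$filtered"
rm -f "$tmp"
```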
Under the manifest folder, create directories named after the management IPs of the vsm-controller and vsm-nodes:
$ cd manifest
$ mkdir 192.168.123.100 192.168.123.101 192.168.123.102 192.168.123.103
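The same four directories can also be created with a loop over the IPs; a sketch that uses a scratch directory so it can be run safely anywhere:

```shell
# Sketch: one manifest directory per management IP, created in a loop
# inside a scratch directory rather than the real manifest tree.
scratch=$(mktemp -d)
for ip in 192.168.123.100 192.168.123.101 192.168.123.102 192.168.123.103; do
    mkdir -p "$scratch/$ip"
done
ls "$scratch"
```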
Create 192.168.123.100/cluster.manifest for 192.168.123.100, which is the vsm-controller node:
$ cp cluster.manifest.sample 192.168.123.100/cluster.manifest
$ vim 192.168.123.100/cluster.manifest
You should know that in a production environment, it's recommended that you have separate networks for Ceph Management, Ceph Public, and Ceph Cluster traffic. Using the cluster.manifest
file, VSM can be instructed to use these different networks for your Ceph cluster:
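As an illustrative sketch only: a single-network lab setup like this recipe's could point all three networks at the same subnet. The section names below follow the conventions of the shipped cluster.manifest.sample but are assumptions here, so check them against your copy of the sample file:

```
[management_addr]
192.168.123.0/24

[ceph_public_addr]
192.168.123.0/24

[ceph_cluster_addr]
192.168.123.0/24
```

In production, each section would carry a distinct subnet so that management, client, and replication traffic are isolated from one another.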
Edit the manifest/server.manifest.sample file and make the following changes:
Add 192.168.123.100, the vsm-controller IP, under the [vsm_controller_ip] section.
Add the OSD and journal disk names under [sata_device] and [journal_device], as shown in the following screenshot. Make sure that the sata_device and journal_device names are separated by a space.
Since we are using 10krpm_sas disk OSDs and journals by disk path, comment the lines %osd-by-path-1% %journal-by-path-1% from the [10krpm_sas] section, as shown in the following screenshot.
Once you have made all the changes to the manifest/server.manifest.sample file, verify them:
$ cat server.manifest.sample | egrep -v "#|^$"
Copy the manifest/server.manifest.sample file that we edited in the previous steps to all the vsm-nodes, that is, vsm-node{1,2,3}:
$ cp server.manifest.sample 192.168.123.101/server.manifest
$ cp server.manifest.sample 192.168.123.102/server.manifest
$ cp server.manifest.sample 192.168.123.103/server.manifest
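The three cp commands can equally be driven by a loop over the agent IPs; a sketch against a scratch tree (the manifest body here is a placeholder, not the real sample file):

```shell
# Sketch: copy an edited server.manifest.sample into each vsm-node's
# per-IP manifest directory; the file body below is a placeholder.
scratch=$(mktemp -d)
printf '[vsm_controller_ip]\n192.168.123.100\n' > "$scratch/server.manifest.sample"
for ip in 192.168.123.101 192.168.123.102 192.168.123.103; do
    mkdir -p "$scratch/$ip"
    cp "$scratch/server.manifest.sample" "$scratch/$ip/server.manifest"
done
copies=$(ls "$scratch"/*/server.manifest | wc -l)
echo "$copies"
```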
Verify the manifest directory structure:
$ tree
Add execute permissions to the install.sh file:
$ cd ..
$ chmod +x install.sh
Begin the VSM installation by running the install.sh file with the --check-dependence-package parameter, which downloads packages that are necessary for the VSM installation from https://github.com/01org/vsm-dependencies:
$ ./install.sh -u cephuser -v 2.0 --check-dependence-package
The VSM installation will take several minutes. The installer process might require you to input the cephuser
password for the vsm-controller
node. In that case, please input cephuser
as the password.
In case you encounter any errors and wish to restart the VSM installation, it is recommended that you clean your system before you retry it. Execute the uninstall.sh
script file for a system cleanup.
You can also review the author's version of the VSM installation by checking the installation log file located in the ceph-cookbook
repository path: ceph-cookbook/vsm/vsm_install_log
.
Once the installation completes, retrieve the password for the admin user by executing get_pass.sh on the vsm-controller node:
$ ./get_pass.sh
Log in to the VSM dashboard at https://192.168.123.100/dashboard/vsm with the user admin and the password that we extracted in the last step. The vsm-dashboard landing page looks like this:
The VSM Cluster monitoring option shows some nice graphs for IOPS, Latency, Bandwidth, and CPU utilization, which gives you the big picture of what's going on in your cluster.