Installing VSM

In the last recipe, we made all the preparations required for deploying VSM. In this recipe, we will learn how to automatically deploy VSM on all the nodes.

How to do it…

  1. In this demonstration, we will use CentOS 7 as the base operating system, so let's download the VSM release package built for CentOS 7. Log in to the vsm-controller node as cephuser and get VSM:
    $ wget https://github.com/01org/virtual-storage-manager/releases/download/v2.0.0/2.0.0-216_centos7.tar.gz
    

    Note

    VSM is also available for the Ubuntu OS and can be downloaded from https://github.com/01org/virtual-storage-manager.

  2. Extract VSM:
    $ tar -xvf 2.0.0-216_centos7.tar.gz
    $ cd 2.0.0-216
    $ ls -la
    
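    The exact listing varies by release, but the extracted directory should contain, among other files, the installer scripts and manifest templates used in the following steps:

    install.sh
    uninstall.sh
    get_pass.sh
    installrc
    manifest/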
  3. Set the controller node and agent node addresses by adding the following lines to the installrc file:
    AGENT_ADDRESS_LIST="192.168.123.101 192.168.123.102 192.168.123.103"
    CONTROLLER_ADDRESS="192.168.123.100"
    
  4. Verify the installrc file:
    $ cat installrc | egrep -v "#|^$"
    
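    The two address lines added in the previous step should appear in the filtered output, for example:

    AGENT_ADDRESS_LIST="192.168.123.101 192.168.123.102 192.168.123.103"
    CONTROLLER_ADDRESS="192.168.123.100"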
  5. In the manifest folder, create directories named after the management IP addresses of the vsm-controller and the vsm-nodes:
    $ cd manifest
    $ mkdir 192.168.123.100 192.168.123.101 192.168.123.102 192.168.123.103
    
  6. Copy the sample cluster manifest file to 192.168.123.100/cluster.manifest, which is the vsm-controller node:
    $ cp cluster.manifest.sample 192.168.123.100/cluster.manifest
    
  7. Edit the cluster manifest file that we copied in the last step and make the following changes:
    $ vim 192.168.123.100/cluster.manifest
    

    You should know that in a production environment, it's recommended that you have separate networks for Ceph Management, Ceph Public, and Ceph Cluster traffic. Using the cluster.manifest file, VSM can be instructed to use these different networks for your Ceph cluster:
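    As a reference, the network-related sections of cluster.manifest take one subnet per section. The following is a minimal sketch that assumes all three networks share this lab's 192.168.123.0/24 subnet; verify the section names against your cluster.manifest.sample and substitute your own subnets:

    [management_addr]
    192.168.123.0/24

    [ceph_public_addr]
    192.168.123.0/24

    [ceph_cluster_addr]
    192.168.123.0/24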

  8. Edit the manifest/server.manifest.sample file and make the following changes:
    1. Add the VSM controller IP, 192.168.123.100, under the [vsm_controller_ip] section.
    2. Add a disk device name for [sata_device] and [journal_device], as shown in the sketch after this list. Make sure that the sata_device and journal_device names are separated by a space.

      Note

      The server.manifest file provides several configuration options for different types of disks. In a production environment, it's recommended that you use the correct disk type based on your hardware.

    3. If you are not using 10krpm_sas disk OSDs and journals by disk path, comment out the %osd-by-path-1% %journal-by-path-1% lines in the [10krpm_sas] section, as shown in the sketch after this list.
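    The following is a minimal sketch of the edited sections of server.manifest. The device names /dev/sdb, /dev/sdc, /dev/sdd1, and /dev/sdd2 are placeholders, so substitute the OSD data and journal devices that actually exist on your vsm-nodes; also, the exact section headers may differ slightly between VSM releases, so keep the layout and format comments of your server.manifest.sample intact:

    [vsm_controller_ip]
    192.168.123.100

    [sata_device]
    #format: <sata_device> <journal_device>, separated by a space
    /dev/sdb /dev/sdd1
    /dev/sdc /dev/sdd2

    [10krpm_sas]
    #%osd-by-path-1% %journal-by-path-1%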
  9. Once you have made changes to the manifest/server.manifest.sample file, verify all the changes:
    $ cat server.manifest.sample | egrep -v "#|^$"
    
  10. Copy the manifest/server.manifest.sample file that we edited in the previous steps to all the vsm-nodes, that is, vsm-node {1,2,3}:
    $ cp server.manifest.sample 192.168.123.101/server.manifest
    $ cp server.manifest.sample 192.168.123.102/server.manifest
    $ cp server.manifest.sample 192.168.123.103/server.manifest
    
  11. Verify the manifest directory structure:
    $ tree
    
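    The output should look similar to the following, with one directory per node, the cluster.manifest on the controller, a server.manifest for each vsm-node, and the original sample files still in place:

    .
    ├── 192.168.123.100
    │   └── cluster.manifest
    ├── 192.168.123.101
    │   └── server.manifest
    ├── 192.168.123.102
    │   └── server.manifest
    ├── 192.168.123.103
    │   └── server.manifest
    ├── cluster.manifest.sample
    └── server.manifest.sample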
  12. To begin the VSM installation, add the execute permission to the install.sh file:
    $ cd ..
    $ chmod +x install.sh
    
  13. Finally, install VSM by running the install.sh file with the --check-dependence-package parameter, which downloads the packages necessary for the VSM installation from https://github.com/01org/vsm-dependencies:
    $ ./install.sh -u cephuser -v 2.0 --check-dependence-package
    

    Note

    The VSM installation will take several minutes. The installer might prompt you for the cephuser password on the vsm-controller node. In that case, enter cephuser as the password.

    In case you encounter any errors and wish to restart the VSM installation, it is recommended that you clean your system before you retry it. Execute the uninstall.sh script file for a system cleanup.

    You can also review the author's version of the VSM installation by checking the installation log file located in the ceph-cookbook repository path: ceph-cookbook/vsm/vsm_install_log.

  14. Once the VSM installation is finished, extract the password for the user admin by executing get_pass.sh on the vsm-controller node:
    $ ./get_pass.sh
    
  15. Finally, log in to the VSM dashboard at https://192.168.123.100/dashboard/vsm with the user admin and the password that we extracted in the last step.
  16. After logging in, you will land on the vsm-dashboard home page.

The VSM Cluster monitoring option shows some nice graphs for IOPS, Latency, Bandwidth, and CPU utilization, which give you the big picture of what's going on in your cluster.
