© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
A. Markelov, Certified OpenStack Administrator Study Guide, Certification Study Companion Series. https://doi.org/10.1007/978-1-4842-8804-7_9

9. Block Storage

Andrey Markelov, Stockholm, Sweden

This chapter covers 10% of the Certified OpenStack Administrator exam requirements. Block storage is an essential topic for data reliability in a private cloud. As you will learn in this chapter, you cannot build a highly available, stateful solution without block storage.

Cinder’s Architecture and Components

Instances use an ephemeral volume by default. This kind of volume does not persist: its contents are lost when the instance is terminated. One of the methods for storing data permanently in the OpenStack cloud is the block storage service, named Cinder. This service is similar in function to the Amazon EBS service.

Figure 9-1 shows the main components of Cinder.

Figure 9-1. Cinder architecture

The OpenStack block storage service consists of four services implemented as GNU/Linux daemons.
  • cinder-api is an API service that provides an HTTP endpoint for API requests. At the time of this writing, two API versions are supported and required for the cloud; with public, internal, and admin endpoints for each, Cinder provides six endpoints in total. cinder-api verifies the identity of an incoming request and then routes it through the message broker to cinder-volume for action.

  • cinder-scheduler reads requests from the message queue and selects the optimal storage provider node to create or manage the volume.

  • cinder-volume works with the storage back end through drivers. It receives requests from the scheduler and serves the read and write requests sent to the block storage service to maintain state. You can use several back ends at the same time; each back end requires one or more dedicated cinder-volume services.

  • cinder-backup works with the backup back end through the driver architecture.

Cinder uses block storage providers as its storage back ends. A list of supported drivers is at https://docs.openstack.org/cinder/latest/reference/support-matrix.html. There are many storage providers for Cinder, such as LVM/iSCSI, Ceph, NFS, Swift, and vendor-specific storage from EMC, HPE, IBM, and others.

Let’s look at these services on the OpenStack node.
# systemctl | grep cinder | grep running
  openstack-cinder-api.service                                                        loaded active running   OpenStack Cinder API Server
  openstack-cinder-backup.service                                                     loaded active running   OpenStack Cinder Backup Server
  openstack-cinder-scheduler.service                                                  loaded active running   OpenStack Cinder Scheduler Server
  openstack-cinder-volume.service                                                     loaded active running   OpenStack Cinder Volume Server
You can use the openstack volume service list command to query the status of Cinder services.
$ source keystonerc_admin
$ openstack volume service list
+------------------+--------------------+------+---------+-------+---------------------------+
| Binary           | Host               | Zone | Status  | State | Updated At                 |
+------------------+--------------------+------+---------+-------+---------------------------+
| cinder-scheduler | rdo.test.local     | nova | enabled | up    | 2022-07-25T11:49:23.000000 |
| cinder-volume    | rdo.test.local@lvm | nova | enabled | up    | 2022-07-25T11:49:25.000000 |
| cinder-backup    | rdo.test.local     | nova | enabled | up    | 2022-07-25T11:49:29.000000 |
+------------------+--------------------+------+---------+-------+---------------------------+
After examining the environment, you can see that all services run on one host. In production environments, it is more common to run the cinder-volume service on separate storage nodes. In test environments, Cinder uses the Linux Logical Volume Manager (LVM) back end and the iSCSI target provided by targetcli (http://linux-iscsi.org/wiki/Targetcli).
# systemctl | grep target.service
  target.service                                                                       loaded active exited    Restore LIO kernel target configuration
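Back to the Cinder services themselves: a back end can be taken out of scheduling and brought back with the openstack volume service set command. A minimal sketch, using the host and binary names from the earlier listing:
$ openstack volume service set --disable rdo.test.local@lvm cinder-volume
$ openstack volume service set --enable rdo.test.local@lvm cinder-volume
New volumes are not scheduled to a disabled back end, but existing volumes remain accessible.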
Now let’s look through Cinder’s main configuration file, /etc/cinder/cinder.conf. Table 9-1 shows the main configuration options available in this file.
Table 9-1. Main Configuration Options from /etc/cinder/cinder.conf

[DEFAULT]
my_ip = 192.168.122.10
  Default Cinder host name or IP address.

[DEFAULT]
backup_driver = cinder.backup.drivers.swift.SwiftBackupDriver
  Driver to use for backups.

[keystone_authtoken]
www_authenticate_uri = http://192.168.122.10:5000/
auth_type = password
auth_url = http://192.168.122.10:5000
username = cinder
password = password
user_domain_name = Default
project_name = services
project_domain_name = Default
  Authentication parameters: endpoints, domain and project names for the services project, and account information for the cinder user.

[DEFAULT]
backup_swift_url = http://192.168.122.10:8080/v1/AUTH_
backup_swift_container = volumebackups
backup_swift_object_size = 52428800
backup_swift_retry_attempts = 3
backup_swift_retry_backoff = 2
  The URL of the Swift endpoint and other Swift parameters: the name of the Swift container to use, the maximum object size, the number of retries for Swift operations, and the back-off time in seconds between retries.

[DEFAULT]
enabled_backends = lvm
  A list of back-end names to use.

[database]
connection = mysql+pymysql://cinder:[email protected]/cinder
  The connection string used to connect to the database.

[DEFAULT]
transport_url = rabbit://guest:[email protected]:5672/
  The RabbitMQ broker address, port, user name, and password.

[lvm]
target_helper = lioadm
  iSCSI target user-land tool to use (the older tgtadm is the default; lioadm is used for modern LIO iSCSI support).

[lvm]
volume_group = cinder-volumes
target_ip_address = 192.168.122.10
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir = /var/lib/cinder/volumes
volume_backend_name = lvm
  LVM back-end options: name of the LVM volume group, iSCSI target IP address, volume driver, directory for volume configuration files, and the back-end name for a given driver implementation.
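If you only need to read or change a single option, the crudini utility is convenient (a sketch; the assumption here is that crudini is installed, as it usually is on RDO-based systems). Restart the affected services after any change:
# crudini --get /etc/cinder/cinder.conf lvm volume_group
cinder-volumes
# crudini --set /etc/cinder/cinder.conf DEFAULT backup_swift_retry_attempts 5
# systemctl restart openstack-cinder-volume openstack-cinder-backup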

Managing Volumes and Mounting Them to a Nova Instance

Let’s start our example with volume creation. Two CLI commands can be used: openstack or cinder. Also, you can use the Horizon web client. Here is an example using the cinder command.
$ source keystonerc_demo
$ cinder create --display-name apresstest1 1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| cluster_name                   | None                                 |
| consistencygroup_id            | None                                 |
| consumes_quota                 | True                                 |
| created_at                     | 2022-07-25T12:56:57.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| group_id                       | None                                 |
| id                             | 0f812c6f-5531-42e5-b0ff-e9f2e6492e19 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | apresstest1                          |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 27cdeded89d24fb49c11030b8cc87f15     |
| provider_id                    | None                                 |
| replication_status             | None                                 |
| service_uuid                   | None                                 |
| shared_targets                 | True                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | a037e26c68ba406eaf6c3a1ec87227de     |
| volume_type                    | iscsi                                |
| volume_type_id                 | c046dd89-6319-4486-bdf1-455cbb2099f9 |
+--------------------------------+--------------------------------------+
The next example shows the use of the universal openstack command.
$ openstack volume create --size 1 apresstest2
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2022-07-25T12:57:56.570242           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 2f223049-85da-4d72-a18d-d3212a173b94 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | apresstest2                          |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | iscsi                                |
| updated_at          | None                                 |
| user_id             | a037e26c68ba406eaf6c3a1ec87227de     |
+---------------------+--------------------------------------+
Now you can check to ensure both volumes were created and are available.
$ openstack volume list
+--------------------------------------+-------------+-----------+------+-------------+
| ID                                   | Name        | Status    | Size | Attached to |
+--------------------------------------+-------------+-----------+------+-------------+
| 2f223049-85da-4d72-a18d-d3212a173b94 | apresstest2 | available |    1 |             |
| 0f812c6f-5531-42e5-b0ff-e9f2e6492e19 | apresstest1 | available |    1 |             |
+--------------------------------------+-------------+-----------+------+-------------+
As mentioned, Cinder uses Linux LVM in test environments. You can easily verify this with the lvs command. As shown next, there are two LVM volumes in the cinder-volumes group whose names contain the OpenStack volume IDs.
# lvs
...
  volume-0f812c6f-5531-42e5-b0ff-e9f2e6492e19 cinder-volumes Vwi-a-tz--   1.00g cinder-volumes-pool        0.00
  volume-2f223049-85da-4d72-a18d-d3212a173b94 cinder-volumes Vwi-a-tz--   1.00g cinder-volumes-pool        0.00
...
Note

The lvs command reports information about logical volumes. LVM is a common way to create an abstraction layer over block devices in modern GNU/Linux distributions. LVM can create, delete, resize, mirror, or snapshot logical volumes. Logical volumes are created from volume groups, and volume groups are usually created from physical devices. If you are unfamiliar with LVM, the manual page (man lvm at the Linux prompt) is a good starting point.

You can also manage existing volumes and create new ones from the Horizon web interface. Go to Project ➤ Volume ➤ Volumes as a regular user or Admin ➤ Volume ➤ Volumes to see all volumes as an administrator. Different subsets of options are available in each case. Examples of both views are shown in Figures 9-2 and 9-3.

Figure 9-2. Volumes in a regular user’s Horizon web interface view

Figure 9-3. Volumes for admin users in the Horizon web interface view

Deleting a volume is as easy as creating one. For example, the openstack CLI command can delete a volume, as shown in the following code.
$ openstack volume delete apresstest2
Figure 9-4 shows the volume creation dialog in the Horizon user interface. In the drop-down menu, you can see additional options for creating the volume: besides creating a volume from scratch, you can create one from another volume or from an image. On the command line, the --source and --image options of the openstack volume create command cover these cases.

Figure 9-4. Creating a volume from the Horizon web interface

Here is an example of creating a volume from a Glance image.
$ openstack volume create --size 1 --image cirros-0.5.2-x86_64 apresstest3
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2022-07-25T13:19:09.353608           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | d5e3b873-44ed-4235-a492-8acef2807d67 |
| multiattach         | False                                |
| name                | apresstest3                          |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | iscsi                                |
| updated_at          | None                                 |
| user_id             | a20b5a5995b740ff90034297335b330a     |
+---------------------+--------------------------------------+
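Similarly, the --source option clones an existing volume. A quick sketch reusing the volume created earlier (the target name is arbitrary):
$ openstack volume create --source apresstest1 --size 1 apresstest1_clone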

You can use the openstack volume show command with the volume name or ID to look at the volume properties.

Volumes are useless by themselves. Let’s try to start a new VM instance and access a volume from within this VM.
$ openstack server create --image cirros-0.5.2-x86_64 --flavor m1.tiny --network demo-net --security-group apress-sgroup --key-name apresskey1 apressinst
...
$ openstack server list
+--------------------------------------+------------+--------+----------------------+---------------------+---------+
| ID                                   | Name       | Status | Networks             | Image               | Flavor  |
+--------------------------------------+------------+--------+----------------------+---------------------+---------+
| 2f1c85bd-c680-4e7b-afa4-2367b15c9fb8 | apressinst | ACTIVE | demo-net=172.16.0.42 | cirros-0.5.2-x86_64 | m1.tiny |
+--------------------------------------+------------+--------+----------------------+---------------------+---------+
Now you can attach the apresstest3 volume to the apressinst instance.
$ openstack server add volume apressinst apresstest3
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| ID                    | d5e3b873-44ed-4235-a492-8acef2807d67 |
| Server ID             | 2f1c85bd-c680-4e7b-afa4-2367b15c9fb8 |
| Volume ID             | d5e3b873-44ed-4235-a492-8acef2807d67 |
| Device                | /dev/vdb                             |
| Tag                   | None                                 |
| Delete On Termination | False                                |
+-----------------------+--------------------------------------+
$ openstack volume list
+--------------------------------------+-------------+-----------+------+-------------------------------------+
| ID                                   | Name        | Status    | Size | Attached to                         |
+--------------------------------------+-------------+-----------+------+-------------------------------------+
| d5e3b873-44ed-4235-a492-8acef2807d67 | apresstest3 | in-use    |    1 | Attached to apressinst on /dev/vdb  |
+--------------------------------------+-------------+-----------+------+-------------------------------------+
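Inside the guest, the attached volume appears as a raw disk at /dev/vdb and must be formatted and mounted before use. A minimal sketch, run from within the instance and assuming the guest image provides sudo and mkfs.ext4:
$ sudo mkfs.ext4 /dev/vdb
$ sudo mkdir /mnt/volume
$ sudo mount /dev/vdb /mnt/volume
Unmount the file system before detaching the volume from the instance.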
You can use the openstack server remove volume command to detach a volume, as shown in the following.
$ openstack server remove volume apressinst apresstest3
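Because apresstest3 was created from an image, it is bootable, and you can also launch an instance directly from the volume instead of from an image. A sketch reusing the earlier flavor and network (the instance name is arbitrary):
$ openstack server create --volume apresstest3 --flavor m1.tiny --network demo-net apressinst2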

Creating a Volume Group for Block Storage

One of the Certified OpenStack Administrator exam objectives is to create an LVM volume group for block storage. It is not difficult, but you need to understand hard disk partitioning and the LVM hierarchy.

Let’s assume that you do not have free space in your current storage. First, you need to add a new block device (virtual hard drive, in this case) to the controller VM. Usually, you must reboot the VM after that.

Then you need to find the new device’s name. A device name refers to the entire disk. Device names can be /dev/sda, /dev/sdb, and so on, or /dev/vda, /dev/vdb, and so on when using the virtualization-aware virtio disk driver. For example, with native KVM-based virtualization in GNU/Linux, the disks appear as follows.
# fdisk -l | grep [vs]d
Disk /dev/vda: 100 GiB, 107374182400 bytes, 209715200 sectors
/dev/vda1  *       2048   2099199   2097152   1G 83 Linux
/dev/vda2       2099200 209715199 207616000  99G 8e Linux LVM
Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
You can see that the new 20 GB /dev/vdb disk has no partitions. Let’s create one partition for the whole disk.
# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.37.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x26a41428.
Command (m for help): n [ENTER]
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p  [ENTER]
Partition number (1-4, default 1): [ENTER]
First sector (2048-41943039, default 2048): [ENTER]
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-41943039, default 41943039): [ENTER]
Partition 1 of type Linux and of size 20 GiB is set
Before saving changes to the partition table, you must change the partition type number from 83 (Linux) to 8e (Linux LVM).
Command (m for help): t  [ENTER]
Selected partition 1
Hex code (type L to list all codes): 8e  [ENTER]
Changed type of partition 'Linux' to 'Linux LVM'
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
# partprobe
Now you can create the new volume group for the LVM back end.
# vgcreate cinder-volumes-2 /dev/vdb1
  Physical volume "/dev/vdb1" successfully created
  Volume group "cinder-volumes-2" successfully created

The new volume group, cinder-volumes-2, is used later in this chapter.

Managing Quotas

It is possible to set quotas for Cinder volumes. Default quotas for new projects are defined in the Cinder configuration file. Some of them are shown in Table 9-2.
Table 9-2. Quota Configuration Options from /etc/cinder/cinder.conf

quota_volumes = 10
  The number of volumes allowed per project.

quota_snapshots = 10
  The number of volume snapshots allowed per project.

quota_gigabytes = 1000
  The total amount of storage, in gigabytes, allowed for volumes and snapshots per project.

quota_backups = 10
  The number of volume backups allowed per project.

quota_backup_gigabytes = 1000
  The total amount of storage, in gigabytes, allowed for backups per project.

You can show or modify Cinder quotas using the cinder CLI command or the Horizon web interface. In Horizon, quotas for existing projects can be found under Identity ➤ Projects; choose Modify Quotas from the drop-down menu to the right of the project name. From the command line, you need to know the project ID.
$ openstack project list
+----------------------------------+----------+
| ID                               | Name     |
+----------------------------------+----------+
| 27cdeded89d24fb49c11030b8cc87f15 | admin    |
| 3a9a59175cce4a74a72c882947e8bc86 | apress   |
| 53d4fd6c5b1d44e89e604957c4df4fc2 | services |
| 9e0c535c2240405b989afa450681df18 | demo     |
+----------------------------------+----------+
Then you can show the quotas for the demo project.
$ cinder quota-show 9e0c535c2240405b989afa450681df18
+-----------------------+-------+
| Property              | Value |
+-----------------------+-------+
| backup_gigabytes      | 1000  |
| backups               | 10    |
| gigabytes             | 1000  |
| gigabytes___DEFAULT__ | -1    |
| gigabytes_iscsi       | -1    |
| groups                | 10    |
| per_volume_gigabytes  | -1    |
| snapshots             | 10    |
| snapshots___DEFAULT__ | -1    |
| snapshots_iscsi       | -1    |
| volumes               | 10    |
| volumes___DEFAULT__   | -1    |
| volumes_iscsi         | -1    |
+-----------------------+-------+
The following shows the current usage of the demo project’s quota.
$ cinder quota-usage 9e0c535c2240405b989afa450681df18
+-----------------------+--------+----------+-------+-----------+
| Type                  | In_use | Reserved | Limit | Allocated |
+-----------------------+--------+----------+-------+-----------+
| backup_gigabytes      | 0      | 0        | 1000  |           |
| backups               | 0      | 0        | 10    |           |
| gigabytes             | 2      | 0        | 1000  |           |
| gigabytes___DEFAULT__ | 0      | 0        | -1    |           |
| gigabytes_iscsi       | 2      | 0        | -1    |           |
| groups                | 0      | 0        | 10    |           |
| per_volume_gigabytes  | 0      | 0        | -1    |           |
| snapshots             | 0      | 0        | 10    |           |
| snapshots___DEFAULT__ | 0      | 0        | -1    |           |
| snapshots_iscsi       | 0      | 0        | -1    |           |
| volumes               | 2      | 0        | 10    |           |
| volumes___DEFAULT__   | 0      | 0        | -1    |           |
| volumes_iscsi         | 2      | 0        | -1    |           |
+-----------------------+--------+----------+-------+-----------+
To update Cinder service quotas for a selected project, specify the quota name and the new value.
$ cinder quota-update --snapshots 17 9e0c535c2240405b989afa450681df18
+-----------------------+-------+
| Property              | Value |
+-----------------------+-------+
| backup_gigabytes      | 1000  |
| backups               | 10    |
| gigabytes             | 1000  |
| gigabytes___DEFAULT__ | -1    |
| gigabytes_iscsi       | -1    |
| groups                | 10    |
| per_volume_gigabytes  | -1    |
| snapshots             | 17    |
| snapshots___DEFAULT__ | -1    |
| snapshots_iscsi       | -1    |
| volumes               | 10    |
| volumes___DEFAULT__   | -1    |
| volumes_iscsi         | -1    |
+-----------------------+-------+
To reset all quotas for the project back to the defaults, use the quota-delete command.
$ cinder quota-delete 9e0c535c2240405b989afa450681df18
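The same operations are also available through the unified client, if you prefer it. A sketch using the demo project’s ID from earlier:
$ openstack quota set --snapshots 17 9e0c535c2240405b989afa450681df18
$ openstack quota show 9e0c535c2240405b989afa450681df18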

Backing up and Restoring Volumes and Snapshots

You can create a full volume backup or, starting with the Liberty release, an incremental backup. You can then restore a volume from a backup as long as the backup’s associated metadata exists in the Cinder database.

You can use the openstack volume backup create command.
$ openstack volume backup create apresstest1
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 9dffa586-53b4-40d4-967a-87b910dd3dbb |
| name  | None                                 |
+-------+--------------------------------------+
It is possible to check the status of existing backups with the following command; repeat it until the status changes from creating to available.
$ openstack volume backup list
+--------------------------------------+------+-------------+-----------+------+
| ID                                   | Name | Description | Status    | Size |
+--------------------------------------+------+-------------+-----------+------+
| 9dffa586-53b4-40d4-967a-87b910dd3dbb | None | None        | creating  |    1 |
+--------------------------------------+------+-------------+-----------+------+
$ openstack volume backup list
+--------------------------------------+------+-------------+-----------+------+
| ID                                   | Name | Description | Status    | Size |
+--------------------------------------+------+-------------+-----------+------+
| 9dffa586-53b4-40d4-967a-87b910dd3dbb | None | None        | available |    1 |
+--------------------------------------+------+-------------+-----------+------+
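Once a full backup exists, subsequent backups of the same volume can be incremental, storing only the blocks changed since the previous backup. A sketch:
$ openstack volume backup create --incremental apresstest1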
All backups go to the Swift object storage by default. You can check the volumebackups container and the objects inside it.
$ swift list
apress_cont1
apress_cont2
volumebackups
$ swift list volumebackups
volume_41b8ab16-a8e0-412f-8d37-f235f3036264/20220726121502/az_nova_backup_9dffa586-53b4-40d4-967a-87b910dd3dbb-00001
volume_41b8ab16-a8e0-412f-8d37-f235f3036264/20220726121502/az_nova_backup_9dffa586-53b4-40d4-967a-87b910dd3dbb-00002
...
Restoration of an existing backup is similar to the backup procedure.
$ openstack volume backup restore 9dffa586-53b4-40d4-967a-87b910dd3dbb apresstest1
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| backup_id   | 9dffa586-53b4-40d4-967a-87b910dd3dbb |
| volume_id   | 41b8ab16-a8e0-412f-8d37-f235f3036264 |
| volume_name | apresstest1                          |
+-------------+--------------------------------------+
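When a backup is no longer needed, you can delete it by name or ID, which also removes its objects from the Swift container:
$ openstack volume backup delete 9dffa586-53b4-40d4-967a-87b910dd3dbb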

Managing Volume Snapshots

Using volume snapshots is another way to preserve the contents of an existing volume. A snapshot provides a nondisruptive copy of the volume and is stored in Cinder’s back-end storage system, whereas backups go to Swift object storage. In the default installation, LVM takes care of creating snapshots. Do not confuse Cinder snapshots with Nova snapshots. You can take a snapshot while a VM is using the volume, but from a consistency point of view, it is best if the volume is not attached to an instance when the snapshot is taken. It is possible to create new volumes from snapshots.

Let’s look at some examples of how to work with Cinder snapshots. First, you need to know the volume ID or name.
$ openstack volume list
+--------------------------------------+-------------+-----------+------+-------------+
| ID                                   | Name        | Status    | Size | Attached to |
+--------------------------------------+-------------+-----------+------+-------------+
| 41b8ab16-a8e0-412f-8d37-f235f3036264 | apresstest1 | available |    1 |             |
+--------------------------------------+-------------+-----------+------+-------------+
Next, you can enter a command to create a snapshot.
$ openstack volume snapshot create --volume apresstest1 apresstest1_snap1
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| created_at  | 2022-07-26T13:04:23.663393           |
| description | None                                 |
| id          | 1c2daa2e-8b23-4992-acb9-286c0a8c589a |
| name        | apresstest1_snap1                    |
| properties  |                                      |
| size        | 1                                    |
| status      | creating                             |
| updated_at  | None                                 |
| volume_id   | 41b8ab16-a8e0-412f-8d37-f235f3036264 |
+-------------+--------------------------------------+
Then you should make sure that a snapshot was created.
$ openstack volume snapshot list
+--------------------------------------+-------------------+-------------+-----------+------+
| ID                                   | Name              | Description | Status    | Size |
+--------------------------------------+-------------------+-------------+-----------+------+
| 1c2daa2e-8b23-4992-acb9-286c0a8c589a | apresstest1_snap1 | None        | available |    1 |
+--------------------------------------+-------------------+-------------+-----------+------+
And now, you can show the details of the snapshot.
$ openstack volume snapshot show apresstest1_snap1
+--------------------------------------------+--------------------------------------+
| Field                                      | Value                                |
+--------------------------------------------+--------------------------------------+
| created_at                                 | 2022-07-26T13:04:23.000000           |
| description                                | None                                 |
| id                                         | 1c2daa2e-8b23-4992-acb9-286c0a8c589a |
| name                                       | apresstest1_snap1                    |
| os-extended-snapshot-attributes:progress   | 100%                                 |
| os-extended-snapshot-attributes:project_id | 9e0c535c2240405b989afa450681df18     |
| properties                                 |                                      |
| size                                       | 1                                    |
| status                                     | available                            |
| updated_at                                 | 2022-07-26T13:04:24.000000           |
| volume_id                                  | 41b8ab16-a8e0-412f-8d37-f235f3036264 |
+--------------------------------------------+--------------------------------------+
Finally, you can create a new volume from the snapshot. As part of the creation process, you can specify the new volume’s size in gigabytes.
$ openstack volume create --snapshot apresstest1_snap1 --size 1 apresstest2_from_snap
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2022-07-26T13:24:52.301501           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | f7960041-ca6e-4b11-ade3-2df2b81d02a2 |
| multiattach         | False                                |
| name                | apresstest2_from_snap                |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | 1c2daa2e-8b23-4992-acb9-286c0a8c589a |
| source_volid        | None                                 |
| status              | creating                             |
| type                | iscsi                                |
| updated_at          | None                                 |
| user_id             | a20b5a5995b740ff90034297335b330a     |
+---------------------+--------------------------------------+
You can also delete the snapshot if needed.
$ openstack volume snapshot delete apresstest1_snap1
Figure 9-5 shows the Volume Snapshots tab in the Horizon web user interface.

Figure 9-5. Working with snapshots in the Horizon web interface view

Setting up Storage Pools

Cinder allows you to use multiple storage pools and storage drivers at the same time. You can find the list, which contains more than 50 storage drivers, on Cinder’s Support Matrix web page (https://docs.openstack.org/cinder/latest/reference/support-matrix.html).

When using two or more back ends, whether with the same or different types of drivers, you must enumerate them all in the [DEFAULT] section of the cinder.conf configuration file.
[DEFAULT]
enabled_backends = lvmA, lvmB, nfsA
You need to add sections with back-end-specific information for each back end. Here is an example for two LVM back ends and one NFS back end.
[lvmA]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=lvmA
[lvmB]
volume_group=cinder-volumes-2
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=lvmB
[nfsA]
nfs_shares_config=/etc/cinder/shares.txt
volume_driver=cinder.volume.drivers.nfs.NfsDriver
volume_backend_name=nfsA
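The NFS driver reads its share list from the file named in nfs_shares_config; each line is one export in host:/path form (the server address below is only an illustrative assumption):
# cat /etc/cinder/shares.txt
192.168.122.100:/export/cinder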
If you want to give users the ability to choose the back end on which their volumes are created, the admin must define volume types.
$ source ~/keystonerc_admin
$ cinder type-create lvm1
$ cinder type-create lvm2
$ cinder type-create nfs1
$ cinder type-key lvm1 set volume_backend_name=lvmA
$ cinder type-key lvm2 set volume_backend_name=lvmB
$ cinder type-key nfs1 set volume_backend_name=nfsA
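A user can then request a particular back end by passing the type at creation time; the scheduler matches the type’s volume_backend_name extra spec against the back-end sections defined earlier. A sketch using the types just created (the volume name is arbitrary):
$ source keystonerc_demo
$ openstack volume create --type lvm2 --size 1 apresstest_on_lvmB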

Summary

You will never find a production-grade OpenStack installation without a block storage service. Knowledge of this service will serve you well in real life after passing the exam.

The next chapter explores some troubleshooting techniques.

Review Questions

  1. How many cinder-volume services exist in a typical installation?
     A. One
     B. At least three
     C. One per storage back end
     D. One per database instance

  2. Which command creates a volume named test with a 1 GB size?
     A. openstack volume create test 1
     B. cinder create --name test
     C. openstack volumes create --size 1 test
     D. cinder create --display-name test 1

  3. What is the partition type code for Linux LVM?
     A. 82
     B. 8e
     C. 83
     D. 1F

  4. How does a Cinder backup differ from a snapshot? (Choose two.)
     A. Backup is stored in Glance.
     B. Backup is stored in Swift.
     C. Backup can't be incremental.
     D. Backup can be incremental.

Answers

  1. C
  2. D
  3. B
  4. B and D