libvirt provides storage management on the physical host through storage pools and volumes. Storage pools and volumes can be located on the host itself or accessed remotely, for example via NFS mounts on the host. Storage pools are areas of storage set aside to contain volumes. Volumes are used directly by client domains for file system storage and are formatted by the client domain. A volume can hold whatever partition types the client domain chooses and is controlled strictly by that domain. A volume is always a member of a single storage pool. However, a domain can have access to multiple volumes as long as none of those volumes is shared with any other active client domain; libvirt provides no facility for sharing volumes between domains.
Pools Overview
This section will introduce the different types of storage pools and the advantages and disadvantages of using them. A storage pool is a quantity of storage set aside by an administrator, often a dedicated storage administrator, for use by virtual machines. Storage pools are divided into storage volumes either by the storage administrator or by the system administrator, and the volumes are assigned to VMs as block devices.
NFS Storage Pools
The storage administrator responsible for an NFS server creates a share to store virtual machines’ data. The system administrator defines a pool on the virtualization host with the details of the share (e.g., nfs.example.com:/path/to/share should be mounted on /vm_data). When the pool is started, libvirt mounts the share on the specified directory, just as if the system administrator logged in and executed mount nfs.example.com:/path/to/share /vm_data. If the pool is configured to autostart, libvirt ensures that the NFS share is mounted on the specified directory when the libvirt daemon is started.
Once the pool is started, the files in the NFS share are reported as volumes, and the storage volumes’ paths may be queried using the libvirt APIs. The volumes’ paths can then be copied into the section of a VM’s XML definition describing the source storage for the VM’s block devices. In the case of NFS, an application using the libvirt methods can create and delete volumes in the pool (files in the NFS share) up to the limit of the size of the pool (the storage capacity of the share). Not all pool types support creating and deleting volumes. Stopping the pool (somewhat misleadingly referred to by virsh and the API as pool-destroy) undoes the start operation, in this case, unmounting the NFS share. The data on the share is not modified by the destroy operation, despite the name. See the virsh man page for more details.
The advantage of NFS storage pools is that read/write activity is offloaded from the client domain and the virtualization host to the remote server hosting the NFS share. This is also their weakness: the remote server can become overloaded if it serves too many client domains.
iSCSI Storage Pools
A second example is an iSCSI pool. A storage administrator provisions an iSCSI target to present a set of LUNs to the host running the VMs. A LUN (logical unit number) is a unique identifier that enables separate devices in a storage subsystem to be addressed by the Fibre Channel, iSCSI, or SCSI protocols. When libvirt is configured to manage that iSCSI target as a pool, libvirt will ensure that the host logs into the iSCSI target, and libvirt can then report the available LUNs as storage volumes. The volumes’ paths can be queried and used in the VM’s XML definitions as in the NFS example. In this case, the LUNs are defined on the iSCSI server, and libvirt cannot create and delete volumes.
The advantage of using iSCSI storage pools is that management and read/write activities are taken care of remotely and not on the client domain or main host. However, management of this type of storage can become a hassle because an extra layer of ownership (for the storage volume) has been introduced.
Other Storage Pools
The most common storage pools, especially when testing, are local storage pools. These can be placed anywhere on the local storage of the main host. The advantage of this type of storage pool is that it is easily managed. The disadvantage is that if too many domains are active, it can slow the server down if each domain is heavily using its volume(s).
Storage pools and volumes are not required for the proper operation of VMs. Pools and volumes provide a way for libvirt to ensure that a particular piece of storage will be available for a VM, but some administrators prefer to manage their own storage, and VMs operate properly without any pools or volumes defined. On systems that do not use pools, system administrators must ensure the availability of the VMs’ storage using whatever tools they prefer by, for example, adding the NFS share to the host’s fstab so that the share is mounted at boot time.
If at this point the value of pools and volumes over traditional system administration tools is unclear, note that one of the features of libvirt is its remote protocol, so it’s possible to manage all aspects of a virtual machine’s lifecycle as well as the configuration of the resources required by the VM. These operations can be performed on a remote host entirely within the Python libvirt module. In other words, a management application using libvirt can enable a user to perform all the required tasks for configuring the host for a VM: allocating resources, running the VM, shutting it down, and deallocating the resources, without requiring shell access or any other control channel.
Directory back end: This is a local storage directory hosted on the main host.
Local file system back end: This is a complete file system hosted by the local host.
Network file system back end: This is usually an NFS share mounted on the local host.
Logical back end: This is an LVM volume group on the local host.
Disk back end: This is one or more disks dedicated to hosting the domain client and mounted on the local host.
iSCSI back end: This is an iSCSI mount dedicated to hosting domains and mounted on the local host.
SCSI back end: This is a SCSI mount dedicated to hosting domains and mounted on the local host.
Multipath back end: This is usually used with iSCSI to provide a multipath file system mounted on the local host.
RADOS Block Device (RBD) back end: This is a Ceph RBD image pool accessed from the local host.
Sheepdog back end: This is a sheepdog system mounted on the local host.
Gluster back end: This is a Gluster system mounted on the local host.
ZFS back end: This is a ZFS partition mounted on the local host.
Listing Pools
A list of storage pool objects can be obtained using the listAllStoragePools method of the virConnect class, as shown in Listing 5-1.
Get the List of Storage Pools
Pool Usage
Usage of Some Storage Pool Methods
Many of the methods shown in the previous example provide information about storage pools that reside on remote file systems, disk systems, or other types beyond local file systems. For instance, if the autostart flag is set, libvirt automatically makes the storage pool available when the libvirt daemon starts, mounting an NFS share if necessary. Storage pools for which autostart is not set, including those on local file systems, must be started explicitly.
The isActive method indicates whether a storage pool is currently active and available for use; an inactive pool can be activated with the create method.
The isPersistent method indicates whether a storage pool’s definition is persistent. A value of 1 indicates that the pool is persistent and its definition will remain after the pool is deactivated; a transient pool disappears once it is destroyed.
Get the XML Description of a Storage Pool
Lifecycle Control
Create and Destroy Storage Pools
Note that the storage volumes defined in a storage pool will remain on the file system unless the delete method is called. But be careful about leaving storage volumes in place. If they exist on a remote file system or disk, that file system may become unavailable to the guest domain because libvirt’s storage system will have no mechanism to reactivate the remote file system or disk at a future time.
Discovering Pool Sources
Discover a Storage Pool’s Sources
Pool Configuration
setAutostart Method
Volume Overview
Storage volumes are the basic unit of storage that houses a guest domain’s storage requirements. All the partitions used by a guest domain are encapsulated in storage volumes. Storage volumes are in turn contained in storage pools. A storage pool can contain as many storage volumes as the underlying disk partition will hold.
Listing Volumes
List the Storage Volumes
Volume Information
List Storage Volume Information
Creating and Deleting Volumes
Create a Storage Volume
Cloning Volumes
Cloning a storage volume is similar to creating a new storage volume, except that an existing storage volume is used for most of the attributes. Only the name and permissions in the XML parameter are used for the new volume; everything else is inherited from the existing volume.
Clone an Existing Storage Volume
Configuring Volumes
XML Description for a Storage Volume
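A representative volume description might look like the following; the name, path, size, and ownership values are illustrative only:

```xml
<volume>
  <name>example.qcow2</name>
  <allocation>0</allocation>
  <capacity unit='G'>10</capacity>
  <target>
    <path>/var/lib/libvirt/images/example.qcow2</path>
    <format type='qcow2'/>
    <permissions>
      <owner>107</owner>
      <group>107</group>
      <mode>0744</mode>
    </permissions>
  </target>
</volume>
```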
Summary
This chapter introduced storage pools and volumes. It covered the different types of storage pools along with the advantages and disadvantages of each, and it described volumes and some of their performance characteristics.