© W. David Ashley 2019
W. David Ashley, Foundations of Libvirt Development, https://doi.org/10.1007/978-1-4842-4862-1_5

5. Storage Pools and Volumes

W. David Ashley
Austin, TX, USA

libvirt provides storage management on the physical host through storage pools and volumes. Storage pools and volumes can be located on the main host or accessed remotely, for example via NFS mounts on the host. Storage pools are areas of storage set aside to contain volumes. Volumes are used directly by client domains for file system storage and are formatted by the client domain. A volume can contain whatever partition types the client domain chooses and is controlled strictly by that domain. A volume is always a member of exactly one storage pool. A domain can, however, have access to multiple volumes, as long as none of those volumes is shared with another active client domain; libvirt provides no facility for sharing volumes between domains.

Pools Overview

This section will introduce the different types of storage pools and the advantages and disadvantages of using them. A storage pool is a quantity of storage set aside by an administrator, often a dedicated storage administrator, for use by virtual machines. Storage pools are divided into storage volumes either by the storage administrator or by the system administrator, and the volumes are assigned to VMs as block devices.

NFS Storage Pools

The storage administrator responsible for an NFS server creates a share to store the virtual machines’ data. The system administrator defines a pool on the virtualization host with the details of the share (e.g., nfs.example.com:/path/to/share should be mounted on /vm_data). When the pool is started, libvirt mounts the share on the specified directory, just as if the system administrator had logged in and executed mount nfs.example.com:/path/to/share /vm_data. If the pool is configured to autostart, libvirt ensures that the NFS share is mounted on the specified directory when libvirt is started.
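
A pool definition for this scenario might look like the following sketch; the host name, export path, and mount point are the hypothetical values from the example above:
<pool type="netfs">
  <name>vm_data</name>
  <source>
    <host name="nfs.example.com"/>
    <dir path="/path/to/share"/>
    <format type="nfs"/>
  </source>
  <target>
    <path>/vm_data</path>
  </target>
</pool>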

Once the pool is started, the files in the NFS share are reported as volumes, and the storage volumes’ paths may be queried using the libvirt APIs. The volumes’ paths can then be copied into the section of a VM’s XML definition describing the source storage for the VM’s block devices. In the case of NFS, an application using the libvirt methods can create and delete volumes in the pool (files in the NFS share) up to the limit of the size of the pool (the storage capacity of the share). Not all pool types support creating and deleting volumes. Stopping the pool (somewhat misleadingly referred to by virsh and the API as pool-destroy) undoes the start operation, in this case unmounting the NFS share. The data on the share is not modified by the destroy operation, despite the name. See the virsh man page for more details.
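
For example, once the pool is started, a volume’s path can be retrieved and pasted into the disk element of a domain definition. The following is a minimal sketch, assuming pool is a virStoragePool object for the started pool and guest1.img is one of its volumes (both hypothetical):
# a minimal sketch; 'guest1.img' is a hypothetical volume in the
# NFS pool described above
vol = pool.storageVolLookupByName('guest1.img')
print(vol.path())        # e.g., /vm_data/guest1.img
# the returned path can then be used as the disk source in a
# domain's XML definition:
#   <disk type='file' device='disk'>
#     <source file='/vm_data/guest1.img'/>
#     <target dev='vda' bus='virtio'/>
#   </disk>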

The advantage of NFS storage pools is that the read/write load is moved off the client domain and the host and onto the remote server hosting the NFS share. This is also their weakness: the remote server can become overloaded if it hosts storage for too many client domains.

iSCSI Storage Pools

A second example is an iSCSI pool. A storage administrator provisions an iSCSI target to present a set of LUNs to the host running the VMs. A LUN (logical unit number) is a unique identifier that enables a device in a storage subsystem to be addressed by the Fibre Channel, iSCSI, or SCSI protocols. When libvirt is configured to manage that iSCSI target as a pool, libvirt ensures that the host logs into the iSCSI target, and it can then report the available LUNs as storage volumes. The volumes’ paths can be queried and used in the VMs’ XML definitions as in the NFS example. In this case, the LUNs are defined on the iSCSI server, so libvirt cannot create and delete volumes.
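
An iSCSI pool definition might look like the following sketch; the host name and target IQN are hypothetical:
<pool type="iscsi">
  <name>iscsi_pool</name>
  <source>
    <host name="iscsi.example.com"/>
    <device path="iqn.2019-05.com.example:target1"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>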

The advantage of using iSCSI storage pools is that management and read/write activity are handled remotely rather than on the client domain or main host. However, managing this type of storage can become a burden because an extra layer of ownership of the storage volumes has been introduced.

Other Storage Pools

The most common storage pools, especially when testing, are local storage pools. These can be placed anywhere on the local storage of the main host. The advantage of this type of storage pool is that it is easily managed. The disadvantage is that when many active domains make heavy use of their volumes, local storage can slow the whole server down.

Storage pools and volumes are not required for the proper operation of VMs. Pools and volumes provide a way for libvirt to ensure that a particular piece of storage will be available for a VM, but some administrators prefer to manage their own storage, and VMs operate properly without any pools or volumes defined. On systems that do not use pools, system administrators must ensure the availability of the VMs’ storage using whatever tools they prefer by, for example, adding the NFS share to the host’s fstab so that the share is mounted at boot time.

If at this point the value of pools and volumes over traditional system administration tools is unclear, note that one of the features of libvirt is its remote protocol, so it’s possible to manage all aspects of a virtual machine’s lifecycle as well as the configuration of the resources required by the VM. These operations can be performed on a remote host entirely within the Python libvirt module. In other words, a management application using libvirt can enable a user to perform all the required tasks for configuring the host for a VM: allocating resources, running the VM, shutting it down, and deallocating the resources, without requiring shell access or any other control channel.
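
For example, any of the programs in this chapter can be pointed at a remote host simply by changing the connection URI. The following minimal sketch connects over SSH; remote.example.com is a hypothetical host name:
import libvirt
# open a connection to the libvirt daemon on a remote host via SSH
conn = libvirt.open('qemu+ssh://root@remote.example.com/system')
for pool in conn.listAllStoragePools(0):
    print('Remote pool: ' + pool.name())
conn.close()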

libvirt supports the following storage pool types:
  • Directory back end: This is a local storage directory hosted on the main host.

  • Local file system back end: This is a complete file system hosted by the local host.

  • Network file system back end: This is usually an NFS share mounted on the local host.

  • Logical back end: This is a logical volume group, typically managed by LVM.

  • Disk back end: This is one or more disks dedicated to hosting the domain client and mounted on the local host.

  • iSCSI back end: This is an iSCSI target dedicated to hosting domain storage and accessed from the local host.

  • SCSI back end: This is a SCSI host adapter whose devices are dedicated to hosting domains on the local host.

  • Multipath back end: This is usually used with iSCSI to provide a multipath file system mounted on the local host.

  • RADOS Block Device (RBD) back end: This is a Ceph RBD store accessed over the network by the local host.

  • Sheepdog back end: This is a Sheepdog distributed storage cluster accessed by the local host.

  • Gluster back end: This is a Gluster system mounted on the local host.

  • ZFS back end: This is a ZFS pool on the local host.

Listing Pools

A list of storage pool objects can be obtained using the listAllStoragePools method of the virConnect class, as shown in Listing 5-1.

The flags parameter can be one or more of the following constants:
VIR_CONNECT_LIST_STORAGE_POOLS_INACTIVE
VIR_CONNECT_LIST_STORAGE_POOLS_ACTIVE
VIR_CONNECT_LIST_STORAGE_POOLS_PERSISTENT
VIR_CONNECT_LIST_STORAGE_POOLS_TRANSIENT
VIR_CONNECT_LIST_STORAGE_POOLS_AUTOSTART
VIR_CONNECT_LIST_STORAGE_POOLS_NO_AUTOSTART
VIR_CONNECT_LIST_STORAGE_POOLS_DIR
VIR_CONNECT_LIST_STORAGE_POOLS_FS
VIR_CONNECT_LIST_STORAGE_POOLS_NETFS
VIR_CONNECT_LIST_STORAGE_POOLS_LOGICAL
VIR_CONNECT_LIST_STORAGE_POOLS_DISK
VIR_CONNECT_LIST_STORAGE_POOLS_ISCSI
VIR_CONNECT_LIST_STORAGE_POOLS_SCSI
VIR_CONNECT_LIST_STORAGE_POOLS_MPATH
VIR_CONNECT_LIST_STORAGE_POOLS_RBD
VIR_CONNECT_LIST_STORAGE_POOLS_SHEEPDOG
VIR_CONNECT_LIST_STORAGE_POOLS_GLUSTER
VIR_CONNECT_LIST_STORAGE_POOLS_ZFS
# Example-1.py
from __future__ import print_function
import sys
import libvirt
conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to qemu:///system',
          file=sys.stderr)
    exit(1)
pools = conn.listAllStoragePools(0)
if pools is None:
    print('Failed to locate any StoragePool objects.',
          file=sys.stderr)
    exit(1)
for pool in pools:
    print('Pool: '+pool.name())
conn.close()
exit(0)
Listing 5-1

Get the List of Storage Pools
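
The flag constants may be combined with a bitwise OR to narrow the results. For example, the following sketch lists only the active pools that are also marked for autostart (conn is assumed to be an open connection, as in Listing 5-1):
flags = (libvirt.VIR_CONNECT_LIST_STORAGE_POOLS_ACTIVE |
         libvirt.VIR_CONNECT_LIST_STORAGE_POOLS_AUTOSTART)
for pool in conn.listAllStoragePools(flags):
    print('Pool: ' + pool.name())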

Pool Usage

There are a number of methods available in the virStoragePool class. The example in Listing 5-2 features several of these methods, which report various attributes of a pool.
# Example-2.py
from __future__ import print_function
import sys
import libvirt
conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to qemu:///system',
          file=sys.stderr)
    exit(1)
pool = conn.storagePoolLookupByName('default')
if pool is None:
    print('Failed to locate any StoragePool objects.',
          file=sys.stderr)
    exit(1)
info = pool.info()
print('Pool: '+pool.name())
print('  UUID: '+pool.UUIDString())
print('  Autostart: '+str(pool.autostart()))
print('  Is active: '+str(pool.isActive()))
print('  Is persistent: '+str(pool.isPersistent()))
print('  Num volumes: '+str(pool.numOfVolumes()))
print('  Pool state: '+str(info[0]))
print('  Capacity: '+str(info[1]))
print('  Allocation: '+str(info[2]))
print('  Available: '+str(info[3]))
conn.close()
exit(0)
Listing 5-2

Usage of Some Storage Pool Methods

Many of the methods shown in the previous example provide information about storage pools that reside on remote file systems or disk systems as well as on local file systems. For instance, if the autostart flag is set, libvirt automatically starts the storage pool when the libvirt daemon starts, for example by mounting an NFS share. A pool whose autostart flag is not set must be started explicitly.

The isActive method indicates whether the storage pool is currently active (started). An inactive pool must be activated before its volumes can be used; the create method activates a storage pool.
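
For example, the following minimal sketch activates the default pool only when it is not already active (conn is assumed to be an open connection):
pool = conn.storagePoolLookupByName('default')
if not pool.isActive():
    # activate (start) the pool; the flags argument should be 0
    pool.create(0)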

The isPersistent method indicates whether the storage pool has a persistent definition. A value of 1 indicates that the pool’s configuration remains on the host after the pool is stopped; a value of 0 indicates a transient pool whose definition disappears when the pool is stopped.

The flags parameter of the XMLDesc method can be either 0 or the following constant, which requests the inactive (persistent) configuration of the pool rather than its live state:
VIR_STORAGE_XML_INACTIVE
Listing 5-3 shows how to get the XML description of a storage pool.
# Example-3.py
from __future__ import print_function
import sys
import libvirt
conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to qemu:///system',
          file=sys.stderr)
    exit(1)
pool = conn.storagePoolLookupByName('default')
if pool is None:
    print('Failed to locate any StoragePool objects.',
          file=sys.stderr)
    exit(1)
xml = pool.XMLDesc(0)
print(xml)
conn.close()
exit(0)
Listing 5-3

Get the XML Description of a Storage Pool

Lifecycle Control

Listing 5-4 shows how to define and remove both a persistent storage pool and a transient (nonpersistent) storage pool. Note that an active storage pool cannot be undefined; it must be stopped (destroyed) first. A pool defined with storagePoolDefineXML starts out inactive, while storagePoolCreateXML creates a transient pool that is immediately active. Note: In Listing 5-4, be sure to change <path> in the xmlDesc to a valid location on your file system with enough space.
# Example-4.py
from __future__ import print_function
import sys
import libvirt
xmlDesc = """
<pool type="dir">
  <name>mypool</name>
  <uuid>8c79f996-cb2a-d24d-9822-ac7547ab2d01</uuid>
  <capacity unit="bytes">4306780815</capacity>
  <allocation unit="bytes">237457858</allocation>
  <available unit="bytes">4069322956</available>
  <source>
  </source>
  <target>
    <path>/home/dashley/images</path>
    <permissions>
      <mode>0755</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>"""
conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to qemu:///system',
          file=sys.stderr)
    exit(1)
# define a new persistent storage pool
pool = conn.storagePoolDefineXML(xmlDesc, 0)
if pool is None:
    print('Failed to define StoragePool object.',
          file=sys.stderr)
    exit(1)
# remove the persistent storage pool definition
pool.undefine()
# create a new transient (nonpersistent) storage pool,
# which is started immediately
pool = conn.storagePoolCreateXML(xmlDesc, 0)
if pool is None:
    print('Failed to create StoragePool object.',
          file=sys.stderr)
    exit(1)
# stop the transient pool; its definition disappears with it
pool.destroy()
conn.close()
exit(0)
Listing 5-4

Create and Destroy Storage Pools

Note that the storage volumes defined in a storage pool remain on the file system unless the volume’s delete method is called. Be careful about leaving storage volumes in place, however. If they exist on a remote file system or disk, that file system may become unavailable to the guest domain, because once the pool is gone the libvirt storage system has no mechanism to reactivate the remote file system or disk at a future time.
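
When a pool and its contents really are disposable, a cleanup routine can delete each volume explicitly before stopping and undefining the pool. This is a sketch only; it permanently destroys data and assumes pool is a persistent, active virStoragePool object:
# delete every volume in the pool, then remove the pool itself
for vol in pool.listAllVolumes(0):
    vol.delete(0)    # remove the volume from the underlying storage
pool.destroy()       # stop the active pool
pool.undefine()      # remove its persistent definition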

Discovering Pool Sources

The sources for a storage pool can be discovered by examining the pool’s XML description. The example program in Listing 5-5 prints out a pool’s source description attributes. Currently the flags parameter of the XMLDesc method should be 0. Note that Listing 5-5 will not work unless a storage pool named default exists.
# Example-5.py
from __future__ import print_function
import sys
import libvirt
from xml.dom import minidom
poolName = 'default'
conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to qemu:///system',
          file=sys.stderr)
    exit(1)
sp = conn.storagePoolLookupByName(poolName)
if sp is None:
    print('Failed to find storage pool '+poolName,
          file=sys.stderr)
    exit(1)
raw_xml = sp.XMLDesc(0)
xml = minidom.parseString(raw_xml)
print('pool name: '+poolName)
spTypes = xml.getElementsByTagName('source')
for spType in spTypes:
    # minidom returns an empty string for a missing attribute,
    # so test for a non-empty value rather than comparing to None
    attr = spType.getAttribute('name')
    if attr:
        print('  name = '+attr)
    attr = spType.getAttribute('path')
    if attr:
        print('  path = '+attr)
    attr = spType.getAttribute('dir')
    if attr:
        print('  dir = '+attr)
    attr = spType.getAttribute('type')
    if attr:
        print('  type = '+attr)
    attr = spType.getAttribute('username')
    if attr:
        print('  username = '+attr)
conn.close()
exit(0)
Listing 5-5

Discover a Storage Pool’s Sources
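
Related to this, the findStoragePoolSources method of the virConnect class can probe a host for potential pool sources before any pool has been defined. Support varies by back end; the following sketch queries a hypothetical NFS server for its exports:
srcSpec = "<source><host name='nfs.example.com'/></source>"
# returns an XML document describing the sources found on the host
xml = conn.findStoragePoolSources('netfs', srcSpec, 0)
print(xml)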

Pool Configuration

There are a number of methods that can configure aspects of a storage pool, but the main one is setAutostart, as shown in Listing 5-6. Note that Listing 5-6 will not work unless a storage pool named default exists.
# Example-6.py
from __future__ import print_function
import sys
import libvirt
poolName = 'default'
conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to qemu:///system',
          file=sys.stderr)
    exit(1)
sp = conn.storagePoolLookupByName(poolName)
if sp is None:
    print('Failed to find storage pool '+poolName,
          file=sys.stderr)
    exit(1)
print('Current autostart setting: '+str(sp.autostart()))
if sp.autostart():
    sp.setAutostart(0)
else:
    sp.setAutostart(1)
print('Current autostart setting: '+str(sp.autostart()))
conn.close()
exit(0)
Listing 5-6

setAutostart Method

Volume Overview

Storage volumes are the basic unit of storage that houses a guest domain’s storage requirements. All the partitions used to house a guest domain are encapsulated by storage volumes. Storage volumes are in turn contained in storage pools. A storage pool can contain as many storage volumes as the underlying disk space will hold.

Listing Volumes

Listing 5-7 demonstrates how to list all the storage volumes contained in the default storage pool. Listing 5-7 will not work unless a storage pool named default exists.
# Example-7.py
from __future__ import print_function
import sys
import libvirt
from xml.dom import minidom
poolName = 'default'
conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to qemu:///system',
          file=sys.stderr)
    exit(1)
sp = conn.storagePoolLookupByName(poolName)
if sp is None:
    print('Failed to find storage pool '+poolName,
          file=sys.stderr)
    exit(1)
stgvols = sp.listVolumes()
print('Storage pool: '+poolName)
for stgvol in stgvols:
    print('  Storage vol: '+stgvol)
conn.close()
exit(0)
Listing 5-7

List the Storage Volumes
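
The listVolumes method returns only the volume names. As an alternative, the listAllVolumes method returns virStorageVol objects directly, which saves a separate lookup when the objects themselves are needed; a minimal sketch:
for vol in sp.listAllVolumes(0):
    # each item is a virStorageVol object, not just a name
    print('  Storage vol: ' + vol.name() + ' -> ' + vol.path())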

Volume Information

Information about a storage volume is obtained by using the info method. Listing 5-8 shows how to list the information about each storage volume in the default storage pool. Listing 5-8 will not work unless a storage pool named default exists.
# Example-8.py
from __future__ import print_function
import sys
import libvirt
poolName = 'default'
conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to qemu:///system',
          file=sys.stderr)
    exit(1)
pool = conn.storagePoolLookupByName(poolName)
if pool is None:
    print('Failed to locate any StoragePool objects.',
          file=sys.stderr)
    exit(1)
stgvols = pool.listVolumes()
print('Pool: '+pool.name())
for stgvolname in stgvols:
    print('  Volume: '+stgvolname)
    stgvol = pool.storageVolLookupByName(stgvolname)
    info = stgvol.info()
    print('    Type: '+str(info[0]))
    print('    Capacity: '+str(info[1]))
    print('    Allocation: '+str(info[2]))
conn.close()
exit(0)
Listing 5-8

List Storage Volume Information
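
The type in info[0] is a numeric constant. The following small sketch, meant to slot into the loop of Listing 5-8, translates the most common values into readable names using constants from the libvirt module:
volTypeNames = {
    libvirt.VIR_STORAGE_VOL_FILE: 'file',
    libvirt.VIR_STORAGE_VOL_BLOCK: 'block',
    libvirt.VIR_STORAGE_VOL_DIR: 'dir',
    libvirt.VIR_STORAGE_VOL_NETWORK: 'network',
}
print('    Type: ' + volTypeNames.get(info[0], 'unknown'))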

Creating and Deleting Volumes

Storage volumes are created using the storage pool’s createXML method. The type and attributes of the storage volume are specified in the XML passed to the createXML method. The flags parameter can be one or more of the following constants:
VIR_STORAGE_VOL_CREATE_PREALLOC_METADATA
VIR_STORAGE_VOL_CREATE_REFLINK
Listing 5-9 will not work if the XML entry <path> does not point to a valid location.
# Example-9.py
from __future__ import print_function
import sys
import libvirt
stgvol_xml = """
<volume>
  <name>sparse.img</name>
  <allocation>0</allocation>
  <capacity unit="G">2</capacity>
  <target>
    <path>/var/lib/virt/images/sparse.img</path>
    <permissions>
      <owner>107</owner>
      <group>107</group>
      <mode>0744</mode>
      <label>virt_image_t</label>
    </permissions>
  </target>
</volume>"""
poolName = 'default'
conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to qemu:///system',
          file=sys.stderr)
    exit(1)
pool = conn.storagePoolLookupByName(poolName)
if pool is None:
    print('Failed to locate any StoragePool objects.',
          file=sys.stderr)
    exit(1)
stgvol = pool.createXML(stgvol_xml, 0)
if stgvol is None:
    print('Failed to create a StorageVol object.',
          file=sys.stderr)
    exit(1)
# remove the storage volume
# physically remove the storage volume from the underlying disk media
stgvol.wipe(0)
# logically remove the storage volume from the storage pool
stgvol.delete(0)
conn.close()
exit(0)
Listing 5-9

Create a Storage Volume
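
Listing 5-9 creates a raw sparse file. Other formats can be requested with a format element under target. The following fragment is a sketch of how the same volume might be defined as a qcow2 image instead:
<volume>
  <name>sparse.qcow2</name>
  <allocation>0</allocation>
  <capacity unit="G">2</capacity>
  <target>
    <path>/var/lib/virt/images/sparse.qcow2</path>
    <format type="qcow2"/>
  </target>
</volume>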

Cloning Volumes

Cloning a storage volume is similar to creating a new storage volume, except that an existing storage volume is used for most of the attributes. Only the name and permissions in the XML parameter are used for the new volume; everything else is inherited from the existing volume.

It should be noted that cloning can take a long time to accomplish, depending on the size of the storage volume being cloned. This is because the clone process copies the data from the source volume to the new target volume. Listing 5-10 will not work if the XML entry <path> does not point to a valid location.
# Example-10.py
from __future__ import print_function
import sys
import libvirt
stgvol_xml = """
<volume>
  <name>sparse.img</name>
  <allocation>0</allocation>
  <capacity unit="G">2</capacity>
  <target>
    <path>/var/lib/virt/images/sparse.img</path>
    <permissions>
      <owner>107</owner>
      <group>107</group>
      <mode>0744</mode>
      <label>virt_image_t</label>
    </permissions>
  </target>
</volume>"""
stgvol_xml2 = """
<volume>
  <name>sparse2.img</name>
  <allocation>0</allocation>
  <capacity unit="G">2</capacity>
  <target>
    <path>/var/lib/virt/images/sparse2.img</path>
    <permissions>
      <owner>107</owner>
      <group>107</group>
      <mode>0744</mode>
      <label>virt_image_t</label>
    </permissions>
  </target>
</volume>"""
poolName = 'default'
conn = libvirt.open('qemu:///system')
if conn is None:
    print('Failed to open connection to qemu:///system',
          file=sys.stderr)
    exit(1)
pool = conn.storagePoolLookupByName(poolName)
if pool is None:
    print('Failed to locate any StoragePool objects.',
          file=sys.stderr)
    exit(1)
# create a new storage volume
stgvol = pool.createXML(stgvol_xml, 0)
if stgvol is None:
    print('Failed to create a StorageVol object.',
          file=sys.stderr)
    exit(1)
# now clone the existing storage volume
print('This could take some time...')
stgvol2 = pool.createXMLFrom(stgvol_xml2, stgvol, 0)
if stgvol2 is None:
    print('Failed to clone a StorageVol object.',
          file=sys.stderr)
    exit(1)
# remove the cloned storage volume
# physically remove the storage volume from the
#    underlying disk media
stgvol2.wipe(0)
# logically remove the storage volume from the
#    storage pool
stgvol2.delete(0)
# remove the storage volume
# physically remove the storage volume from the
#    underlying disk media
stgvol.wipe(0)
# logically remove the storage volume from the
#    storage pool
stgvol.delete(0)
conn.close()
exit(0)
Listing 5-10

Clone an Existing Storage Volume

Configuring Volumes

Listing 5-11 shows an XML description for a storage volume. Because the <allocation> is 0 while the <capacity> is 2GB, the volume is created as a sparse file that grows on demand. The <permissions> element controls the ownership and mode of the volume file, and the <label> element sets its SELinux label.
<volume>
  <name>sparse.img</name>
  <allocation>0</allocation>
  <capacity unit="G">2</capacity>
  <target>
    <path>/var/lib/virt/images/sparse.img</path>
    <permissions>
      <owner>107</owner>
      <group>107</group>
      <mode>0744</mode>
      <label>virt_image_t</label>
    </permissions>
  </target>
</volume>
Listing 5-11

XML Description for a Storage Volume

Summary

This chapter introduced storage pools and volumes. It covered the different types of storage pools along with their advantages and disadvantages, and it described volumes and some of their performance characteristics.
