Chapter 29
Inspecting Cloud and Virtualization Services

  • Objective 1.5: Compare and contrast cloud and virtualization concepts and technologies.

When designing and managing various cloud and virtualization system configurations, you need to understand how virtual and physical networks interoperate, the different disk storage choices available, how to automate booting a system, and how you can quickly install Linux distributions on your virtual machines. In addition, you need to be aware of some basic virtual machine creation and management tools. In this chapter, we’ll continue our journey, which started in Chapter 28, into cloud and virtualization topics.

Focusing on VM Tools

Various virtual machine utilities allow you to create, destroy, boot, shut down, and configure your guest VMs. There are many open-source alternatives from which to choose. Some work only at the command line and sometimes are used within shell scripts, while others are graphical. In the following sections, we’ll look at a few of these tools.

Looking at libvirt

A popular virtualization management software collection is the libvirt library. This assortment includes the following elements:

  • An application programming interface (API) library that is incorporated into several open-source VMMs (hypervisors), such as KVM
  • A daemon, libvirtd, that operates on the VM host system and executes any needed VM guest system management tasks, such as starting and stopping the VM
  • Command-line utilities, such as virt-install and virsh, that operate on the VM host system and are used to control and manage VM guest systems

While most command-line utilities that start with vir or virt typically employ the libvirt library, you can double-check this via the ldd command. An example is shown in Listing 29.1 on a CentOS distribution that has hypervisor packages installed.

Listing 29.1: Checking for libvirt employment using the ldd command

$ which virsh
/usr/bin/virsh
$
$ ldd /usr/bin/virsh | grep libvirt
        libvirt-lxc.so.0 => /lib64/libvirt-lxc.so.0 (0x00007f4a3a10c000)
        libvirt-qemu.so.0 => /lib64/libvirt-qemu.so.0 (0x00007f4a39f08000)
        libvirt.so.0 => /lib64/libvirt.so.0 (0x00007f4a39934000)
$

A primary goal of the libvirt project is to provide a single way to manage virtual machines. It supports a number of hypervisors, such as KVM, QEMU, Xen, VMware ESX, and so on. You can find out more about the libvirt project at the libvirt.org website.

Viewing virsh

One handy tool that uses the libvirt library is the virsh shell. It is a basic shell you can employ to manage your system’s virtual machines.

If you’d like to try out the virsh shell, you can obtain it via the libvirt-client or libvirt-clients package. For older distributions, it may be located in the libvirt-bin package. Package installation was covered in Chapter 13. Be aware that additional software packages are needed to create virtual machines.

If you have a VMM (hypervisor) product installed, you can employ the virsh shell to create, remove, start, stop, and manage your virtual machines. An example of entering and exiting the virsh shell is shown in Listing 29.2. Keep in mind that super user privileges are typically needed for shell commands involving virtual machines.

Listing 29.2: Exploring the virsh shell

$ virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh #
virsh # exit

$

You don’t have to enter the virsh shell in order to manage your virtual machines. The various virsh commands can be entered directly from the Bash shell, which makes them useful for those who wish to employ the commands in shell scripts to automate virtual machine administration. An example of using the virsh utility noninteractively is shown in Listing 29.3.

Listing 29.3: Using the virsh utility noninteractively

$ virsh help setvcpus
  NAME
    setvcpus - change number of virtual CPUs

  SYNOPSIS
    setvcpus <domain> <count> [--maximum] [--config]
[--live] [--current] [--guest] [--hotpluggable]

  DESCRIPTION
    Change the number of virtual CPUs in the guest domain.

  OPTIONS
    [--domain] <string>  domain name, id or uuid
    [--count] <number>  number of virtual CPUs
    --maximum        set maximum limit on next boot
    --config         affect next boot
    --live           affect running domain
    --current        affect current domain
    --guest          modify cpu state in the guest
    --hotpluggable   make added vcpus hot(un)pluggable


$
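Because virsh commands work from Bash, a short script can manage many guests at once. The sketch below is one hedged example: the guest names are hypothetical, and the helper falls back to printing the command when no hypervisor tools are installed, so the loop logic can be tried on any system.

```shell
#!/bin/sh
# Sketch: drive virsh from a script (the guest names are hypothetical).
# start_vm runs virsh when it is available; otherwise it prints the
# command it would have run.
start_vm() {
    if command -v virsh >/dev/null 2>&1; then
        virsh start "$1"
    else
        echo "would run: virsh start $1"
    fi
}

for vm in webvm1 webvm2 dbvm1; do
    start_vm "$vm"
done
```

Keep in mind that, as noted earlier, the real virsh calls typically require super user privileges.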

An easier utility than virsh to use for creating virtual machines at the command line is the virt-install utility. It is a Python program and is typically available from either the virtinst or virt-install package, depending on your distribution.

Managing with Virtual Machine Manager

Not to be confused with a hypervisor (VMM), the Virtual Machine Manager (also called vmm) is a lightweight desktop application for creating and managing virtual machines. It is a Python program available on many distributions that provide a GUI, and it is obtainable from the virt-manager package.

The Virtual Machine Manager can be initiated from a terminal emulator within the graphical environment via the virt-manager command. The Virtual Machine Manager user interface is shown on a CentOS 7 distribution in Figure 29.1.


Figure 29.1 Virtual Machine Manager

You do need super user privileges to run the Virtual Machine Manager. If the virt-manager command is not issued from an account with those privileges, a pop-up window asks for the root account password (or something similar, depending on the distribution).

One nice feature the Virtual Machine Manager provides through its View menu is performance statistic graphs. In addition, the GUI interface allows modification of guest virtual machines’ configurations, such as their virtual networks (virtual network configurations are covered later in this chapter).

The Virtual Machine Manager offers, by default, a Virtual Network Computing (VNC) client viewer (virt-viewer). Thus, a graphical (desktop environment) console can be attached to any running virtual machine. However, SPICE also can be configured to do the same. VNC and SPICE were covered in Chapter 8.

You can view screen shots of the Virtual Machine Manager, read its documentation, and peruse its code at the virt-manager.org website.

Understanding Bootstrapping

A bootstrap is a small fabric or leather loop on the back of a shoe. Nowadays you use it to help pull the shoe onto your foot by hooking a finger in the loop and tugging. From that little tool came the phrase “pull yourself up by your bootstraps,” which means to recover from a setback without any outside help.

Over time, the computer industry began mimicking that phrase via the terms bootstrapping and booting. They are often used interchangeably, but typically booting a system refers to powering up a system and having it start via its bootloader code. Bootstrapping a system refers to installing a new system using a configuration file or image of an earlier system install.

Booting with Shell Scripts

Whether you are creating guest virtual machines in the cloud or on your own local host machine, there are various ways to get them booted. Starting a few VMs via a GUI is not too terribly difficult, but if your company employs hundreds of virtual machines, you need to consider automating the process.

Using shell scripts for booting virtual machines is typically a build-your-own approach, though there are many examples on the Internet. It works best for booting guest virtual machines on a company-controlled host machine.

If you prefer not to start from scratch, take a look at GitHub for available scripts. One popular project, which has several forks, is github.com/giovtorres/kvm-install-vm.

You can create configuration files for your various virtual machines and read them into the shell script(s) for booting as needed. The guest can be booted, when the host system starts, at predetermined times, or on demand. This is a flexible approach that allows a great deal of customization.
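As a sketch of that configuration-file approach (the file location and guest names here are hypothetical), a boot script might look like the following. The helper prints what it would boot when no hypervisor tools are present, so the logic can be exercised anywhere.

```shell
#!/bin/sh
# Sketch: boot every guest listed in a configuration file, one name
# per line. The file location and guest names are hypothetical.
conf=/tmp/guest-vms.conf

# Stand-in configuration file for illustration; on a real host this
# file would already exist and list your guests.
printf 'labvm1\nlabvm2\n' > "$conf"

boot_from_conf() {
    while read -r vm; do
        [ -z "$vm" ] && continue            # skip blank lines
        if command -v virsh >/dev/null 2>&1; then
            virsh start "$vm"
        else
            echo "would boot: $vm"          # no hypervisor present
        fi
    done < "$1"
}

boot_from_conf "$conf"
```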


Booting School VMs with Shell Scripts

A school environment is an excellent setting for using guest virtual machines, especially in computer science classes. VMs provide an economical and highly flexible solution.

On the servers, guest VMs are configured to employ either temporary or permanent storage, depending on class needs. For example, students who do not need long-term storage for their files on the VM are provided with guest virtual machines with transitory virtual disks. Students who do need to store and later access their files, such as programming students, are provided with guest machines with persistent storage.

These guest virtual machines are booted as needed via scripts. The scripts can be initiated by an instructor prior to class or scheduled via a cron job (covered in Chapter 26) or even via systemd timers (systemd was covered in Chapter 6).
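As one hedged illustration of the cron approach, an entry like the following (the script path and schedule are hypothetical) could boot the classroom guests before the school day starts:

```
# Hypothetical crontab entry: run a site-provided VM startup script
# at 7:45 a.m. on weekdays.
45 7 * * 1-5  /usr/local/sbin/start-class-vms.sh
```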

In the classroom, either thin clients or student (or school-provided) laptops are available for accessing the guest virtual machines. This configuration and deployment simplifies many of the complications of a school computing environment and typically lowers the associated costs.

Kick-Starting with Anaconda

You can quickly and rather easily bootstrap a new system (physical or virtual) using the kickstart installation method. This RHEL-based technique for setting up and conducting a system installation consists of the following:

  1. Create a kickstart file to configure the system.
  2. Store the kickstart file on the network or on a detachable device, such as a USB flash drive.
  3. Place the installation source (e.g., ISO file) where it is accessible to the kickstart process.
  4. Create a boot medium that will initiate the kickstart process.
  5. Kick off the kickstart installation.

Creating the Kickstart File

A kickstart file is a text file that contains all the installation choices you desire for a new system. While you could manually create this file with a text editor, it is far easier to use an anaconda file. For Red Hat–based distros, at installation, this file is created and stored in the /root directory and is named anaconda-ks.cfg. It contains all the installation choices that were made when the system was installed.

Ubuntu distributions do not create anaconda files at system installation. Instead, you have to install the system-config-kickstart utility (the package name is system-config-kickstart) and use it to create your kickstart file. Ubuntu also has a native bootstrapping product called preseed, and it may be wise to use it instead of the Red Hat–based kickstart method. In addition, openSUSE distributions have their own utility, AutoYaST, which is another bootstrap alternative to kickstart.

An example of a CentOS 7 distribution’s anaconda file is shown snipped in Listing 29.4.

Listing 29.4: Looking at the anaconda-ks.cfg file

# cat /root/anaconda-ks.cfg
#version=DEVEL
# System authorization information
auth --enableshadow --passalgo=sha512
# Use CDROM installation media
cdrom
# Use graphical install
graphical
# Run the Setup Agent on first boot
firstboot --enable
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8

# Network information
network  --bootproto=dhcp --device=enp0s3 […]
network  --bootproto=dhcp --device=enp0s8 […]
network  --hostname=localhost.localdomain

# Root password
rootpw --iscrypted $6$BeVyWOTQ9PdmzOO3$ri2[…]
# System services
services --enabled="chronyd"
# System timezone
timezone America/New_York --isUtc
user --name=Christine --password=$6$LzENYr[…]
# System bootloader configuration
bootloader --append=" crashkernel=auto" --[…]

# Partition clearing information
clearpart --none --initlabel

%packages
@^minimal
@core
chrony
kexec-tools

%end

%addon com_redhat_kdump --enable --reserve-mb='auto'

%end

%anaconda
[…]
%end
#

Notice in Listing 29.4 that the root password and primary user password are stored in this file. Keep the file secured so that you do not compromise any of the virtual or physical systems it is used to bootstrap.

To create the kickstart file for a system installation, start with the anaconda file and copy it for your new machines; ks.cfg is typically used as the kickstart file name. After that, open the file in a text editor and make any necessary modifications.

Kickstart files use a special syntax, and unfortunately there is no man page describing it. Your best option is to open your favorite web browser and search for Kickstart Syntax Reference to find a Fedora or Red Hat documentation site.

Don’t let kickstart file typographical errors cause installation problems. Besides giving the file a team review, use the ksvalidator utility to find syntax issues in a kickstart file.
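A minimal sketch of that copy-and-edit workflow follows. The /tmp/demo directory and the file contents are stand-ins for a real /root/anaconda-ks.cfg, and the ksvalidator step is left as a comment since that utility may not be installed on every system.

```shell
#!/bin/sh
# Sketch: copy the anaconda file to the conventional ks.cfg name,
# then edit and validate it. /tmp/demo and the file contents are
# stand-ins so the steps can be exercised anywhere.
mkdir -p /tmp/demo && cd /tmp/demo

# Stand-in for the real anaconda file created at installation time.
printf '#version=DEVEL\nlang en_US.UTF-8\n' > anaconda-ks.cfg

cp anaconda-ks.cfg ks.cfg    # typical kickstart file name

# Open ks.cfg in a text editor and make your modifications, then
# check it for syntax issues:
#   ksvalidator ks.cfg
echo "ks.cfg staged in $(pwd)"
```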

Storing the Kickstart File

For regular physical system installations, typically a configured kickstart file is stored on removable media (such as a USB flash drive) or somewhere on the network, if you plan on using a PXE or TFTP boot process. For virtual machine creation, you can store it locally on the host system. In any case, make sure the file is properly protected.

Placing the Installation Source

The installation source is typically the ISO file you are employing to install the Linux distribution. However, you can also use an installation tree, which is a directory tree containing the extracted contents of an installation ISO file.

Often for a regular physical system, the ISO is stored on removable media or a network location. However, for a virtual machine, simply store the ISO or installation tree on the host system. You can even place it in the same directory as the kickstart file, as was done on the system shown in Listing 29.5.

Listing 29.5: Viewing the ks.cfg and ISO files’ location

# ls VM-Install/
ks.cfg  ubuntu-18.04.1-live-server-amd64.iso
#

Creating a Boot Medium

For a physical installation, the method and medium you choose depend on the various system boot options as well as the type of system on which you will be performing the installation. A simple method for servers that have the ability to boot from USB drives or DVDs is to store a bootable live ISO on one of these choices.

For a virtual machine installation, such as a virtual machine on a KVM hypervisor, there is no need to create a boot medium. This just gets easier and easier!

Kicking Off the Installation

After you have everything in place, start the kickstart installation. For a physical system, start the boot process, reach a boot prompt, and enter a command like linux ks=hd:sdc1:/ks.cfg, depending on your hardware environment, your bootloader, and the location of your kickstart file.

For a virtual system, you can simply employ the virt-install command, if it’s available on your host machine. Along with the other options used to create a virtual machine, you add two options similar to these:

--initrd-inject /root/VM-Install/ks.cfg
--extra-args="ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8"

If desired, you can create a shell script with a loop and have it reissue the virt-install command multiple times to create/install as many virtual machines as you need.
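Such a loop might be sketched as follows. The guest names are hypothetical, and only the two kickstart options shown above are included (a real command also needs the usual name, memory, disk, and installation-source options for your site). The helper prints the command when virt-install is not installed, so the loop logic can be checked anywhere.

```shell
#!/bin/sh
# Sketch: create several guests by reissuing virt-install in a loop.
# Guest names are hypothetical; the kickstart options match the two
# shown above, and the remaining virt-install options are omitted.
build_vm() {
    if command -v virt-install >/dev/null 2>&1; then
        virt-install --name "$1" \
            --initrd-inject /root/VM-Install/ks.cfg \
            --extra-args "ks=file:/ks.cfg"
    else
        echo "would run: virt-install --name $1 ..."
    fi
}

for n in 1 2 3; do
    build_vm "guestvm$n"
done
```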

Initializing with Cloud-init

Cloud-init is a Canonical product (from the same people who produce the Ubuntu distributions) that provides a way to bootstrap virtualized machines. Canonical describes it best on its cloud-init website, cloud-init.io: “Cloud images are operating system templates and every instance starts out as an identical clone of every other instance. It is the user data that gives every cloud instance its personality and cloud-init is the tool that applies user data to your instances automatically.”

The cloud-init service is written in Python and is available for cloud-based virtualization services, such as Amazon Web Services (AWS), Microsoft Azure, and Digital Ocean as well as with cloud-based management operating systems, like OpenStack. And your virtual machines don’t have to be in a cloud to use cloud-init. It can also bootstrap local virtual machines using VMM (hypervisor) products like VMware and KVM. In addition, it is supported by most major Linux distributions. It is called an industry standard for a reason.

Cloud-init allows you to configure the virtual machine’s hostname, temporary mount points, and the default locale. Even better, pre-generated OpenSSH keys can be installed to provide encrypted access to the virtualized system. Customized scripts can be employed to run when the virtual machine is bootstrapped. This is all done through what is called user-data, which is either a string of information or data stored in files, typically formatted in YAML (YAML Ain’t Markup Language).
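A small user-data sketch in the cloud-config format looks like the following; the hostname, user name, and key value are all hypothetical placeholders.

```yaml
#cloud-config
# Hypothetical user-data: all values below are placeholders.
hostname: webvm1
locale: en_US.UTF-8
users:
  - name: christine
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... christine@example.com
runcmd:
  - echo 'bootstrapped' >> /var/log/bootstrap.log
```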

If you would like to take a look at the cloud-init utility, you can install it via the cloud-init package (package installation was covered in Chapter 13). It is available on most major Linux distributions.

The /etc/cloud/cloud.cfg file is the primary cloud-init configuration file. The command-line utility name is, as you might suspect, the cloud-init command. An example of using cloud-init to get help is shown in Listing 29.6.

Listing 29.6: Employing the -h option to get help on the cloud-init command

$ cloud-init -h
usage: /usr/bin/cloud-init [-h] [--version] [--file FILES] [--debug] [--force]
 {init,modules,single,query,dhclient-hook,features,analyze,devel,
collect-logs,clean,status}
                           ...

optional arguments:
  -h, --help            show this help message and exit
  --version, -v         show program's version number and exit
  --file FILES, -f FILES
                        additional yaml configuration files to use
  --debug, -d           show additional pre-action logging (default: False)
  --force               force running even if no datasource is found (use at
                        your own risk)

Subcommands:
  {init,modules,single,query,dhclient-hook,features,analyze,devel,
collect-logs,clean,status}
    init                initializes cloud-init and performs initial modules
    modules             activates modules using a given configuration key
    single              run a single module
    query               Query standardized instance metadata from the command
                        line.
    dhclient-hook       run the dhclient hook to record network info
    features            list defined features
    analyze             Devel tool: Analyze cloud-init logs and data
    devel               Run development tools
    collect-logs        Collect and tar all cloud-init debug info
    clean               Remove logs and artifacts so cloud-init can re-run.
    status              Report cloud-init status or wait on completion.
$

 

You would only employ the cloud-init command on a host machine, where virtual machines are created. For cloud-based virtualization services, such as AWS or Microsoft Azure, you provide the user-data file or information via their virtualization management interface to bootstrap newly created virtual machines.

Exploring Storage Issues

It is easy to forget that your virtual machines don’t use real disks. Instead, their virtual disks are simply files on a physical host’s disk. Depending on the VMM (hypervisor) employed and the set configuration, a single virtual disk may be represented by a single physical file or multiple physical files.

When setting up your virtual system, it is critical to understand the various virtual disk configuration options. The choices you make will directly affect the virtual machine’s performance. Virtualization services and products use a few terms you need to understand before making these configuration choices.

Provisioning When a virtual machine is created, you choose the amount of disk storage. However, it is a little more complicated than simply selecting the size. Virtual disks are provisioned either thinly or thickly.

Thick provisioning is a static setting where the virtual disk size is selected and the physical file(s) created on the physical disk are pre-allocated. Thus, if you select 700GB as your virtual disk size, 700GB of space is consumed on the physical drive. Some hypervisors (VMMs) have variations of thick provisioning, such as Lazy Zeroed Thick, that have performance implications.

A thin-provisioned disk grows dynamically, so the hypervisor consumes only the amount of physical disk space actually used by the virtual drive. Thus, if you select 700GB as your virtual disk size but only 300GB of data is written to the virtual drive, only 300GB of space is consumed on the physical drive. As more data is written to the virtual drive, more space is used on the physical drive, up to the 700GB setting. Be aware that the reverse is not necessarily true: when you delete data from the virtual drive, space is not automatically freed on the physical drive.

Thin provisioning is often done to allow overprovisioning. In this scenario, more disk space is assigned virtually than is available physically. The idea is that you can scale up the physical storage as needed. For example, if the host system is using LVM (covered in Chapter 11), additional physical volumes are added to the volume group as needed to meet virtual machine demand.
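The thin behavior can be illustrated with an ordinary sparse file, which is one way a host can back a thin virtual disk (the file name here is hypothetical): the apparent size is the full provisioned amount, while actual disk usage stays near zero until data is written.

```shell
#!/bin/sh
# Illustrate thin provisioning with a sparse file: the apparent
# (provisioned) size is large, but actual disk usage stays near zero
# until data is written to the file.
img=/tmp/thin-disk.img

truncate -s 1G "$img"              # provisioned size: 1GB

apparent=$(stat -c %s "$img")      # bytes the guest would see
actual=$(du -B1 "$img" | cut -f1)  # bytes really consumed on disk

echo "apparent=$apparent actual=$actual"
```

Deleting data from inside a guest leaves the backing file's usage unchanged, which matches the caveat about freeing physical space.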

Persistent Volumes The term persistent volume is used by many virtualization products, such as OpenStack and Kubernetes. In essence, a virtualized persistent volume is similar to a physical disk in how it operates. Data is kept on the disk until the system or user overwrites it. The data stays on the disk, whether the virtual machine is running or not, and with some virtualization products, it can remain even after the virtual machine using it is destroyed.

Blobs Blob storage is a Microsoft Azure cloud platform term. Blob storage is large unstructured data, which is offered over the Internet and can be manipulated with .NET code. Typically, a blob consists of images, streaming video and audio, big data, and so on.

Blob data items are grouped together into a container for a particular user account and can be one of three different types:

  • Block blobs are blocks of text and binary data. The blobs are not managed as a group but instead are handled independently of one another. Their size limit is 4.7TB.
  • Append blobs are also blocks of text and binary data. However, their storage is enhanced to allow for efficient appending operations. Thus, this blob type is often used for logging data.
  • Page blobs are simply random access files, which can be up to 8TB in size. They are also used as virtual disks for Azure virtual machines.

Considering Network Configurations

Applications on physical systems are able to reach the outside world via a network interface card (NIC) and an attached network. With virtualized systems and virtualized networks, the landscape is a little different. Virtualized machines can have any number of virtualized NICs, the hypervisor may provide virtualized internal switches, and the NIC configuration choices are plentiful. The right configuration results in higher network and application performance as well as increased security.

Virtualizing the Network

Network virtualization has been evolving over the last few years. While it used to mean simply the virtualization of switches and routers running at OSI layers 2 and 3, it can now incorporate firewalls, server load balancing, and more at higher OSI layers. Some cloud providers are even offering Network as a Service (NaaS).

Two basic network virtualization concepts are virtualized local area networks (VLANs) and overlay networks.

VLAN To understand a VLAN, it is best to start with a local area network (LAN) description. Systems and various devices on a LAN are typically located in a small area, such as an office or building. They share a common communications line or wireless link, are often broken up into different network segments, and their network traffic travels at relatively high speeds.

A VLAN also consists of systems and various devices on a LAN. However, this group of systems and devices can be physically located across various LAN subnets. Instead of physical locations and connections, VLANs are based on logical and virtualized connections, with messages broadcast at layer 2. Routers, which operate at layer 3, route traffic between the VLANs.

Overlay Network An overlay network is a network virtualization method that uses encapsulation and communication channel bandwidth tunneling. A network’s communication medium (wired or wireless) is virtually split into different channels. Each channel is assigned to a particular service or device. Packets traveling over the channels are first encapsulated inside another packet for the trip. When the receiving end of the tunneled channel gets the encapsulated packet, the packet is removed from its capsule and handled.

With an overlay network, applications manage the network infrastructure. Besides the typical network hardware, this network type employs virtual switches, tunneling protocols, and software-defined networking (SDN). Software-defined networking is a method for controlling and managing network communications via software. It consists of an SDN controller program as well as two application programming interfaces called northbound and southbound. Other applications on the network see the SDN as a logical network switch.

Overlay networks offer better flexibility and utilization than non-virtualized network solutions. They also reduce costs and provide significant scalability.

Configuring Virtualized NICs

Virtual NICs (adapters) are sometimes directly connected to the host system’s physical NIC. Other times they are connected to a virtualized switch, depending on the configuration and the employed hypervisor. An example of a VM’s adapter using a virtual switch is shown in Figure 29.2.


Figure 29.2 Virtual machine using a virtual switch

When configuring a virtual machine’s NIC, you have lots of choices. It is critical to understand your options in order to make the correct selections.

Host-Only A host-only adapter (sometimes called a local adapter) connects to a virtual network contained within the virtual machine’s host system. There is no connection to the external physical (or virtual) network to which the host system is attached.

The result is speed. If the host system has two or more virtual machines, network traffic between the VMs is quite fast, because it does not travel along wires or through the air but instead takes place in the host system’s RAM.

This configuration also provides enhanced security. A virtual proxy server is a good example. Typically, a proxy server is located between a local system and the Internet. Any web requests sent to the Internet by the local system are intercepted by the proxy server that then forwards them. It can cache data to enhance performance, act as a firewall, and provide privacy and security. One virtual machine on the host can act as a proxy server utilizing a different NIC configuration and have the ability to access the external network. The other virtual machine employing the host-only adapter sends/receives its web requests through the VM proxy server, increasing its protection.

Bridged A bridged NIC makes the virtual machine act as a node on the LAN or VLAN to which the host system is attached. The VM gets its own IP address and can be seen on the network.

In this configuration, the virtual NIC is connected to a host machine’s physical NIC. It transmits its own traffic to/from the external physical (or virtual) network.

This configuration is employed in the earlier virtual proxy server example. The proxy server’s NIC is configured as a bridged host, so it can reach the external network and operate as a regular system node on the network.

NAT A network address translation (NAT) adapter configuration operates in a way that’s similar to how NAT operates in the physical world. NAT in the physical networking realm uses a network device, such as a router, to “hide” a LAN computer system’s IP address when that computer sends traffic out onto another network segment. All the other LAN systems’ IP addresses are translated into a single IP address to other network segments. The router tracks each LAN computer’s external traffic, so when traffic is sent back to that system, it is routed to the appropriate computer.

With virtualization, the NAT table is maintained by the hypervisor instead of a network device. Also, the IP address of the host system is employed as the single IP address that is sent out onto the external network. Each virtual machine has its own IP address within the host system’s virtual network.

Physical and virtual system NAT has the same benefits. One of those is enhanced security by keeping internal IP addresses private from the external network.

Dual-Homed In the physical world, a dual-homed system (sometimes called a multi-homed system) is a computer that has two or more active network adapters. Often a physical host is configured with multiple NICs. This configuration provides redundancy: if one physical NIC goes bad, the load is handled by the others. In addition, it provides load balancing of external network traffic.

In the virtual world, many virtual machines are dual-homed or even multi-homed, depending on the virtual networking environment configuration and goals. Looking back to our virtual proxy server example, it is dual-homed, with one internal network NIC (host-only) to communicate with the protected virtual machine, and it has a bridged adapter to transmit and receive packets on the external network. Figure 29.3 shows the complete network picture of this virtual proxy server.


Figure 29.3 Virtual proxy server

The physical and virtual machine network adapter configuration has performance and security implications. Understanding your internal virtual and external physical networks and goals is an important part of making these choices.

Summary

Configuring your cloud and/or virtualization environment requires knowledge of networking and storage options. In addition, you need to understand how to quickly boot large numbers of virtual machines as well as bootstrap new ones. Knowing some of the various virtual and cloud machine tools is important as well. With a firm grasp of these concepts, you can participate in cloud and virtual system planning teams that successfully migrate a company’s physical systems to a more modern and cost-effective environment.

Exam Essentials

Describe various VM tools. The libvirt library is a popular software collection of virtualization management components. It includes an API, a daemon (libvirtd), and command-line utilities, such as virt-install and virsh. The virsh shell is one such tool provided by the libvirt library that allows you to manage a system’s virtual machines. The Virtual Machine Manager (also called vmm) is a lightweight desktop application for creating and managing virtual machines. It can be initiated from the command line by issuing the virt-manager command in a terminal emulator.

Explain bootstrapping utilities. The kickstart installation method employs a kickstart file that contains all the bootstrap choices desired for a new system. Instead of starting from scratch, the anaconda file, /root/anaconda-ks.cfg, is available on Red Hat–based distros and can be modified to configure a kickstart file. Ubuntu distributions do not employ the kickstart installation method by default. Instead they use a bootstrapping product named preseed. openSUSE distros also have their own alternative, which is AutoYaST. The Canonical product, cloud-init, is a bootstrap utility that is available for local virtual machines as well as cloud-based ones.

Detail different virtual storage options. Virtual disks can be provisioned either thick or thin. Thick provisioning is a static setting in which the virtual disk size is selected up front and the physical file(s) backing it on the physical disk are pre-allocated. A thin-provisioned disk grows dynamically, so the hypervisor consumes only the amount of physical disk space actually used by the virtual drive. Drives can be either persistent or temporary. Temporary volumes are discarded when the virtual machine is stopped, while persistent disks are kept not only when the VM is shut down, but sometimes even after it is deleted. Blob storage refers to unstructured data storage offered on the Microsoft Azure cloud platform. This storage typically consists of images, streaming video and audio, big data, and so on. There are three blob types: block, append, and page.
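With KVM/QEMU guests, for example, the qemu-img utility can create either provisioning style. This transcript is a sketch (the file names are hypothetical, and the sizes reported on your system will vary), but it illustrates the difference in physical space consumed for the same 20GB virtual disk:

$ qemu-img create -f qcow2 -o preallocation=full thick.qcow2 20G
$ qemu-img create -f qcow2 thin.qcow2 20G
$ du -h thick.qcow2 thin.qcow2
21G     thick.qcow2
196K    thin.qcow2

Both files present a 20GB disk to the guest, but the thin-provisioned file occupies almost no physical space until the guest actually writes data.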

Summarize virtual network configurations. One network type is an overlay network, a network virtualization method that uses encapsulation and communication channel bandwidth tunneling. Besides the typical network hardware, this network type employs virtual switches, tunneling protocols, and software-defined networking (SDN). Network interface cards (NICs) also have many virtualized configuration options. A dual-homed virtual machine has two virtualized NICs. A host-only adapter connects to a virtual network contained within the virtual machine's host system, with no connection to the external network. A bridged NIC makes the virtual machine act like a node on the network to which the host system is attached. A NAT adapter creates a virtualized NAT router for the VM.
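On a libvirt-based host, for instance, a NAT adapter is backed by a virtual network definition such as the default one sketched below (you can view your host's actual definition with virsh net-dumpxml default). The addresses shown are libvirt's usual defaults and may differ on your system:

<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

Here the host creates the virbr0 virtual switch, hands out private addresses to guests via DHCP, and NATs their traffic out through the host's physical NIC.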

Review Questions

  1. Which of the following is true concerning the libvirt library software collection? (Choose all that apply.)

    1. Provides an API library for hypervisors
    2. Provides a complete hypervisor (VMM) application
    3. Provides the virsh and virsh-install utilities
    4. Provides the anaconda file used for bootstrapping
    5. Provides the libvirtd daemon for host systems
  2. Carol wants to automate the management of her virtual machines via a Bash shell script. Which of the following utilities can she use in this script? (Choose all that apply.)

    1. virsh
    2. virtinst
    3. virt-manage
    4. virt-install
    5. setvcpus
  3. Nick is setting up a bootstrapping process on a RHEL system. He needs to store the installation tree. Which of the following are locations where he could store it? (Choose all that apply.)

    1. Network location
    2. USB flash drive
    3. On AutoYaST
    4. Within the preseed directory
    5. With the kickstart file
  4. Which of the following is true concerning the cloud-init product? (Choose all that apply.)

    1. It was created and maintained by Microsoft.
    2. It is usable by cloud-based virtualization services.
    3. It is usable by cloud-based management operating systems.
    4. It is supported by most major Linux distributions.
    5. It is a bootstrap product.
  5. Ms. Danvers is designing a set of virtual machines for her company, Miracle. Currently, her host machine uses LVM but only has enough disk space for 1TB of data. Her three VMs will need 200GB of disk space immediately but are projected to grow to 300GB each within the next year. What should she do?

    1. Configure the three VMs to use persistent storage.
    2. Configure the three VMs to use temporary storage.
    3. Configure the three VMs to use thick provisioned storage.
    4. Configure the three VMs to use thin provisioned storage.
    5. Configure the three VMs to use blob storage.
  6. Mr. Fury is a programming professor at Galactic University. This next semester he has chosen to use virtual machines for his students’ labs. The students will be creating a single program that they work on throughout the entire semester. What is the best choice of disk storage for Mr. Fury’s student virtual machines?

    1. Persistent storage
    2. Temporary storage
    3. Thick provisioned storage
    4. Thin provisioned storage
    5. Blob block storage
  7. Which of the following is true about an overlay network? (Choose all that apply.)

    1. It is a storage virtualization method.
    2. It is a network virtualization method.
    3. It is a method that employs encapsulation.
    4. It is a method that employs bandwidth tunneling.
    5. It is a method that employs page blobs.
  8. Carol needs her virtual machines to all act as nodes on her host machine’s LAN and get their own IP address that they will use to send/receive network traffic. Which virtual NIC type should she configure on them?

    1. Host-only
    2. Bridged
    3. NAT
    4. Multi-homed
    5. Dual-homed
  9. Ms. Danvers wants her three virtual machines' IP addresses to be kept private, but she also wants them to communicate on the host machine's network using its IP address. Which virtual NIC type should she configure on them?

    1. Host-only
    2. Bridged
    3. NAT
    4. Multi-homed
    5. Dual-homed
  10. Nick has created five virtual machines on his host system. One virtual machine is employed as a firewall for the other four machines, which are confined with host-only adapters. The firewall VM operates on the host system's network as a node. Which of the following describe his firewall adapter configuration? (Choose all that apply.)

    1. Host-only
    2. Bridged
    3. NAT
    4. Multi-homed
    5. Dual-homed