CHAPTER 6

Virtualization and the Cloud

In this chapter, you will learn about

•   Benefits of virtualization in a cloud environment

•   Virtual resource migrations

•   Migration considerations

•   Software defined

Virtualization is the key building block to cloud computing, and it is used by cloud providers to offer services to cloud consumers. Virtualization is the component that makes it possible for cloud services to provide a scalable, elastic, and on-demand environment. For example, a cloud provider may have thousands of hypervisors. When a cloud consumer requests a new server, the cloud provider provisions a new VM on one of those hypervisors. No new physical hardware needs to be put in place to service the request.

Virtualization allows an organization to efficiently scale its computing environment both up and down to meet its needs. When combined with cloud computing, virtualization takes advantage of the seemingly unlimited computing resources provided externally by a cloud provider to deliver flexible and scalable solutions.

Virtualization will continue to play a significant role in cloud computing, as it is the technology that allows a cloud provider to deliver low-cost hosting environments to organizations no matter the size of the enterprise.

Benefits of Virtualization in a Cloud Environment

Cloud computing and virtualization go hand in hand. Virtualization makes cloud computing more efficient and easier to manage. Virtualization consolidates many physical servers into VMs running on fewer physical servers functioning as hosts. Through virtualization, a single host can run many guest operating systems and multiple applications instead of a single application on each server. Virtualization reduces the number of servers needed to host IT services, which in turn lessens rack space, power consumption, and administration.

Virtualization transforms compute resources into a centralized, sharable pool of resources that an organization can allocate to its business units on demand while still maintaining control of resources and applications.

Shared Resources

Cloud computing can provide compute resources as a centralized pool that is shared among cloud consumers. Shared resources are distributed on an as-needed basis to the cloud consumer. Thus, sharing resources improves efficiency and reduces costs for an organization.

Virtualization helps to simplify the process of sharing compute resources. As we discussed in Chapter 5, virtualization also increases the efficiency of hardware utilization. The cloud, on the other hand, adds a layer of management that allows a VM to be created quickly and scaled to meet the demands of the organization.

Figure 6-1 shows an example of how shared resources are configured.

Images

Figure 6-1  An illustration of shared resources in a cloud environment

Elasticity

Elastic computing allows compute resources to vary dynamically to meet a variable workload. A primary reason organizations implement a cloud computing model is the ability to dynamically increase or decrease the compute resources of their virtual environment.

A cloud provider can support elasticity by using resource pooling. Resource pooling allows compute resources to be pooled to serve multiple consumers by using a multitenant model. Resource pooling can provide a unique set of resources to cloud consumers so that physical and virtual resources can be dynamically assigned and reassigned based on cloud consumer demands.

With cloud computing and elasticity, the time to add or remove cloud resources and the time it takes to implement an application can both be drastically reduced. When an organization implements cloud computing and virtualization, it can quickly provision a new server to host an application and then provision that application, which in turn reduces the time it takes to implement new applications and services.

Elasticity allows an organization to scale resources up and down as an application or service requires. In this scenario, the organization becomes a cloud consumer, and the resources in the cloud appear to the consumer to be infinite, allowing the organization to consume as many or as few resources as it requires. With this new scalable and elastic computing model, an organization can respond to compute resource demands in a quick and efficient manner, saving it time and money. Not only can a cloud consumer dynamically scale the resources it needs, but it can also migrate its applications and data between cloud providers. With the cloud, an organization can deploy applications to any cloud provider, making its applications both portable and scalable.

While virtualization alone could provide many of these same benefits of elasticity and scalability, it would rely on compute resources purchased and owned by the organization rather than leased from a seemingly infinite resource like a cloud provider.

Another benefit of combining cloud computing and virtualization is the ability to self-provision virtual systems. An IT department in a cloud computing model can grant permissions that give users in other departments the ability to self-provision VMs. The IT department still controls how the VM is created and what resources are provided to that VM without actually having to create it. The IT department can even charge back to, or simply keep track of, the users who create VMs, making the users accountable for whether they actually need the machine and the resources it requires.

Images

EXAM TIP   Elasticity allows an organization to quickly and easily scale the virtual environment both up and down as needed.

Network and Application Isolation

As discussed previously, cloud computing and virtualization can enhance network security, increase application agility, and improve the scalability and availability of the environment. Cloud computing can also help to create network and application isolation.

Without network isolation, it might be possible for a cloud consumer to intentionally or unintentionally consume a significant share of the network fabric or see another tenant’s data in a multitenant environment. Proper configuration of the network to include resource control and security using network isolation helps to ensure these issues are mitigated.

There are also circumstances where specific network traffic needs to be isolated to its own network to provide an initial layer of security, to afford higher bandwidth for particular applications, to enforce chargeback policies, or for use in tiered networks.

Virtualization and cloud computing now provide a means to isolate an application without having to deploy a single application to a single physical server. By combining virtualization and network isolation, it is possible to isolate an application just by correctly configuring a virtual network. Multiple applications can be installed on one physical server, and then a given application can be isolated so that it can communicate only with network devices on the same isolated segment.

For example, you can install an application on a VM that is the same version or a newer version of an existing application, yet have that install be completely isolated to its own network for testing. The ability of an organization to isolate an application without having to purchase additional hardware is a crucial factor in the decision to move to virtualization and cloud computing.

Images

EXAM TIP   Virtualization makes it possible for an application to be installed on a VM and be isolated from other network devices. This feature is typically utilized in the entry-level stages of testing applications because the identical environment running in the IT department can be easily replicated.

Infrastructure Consolidation

Virtualization allows an organization to consolidate its servers and infrastructure by allowing multiple VMs to run on a single host computer and even providing a way to isolate a given application from other applications that are installed on other VMs on the same host computer. Cloud computing can take it a step further by allowing an organization not only to benefit from virtualization but also to purchase compute resources from a cloud provider. If an organization purchases its compute resources from a cloud provider, it requires fewer hardware resources internally.

Cost Considerations

Consolidating an organization’s infrastructure using virtualization and cloud compute resources results in lower costs to the organization, since it no longer needs to provide the same power, cooling, administration, and hardware that would be required without virtualization and cloud computing. The organization can realize additional cost savings in reduced time spent on maintaining the network environment, since the consolidated infrastructure is often easier to manage and maintain.

Energy Savings

Consolidating an organization’s infrastructure using virtualization and cloud compute resources results in lower energy consumption to the organization, since it no longer needs to provide the same power to equipment that was virtualized or replaced by cloud compute resources. Less hardware also results in reduced cooling needs and less square footage used in an office space.

Dedicated vs. Shared Compute Environment

A dedicated compute environment offers consistent performance because the organization does not need to contend with other tenants for compute resources. However, a dedicated compute environment is more expensive to lease than a shared compute environment because the cloud provider cannot distribute the costs for the compute resources over as many tenants.

Dedicated resources may be a requirement for some regulated industries or for companies with specific data handling or contractual isolation requirements.

Virtual Data Center Creation

Another option an organization has regarding infrastructure consolidation is a virtual data center. A virtual data center offers data center infrastructure as a service and is the same concept as a physical data center with the advantages of cloud computing mixed in.

A virtual data center offers compute resources, network infrastructure, external storage, backups, and security, just like a physical data center. A virtual data center also offers virtualization, pay-as-you-grow billing, elasticity, and scalability. An administrator can control the virtual resources by using quotas and security profiles.

A cloud user of a virtual data center can create virtual servers and host applications on those virtual servers based on the security permissions assigned to their user account. It is also possible to create multiple virtual data centers based on either geographic or application isolation requirements.

Virtual Resource Migrations

Now that you understand how cloud computing benefits from virtualization, you need to know how to migrate an organization’s current resources into either a virtual environment or a cloud environment.

Migrating servers to a virtual or cloud environment is one of the first steps in adopting a cloud computing model. Organizations do not want to start from scratch when building a virtual or cloud environment; they want the ability to migrate what is in their current data center to a cloud environment.

With the advancements in virtualization and consolidated infrastructures, organizations now see IT resources as a pool of resources that can be managed centrally, not as a single resource. IT administrators can now quickly move resources across the network from server to server; from data center to data center; or into a private, public, or hybrid cloud, giving them the ability to balance resource and compute loads more efficiently across multiple, even global, environments.

This section explains the different options for migrating an organization’s current infrastructure to a virtual or cloud environment.

Virtual Machine Templates

When an organization is migrating its environment to the cloud, it needs to have a standardized installation policy or profile for its virtual servers. The virtual machines (VMs) need to have a similar base installation of the operating system so that all the machines have the same security patches, service packs, and base applications installed.

VM templates provide a streamlined approach to deploying a fully configured base server image or even a fully configured application server. VM templates help decrease installation and configuration effort when deploying VMs and lower ongoing maintenance costs, allowing for faster deployment times and lower operational costs.

Images

EXAM TIP   VM templates create a standardized set of VM configuration settings that allow for quick deployment of one or multiple VMs.

A VM template can be exported from one virtualization host and then imported on another virtualization host and be used as a master VM template for all virtualization hosts.

VM templates provide a standardized group of hardware and software settings that can repeatedly be reused to create new VMs that are configured with those specified settings. For example, a VM template can be defined to create a VM with 8192MB of memory, four vCPUs, two vGPUs, and three virtual hard disks. Alternatively, a VM template can be set up based on an existing, fully configured VM.
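
As a rough illustration of this idea, the following Python sketch models a template as a reusable set of settings that stamps out identically configured VMs. The Template class and deploy_vm function are hypothetical names used for illustration only; they are not part of any particular hypervisor's API.

# A minimal sketch of how a VM template captures reusable settings.
# Template and deploy_vm are hypothetical names, not a vendor API.
from dataclasses import dataclass, field

@dataclass
class Template:
    memory_mb: int = 8192
    vcpus: int = 4
    vgpus: int = 2
    disks: list = field(default_factory=lambda: ["disk0.vhdx", "disk1.vhdx", "disk2.vhdx"])
    base_image: str = "base-server-patched.vhdx"

def deploy_vm(name: str, template: Template) -> dict:
    """Create a new VM definition from the template's standardized settings."""
    return {
        "name": name,
        "memory_mb": template.memory_mb,
        "vcpus": template.vcpus,
        "vgpus": template.vgpus,
        "disks": list(template.disks),
        "image": template.base_image,
    }

web_template = Template()
vm1 = deploy_vm("web01", web_template)
vm2 = deploy_vm("web02", web_template)   # identical settings, different VM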

In essence, a VM template acts as a master image that an organization can use to quickly and efficiently deploy similar VM instances in its environment. The organization can then maintain the VM templates by applying operating system updates and application patches so that any new VM instances that are created with the template are up to date and ready to use instantly. Figure 6-2 displays a graphical representation of how VM templates work.

Images

Figure 6-2  Representation of a VM template

Physical to Virtual

Along with creating new VMs and provisioning those VMs quickly and efficiently using VM templates, there will be occasions when an organization needs to convert a physical server to a virtual server. The process of creating a VM from a physical server is called physical to virtual (P2V). Figure 6-3 illustrates how a P2V migration works.

Images

Figure 6-3  A graphical representation of physical-to-virtual (P2V) migration

P2V enables the migration of a physical server’s operating system, applications, and data to a newly created guest VM on a host computer. There are three different ways to convert a physical server to a virtual server:

•   Manual  You can manually create a new VM on a host computer and copy all the operating system files, applications, and data from the source physical server. The manual process is time-consuming and not very efficient.

•   Semiautomated  A semiautomated P2V approach uses a software tool to assist in the migration from a physical server to a virtual server. This simplifies the process and gives the administrator some guidance when migrating the physical server. There are also free software tools that help migrate a physical server to a virtual server.

•   Fully automated  The fully automated version uses a software utility that can migrate a physical server over the network without any assistance from an administrator.

Migrating a physical server to a VM can be done either online or offline. With an online migration, the physical computer or source computer remains running and operational during the migration. One of the advantages of the online option is that the source computer is still available during the migration process. This may not be a big advantage, however, depending on the application that is running on the source computer.

When doing an offline P2V conversion, the source computer is taken offline during the migration process. An offline migration provides for a more reliable transition, since the source computer is not being utilized. For example, if you are doing a migration of a database server or a domain controller, it would be better to do the migration offline, since the system is constantly being utilized.

Before migrating a physical machine to a VM, it is always advisable to check with the application vendor to make sure it supports the hardware and application in a virtual environment.

Virtual to Virtual

Similar to P2V, virtual to virtual (V2V) is the process of migrating an operating system, applications, and data, but instead of migrating them from a physical server, they are migrated from a virtual server.

Just like for P2V, software tools are available to fully automate a V2V migration. V2V can be used to copy or restore files and programs from one VM to another. It can also convert a VMware VM to a Hyper-V–supported VM or vice versa.

If the conversion is from VMware to Hyper-V, the process creates a .vhdx file and copies the contents of the .vmdk file to the new .vhdx file so that the VM can be supported in Hyper-V.
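
One way to perform such a disk-format conversion is with a utility like qemu-img, which can read .vmdk files and write .vhdx files. The following is a minimal sketch, assuming qemu-img is installed and on the PATH and using placeholder file names; commercial V2V tools wrap this kind of conversion in a more complete workflow.

# Sketch: converting a VMware disk (.vmdk) to a Hyper-V disk (.vhdx) with qemu-img.
# Assumes qemu-img is installed and on the PATH; file names are placeholders.
import subprocess

source = "webserver.vmdk"        # disk attached to the VMware VM
destination = "webserver.vhdx"   # disk format Hyper-V expects

subprocess.run(
    ["qemu-img", "convert", "-f", "vmdk", "-O", "vhdx", source, destination],
    check=True,  # raise an error if the conversion fails
)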

The Open Virtualization Format (OVF) is a platform-independent, extensible, open packaging and distribution format for VMs. OVF allows for efficient and flexible distribution of applications, making VMs portable between vendors because the packaged application is vendor- and platform-neutral. An OVF-packaged VM can be deployed on any virtualization platform that supports the standard. Similarly, an Open Virtual Appliance (OVA) packages the OVF contents into a single archive file, producing a virtual appliance that can be used in a variety of hypervisors from different vendors.

Client Tool Changes when Migrating or Upgrading

You may need to upgrade the client tools on a VM to a newer version if you move the VM to a host with a newer hypervisor version or upgrade the hypervisor software. Virtual machine client tools provide many features to the virtual machine. In addition to heartbeat connections, client tools offer the ability to take snapshots, synchronize the virtual machine clock with the host, direct data transfers from host to virtual machine, and send a remote shutdown or restart command to the virtual machine from the hypervisor without connecting to the virtual machine directly. Client tools need to be installed on the virtual machine in order for these features to work, and their version must match up with the hypervisor software version to support full functionality.

Virtual to Physical

The virtual-to-physical (V2P) migration process is not as simple as a P2V. A variety of tools are needed to convert a VM back to a physical machine. Here is a three-step process for doing a V2P conversion:

1.   Generalize the VM security identifiers. Install and run Microsoft Sysprep on the VM to prepare the image for transfer and allow for hardware configuration changes.

2.   Gather drivers. Prepare all the drivers for the target physical server before doing the migration.

3.   Convert using a third-party tool. Use a software tool such as Symantec Ghost or Acronis Universal Restore to facilitate the virtual-to-physical conversion and load the necessary hardware drivers onto the physical machine.

While a V2P conversion is not something that is often done, sometimes it is required, for a couple of different reasons. One of the reasons is to test how the application performs on physical hardware. Some applications may perform better on physical hardware than on virtual hardware. This is not a common circumstance, however, and it is fairly easy to increase the compute resources for a VM to improve the performance of an application that is hosted there.

The more common reason to perform a V2P is that some application vendors do not support their product running in a virtual environment. Today almost all vendors do support their application in a virtual environment, but there are still a few who do not. This fact and the complexities of V2P over P2V make V2P a less common scenario. Unlike the P2V process, which requires only the software tool to do the migration, the V2P process involves more planning and utilities and is much more complex.

Physical to Physical

The physical-to-physical (P2P) migration process is used to convert one physical system to another. This is a common practice when upgrading the server hardware. IT administrators would do the P2P conversion to the new hardware and then retire the original machine. The process for P2P conversions is mostly the same as the V2P conversion:

1.   Generalize the source system security identifiers. Install and run Microsoft Sysprep on the source system to prepare the image for transfer and allow for hardware configuration changes.

2.   Gather drivers. Prepare all the drivers for the target physical server before doing the migration.

3.   Image the source system using a third-party tool. Use a software tool such as Symantec Ghost or Acronis to image the source.

4.   Restore the image to new hardware. Boot the new system to the imaging software, point it to the image, and run the restore operation. The imaging tool should include a function to load the necessary hardware drivers onto the physical machine. For example, this is called Universal Restore in Acronis.

Virtual Machine Cloning

Whether an organization creates a VM from scratch or uses one of the migration methods we just discussed, at some point, it might want to make a copy of that VM, called a clone.

Installing a guest operating system and all applications is a time-consuming process, so VM cloning makes it possible to create one or multiple copies of a VM or a VM template. Clones can also be used to create VM templates from existing machines.

When a company creates a VM clone, it is creating an exact copy of an existing VM. The existing VM then becomes the parent VM of the VM clone. After the clone is created, it is a separate VM that can share virtual disks with the parent VM or create its own separate virtual disks.

Once the VM clone is created, any changes made to the clone do not affect the parent VM and vice versa. A VM clone’s MAC address and universally unique identifier (UUID) are different from those of the parent VM.
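
The following sketch illustrates how a cloning workflow might assign a fresh identity to the copy: a new UUID plus a randomly generated, locally administered MAC address. This is purely illustrative; hypervisors generate these values through their own mechanisms.

# Sketch: a clone keeps the parent's disks and configuration but must receive
# its own identity. Generate a fresh UUID and a locally administered MAC:
import random
import uuid

def new_clone_identity():
    clone_uuid = str(uuid.uuid4())
    # Set the locally administered bit and clear the multicast bit in the
    # first octet so the address will not collide with vendor-assigned MACs.
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] & 0b11111100) | 0b00000010
    mac = ":".join(f"{o:02x}" for o in octets)
    return clone_uuid, mac

print(new_clone_identity())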

VM cloning allows for deploying multiple identical VMs to a group. This is useful in a variety of situations. For example, the IT department might create a clone of a VM for each employee, and that clone would contain a group of preconfigured applications. Or the IT department might want to use VM cloning to create a development environment. A VM could be configured with a complete development environment and cloned multiple times to create a baseline configuration for testing new software and applications.

Images

EXAM TIP   VM clones provide an efficient way to create a copy of a VM to quickly deploy a development environment.

Virtual Machine Snapshots

A VM snapshot captures the state of a VM at the specific time that the snapshot is taken. A VM snapshot can be used to preserve the state and data of a VM at a specific point in time. Reverting to a snapshot is extremely quick compared to restoring from a backup.

It is common for snapshots to be taken before a major software installation or other maintenance. If the work fails or causes issues, the VM can be restored to the state it was in when the snapshot was taken in a very short amount of time.

A snapshot includes the state the VM is in when the snapshot is created. So if a VM is powered off when the snapshot is created, the snapshot will be of a powered-off VM. However, if the VM is powered on, the snapshot will contain the RAM and current state, so that restoring the snapshot will result in a running VM at the point in time of the snapshot. The snapshot includes all the data and files that make up the VM, including hard disks, memory, and virtual network interface cards.

Multiple snapshots can be taken of a VM. A series of snapshots is organized into a snapshot chain. Each snapshot keeps a delta file of all the changes made after the snapshot was taken. The delta file records the differences between the current state of the virtual disk and the state the VM was in when the snapshot was taken.
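
The following conceptual sketch models a snapshot chain as a base disk plus ordered delta files; reverting to a snapshot simply discards the changes recorded after it was taken. Real hypervisors store chains in their own on-disk formats, so treat this only as an illustration of the bookkeeping.

# Conceptual sketch of a snapshot chain: a base disk plus delta files that
# record changes made after each snapshot. Illustrative only.
class SnapshotChain:
    def __init__(self, base_disk):
        self.base_disk = base_disk
        self.deltas = []                     # ordered oldest -> newest

    def take_snapshot(self, label):
        self.deltas.append({"label": label, "changes": []})

    def write(self, change):
        # Changes land in the newest delta (or the base disk if none exist).
        if self.deltas:
            self.deltas[-1]["changes"].append(change)

    def revert_to(self, label):
        # Discard every change recorded after the named snapshot was taken.
        for i, delta in enumerate(self.deltas):
            if delta["label"] == label:
                self.deltas = self.deltas[: i + 1]
                self.deltas[-1]["changes"] = []
                return
        raise ValueError(f"No snapshot named {label}")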

Clones vs. Snapshots

Clones and snapshots have different uses, and it is important not to confuse their use cases. VM cloning is used when you want to make a separate copy of a VM for either testing, separate use, or archival purposes.

However, if you are looking to save the current state of a VM so that you can revert to that state in case of a software installation failure or an administrative mistake, you should create a VM snapshot, not a VM clone.

Storage Migration

Storage migration is the process of transferring data between storage devices. Storage migration can be automated or done manually. Storage migration makes it possible to migrate a virtual machine’s storage or disks to a new location and across storage arrays while maintaining continuous availability and service to the VM. It also allows for migrating a VM to a different storage array without any downtime to the VM. Figure 6-4 displays how storage is migrated between storage devices.

Images

Figure 6-4  Using storage migration in a virtual environment

Storage migration eliminates service disruptions to a VM and provides a live and automated way to migrate the virtual machine’s disk files from the existing storage location to a new storage destination. Migrating VM storage to different storage classes is a cost-effective way to manage VM disks based on usage, priority, and need. It also provides a way to take advantage of tiered storage, which we discussed in Chapter 2.

Storage migration allows a VM to be moved from SAN-based storage to NAS, DAS, or cloud-based storage according to the VM’s current needs. Storage migration helps an organization prioritize its storage and the VMs that access and utilize that storage.

Block, File, and Object Migration

Storage migration can differ based on the type of data being moved. Block storage is migrated using tools on the source storage system or cloud storage platform that map the block volumes to the destination cloud. File migration can be configured at the storage layer (using storage or cloud software), similar to block storage migration, or it can be performed within the system that is mapped to the storage. This involves using a tool within the operating system to copy the files from source to destination. Object storage can be moved using a RESTful API to interface with the source and destination object stores.
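
As a simple illustration of an object-level move, the following Python sketch copies objects from a source store to a destination store over hypothetical REST endpoints. The URLs, bucket paths, object keys, and bearer tokens are placeholders, and the third-party requests library is assumed; a real migration would use the providers' actual APIs or SDKs and handle large objects, retries, and metadata.

# Sketch: copying objects between two object stores over their REST APIs.
# Endpoints, keys, and tokens are placeholders. Uses the third-party
# requests library.
import requests

SOURCE = "https://source.example.com/bucket-a"
DEST = "https://dest.example.com/bucket-b"
HEADERS_SRC = {"Authorization": "Bearer <source-token>"}
HEADERS_DST = {"Authorization": "Bearer <destination-token>"}

def migrate_object(key: str) -> None:
    # GET the object from the source store, then PUT it to the destination.
    obj = requests.get(f"{SOURCE}/{key}", headers=HEADERS_SRC, timeout=30)
    obj.raise_for_status()
    put = requests.put(f"{DEST}/{key}", data=obj.content, headers=HEADERS_DST, timeout=30)
    put.raise_for_status()

for key in ["invoices/2023-01.pdf", "images/logo.png"]:
    migrate_object(key)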

Host Clustering and HA/DR

High availability (HA) and disaster recovery (DR) functions of a hypervisor enable automatic failover with load balancing. A cluster consisting of multiple hypervisors, typically utilizing shared storage, must be configured to use HA. Some systems require management tools such as VMware’s vSphere or Microsoft System Center VM Manager to take advantage of some of the more advanced HA capabilities, profiles, and customization.

A high-availability cluster auto-balances VMs across the available hypervisors. It can also fail a VM over to another host if the host experiences issues or suffers from resource constraints. Each host in the cluster must maintain a reserve of resources to support additional VM migrations in the case of a host failure. HA clusters also periodically rebalance VMs across the cluster hosts to ensure that a comfortable resource ceiling is maintained.

Depending on the cluster size, some or all hosts in the cluster will be configured to monitor the status of other hosts and VMs. This is accomplished through heartbeat connections that tell other nodes that the hypervisor or VM is still active and functioning. If a heartbeat is not received from a host for a predetermined amount of time (15 seconds for VMware), the VMs on that host will be failed over to other hosts in the cluster, and the host will be marked as inactive until a heartbeat signal is received from it again. On-premises physical servers or VMs can be configured to fail over to cloud VMs, and vice versa.
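
The heartbeat logic can be pictured with a short sketch: record the last time each host checked in, and flag any host that has been silent longer than the timeout. This is an illustration of the concept only, using 15 seconds as the example threshold; it is not vendor code.

# Sketch of heartbeat monitoring: hosts silent longer than the timeout are
# flagged so their VMs can be failed over. Illustrative only.
import time

HEARTBEAT_TIMEOUT = 15  # seconds

last_heartbeat = {"host-a": time.time(), "host-b": time.time()}

def record_heartbeat(host: str) -> None:
    last_heartbeat[host] = time.time()

def check_hosts(now: float) -> list:
    failed = []
    for host, last_seen in last_heartbeat.items():
        if now - last_seen > HEARTBEAT_TIMEOUT:
            failed.append(host)   # candidate for failover of its VMs
    return failed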

Administrators should configure a dedicated network segment or VLAN for heartbeat traffic. Heartbeat traffic does not need to be routed if all hosts are on the same LAN. However, if hosts are spread across sites such as in multisite failover scenarios, the heartbeat network will need to be routed to the other site as well. In cloud environments, a VXLAN, NVGRE, STT, or GENEVE segment is a perfect solution for the heartbeat connection. VXLAN, NVGRE, STT, and GENEVE segmentation was covered in Chapter 4.

A dedicated virtual NIC does not need to be assigned to VMs on a cluster. The hypervisor client tools will send the heartbeat information to the hypervisor. Hypervisors can be configured to take specific actions if a heartbeat signal is not received from a VM, such as restarting the VM, notifying an administrator, reverting to a saved state, or failing the VM over to another host.

CPU Effect on HA/DR

The hypervisor only has so much information when it chooses the placement of a VM. Some VMs might not have many processors configured, but they are still processor intensive. If you find that your HA cluster frequently rebalances the machines in a suboptimal way, there are some actions you can take to remedy the situation.

HA resource determinations are based on a number of factors, including the following:

•   Defined quotas and limits

•   Which resource is requested by which VM

•   The business logic that may be applied by a management system for either a VM or a pool of VMs

•   The resources that are available at the time of the request

It is possible for the processing power required to make these decisions to outweigh the benefit of the resource allocations. In those situations, administrators can configure their systems to allocate specific resources or blocks of resources to particular hosts, shortcutting that logic by designating which resources to use for a specific VM or pool on all requests.

CPU affinity is one such technique, in which processes or threads from a specific VM are tied to a specific processor or core, and all subsequent requests from those processes or threads are executed by that same processor or core. Organizations can utilize reservations for VMs to guarantee an amount of compute resources for that VM.
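
CPU affinity itself can be demonstrated on an ordinary Linux system with the Python standard library: pin a process to specific cores so all of its work executes there. Hypervisors expose affinity through their own management interfaces; this sketch only shows the underlying concept.

# Sketch: pinning the current process to cores 2 and 3 on a Linux host,
# analogous to setting CPU affinity for a VM's worker processes.
# os.sched_setaffinity is Linux-only.
import os

os.sched_setaffinity(0, {2, 3})    # pid 0 = this process; run only on cores 2 and 3
print(os.sched_getaffinity(0))     # confirm the new affinity set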

Cloud Provider Migrations

It may become necessary to migrate VMs, data, or entire services from one cloud provider to another. Most cloud platforms are highly standardized, but a seamless migration still requires a high level of integration between the source and destination cloud providers.

It is important, when evaluating cloud providers, to ensure that they offer integration and migration options. One cloud provider may not meet your scalability or security requirements, necessitating a move to another cloud provider. Additionally, you may wish to diversify cloud resources across several providers to protect against data loss or service interruption caused by a single provider's outage.

In worst-case scenarios, you may need to export the VMs into a compatible format and then manually import them or import them with a script into the new cloud provider’s environment.

Extending Cloud Scope

A major advantage of cloud systems is that cloud consumers can extend existing workloads into the cloud or extend existing cloud systems, making the cloud a powerful and flexible system for companies to rely upon for changing business needs. This can be advantageous even in temporary situations, such as in cloud bursting, where traffic is routed to cloud resources when the load exceeds available local resources.

Vendor Lock-in

Vendor lock-in is a situation in which a cloud consumer cannot easily move to another cloud provider. Lock-in can occur when the vendor does not support common standards or does not offer data export and migration functions. If the cloud provider cannot export data in an industry-standard format, cloud consumers will not be able to import their data into another provider's platform and would be forced to re-create the data manually. Similarly, the cloud provider may not support data or VM export or migration, leaving the consumer with no practical method of taking their business elsewhere. Lock-in occurs whenever the cost to switch to another provider is substantial.

PaaS or SaaS Migrations

It is important to determine which type of migration you want to conduct when migrating to the cloud or from cloud to cloud. A SaaS migration is often the easiest because the cloud vendor takes on most of the migration burden. However, there are cases where a PaaS migration may make more sense, such as lower ongoing subscription costs or a higher level of control. Consider a company that has a locally hosted Exchange e-mail server that they want to migrate to the cloud. If they choose the PaaS solution, they would export their local Exchange server and import it into the cloud provider's virtualization platform. If the Exchange server is not a stand-alone server, they may also need to reconfigure authentication for the system or move other supporting resources to the cloud so that it will function. In this scenario, the company would pay for the server hosting, and they would be responsible for maintaining the server.

Now, if the company chooses the SaaS approach, it would export the mailboxes from its local Exchange server into a cloud e-mail system, such as Office 365. In this SaaS solution, the company would pay a monthly subscription for each e-mail account, but it would no longer be responsible for server maintenance.

Similarly, a company could have resources in the cloud and choose to migrate those to another cloud provider. They can make the same choices here. The PaaS-to-SaaS migration would be similar to the local-to-SaaS migration. However, the SaaS-to-PaaS migration would require the company to set up the e-mail system within the new provider’s platform before exporting and importing the accounts into their new system.

Migration Firewall Configuration

When moving resources to a new cloud, you will need to ensure that you also move any firewall rules associated with those resources. Otherwise, the migrated resources will be unable to connect to other services properly, or end users will be unable to communicate with them. Follow these steps to migrate the firewall configuration:

1.   Verify feature support.

2.   Review current firewall rules.

3.   Review network connections between migrated and nonmigrated resources.

4.   Create the rules on the new firewall.

5.   Test communication.

6.   Migrate resources.

7.   Perform final validation.

The first step is to verify that the new cloud provider's firewall supports the essential features you need. All firewalls have basic support for access control lists (ACLs) and some level of logging, but other features such as bandwidth control, IDS or IPS functionality, malware scanning, VPN, data loss prevention (DLP), deep packet inspection, or sandboxing may or may not be included. This is the first step because it is a deal-breaker: if the provider does not support the required features, you will need to select a different provider.

The second step is to review the current firewall rules to identify those that apply to the migrated resources. If we use the e-mail server example from before, there would be firewall rules for web-based e-mail, IMAP, POP3, and SMTP. Document each of these rules so that they can be created on the new firewall.

Third, review network connections between migrated and nonmigrated resources. These communications take place between systems in the cloud or the local environment, and new firewall rules will need to be created to support this communication once the resources have been moved to the new provider. In the e-mail example, there may be communication to the backup server for continuous backups, domain controller for authentication, the DNS server for DNS lookups, and an NTP server for time synchronization. Document these communication requirements so that rules can be created for them.

In the fourth step, new firewall rules are created based on the documentation drafted in steps two and three. For the e-mail server example, rules would be created for web-based e-mail, IMAP, POP3, SMTP, backup, DNS, authentication, and NTP.
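
A simple way to keep steps two through four consistent is to capture the documented rules as data and replay them against the new firewall. In the sketch below, the rule list and the create_rule function are placeholders built around the e-mail server example; the host names and port numbers are illustrative and would come from your own documentation.

# Sketch: documented firewall rules captured as data, then replayed against
# the destination firewall. create_rule is a placeholder for the new
# provider's API or console workflow; hosts and ports are illustrative.
documented_rules = [
    {"name": "Mail-Web", "protocol": "TCP", "dest": "mail01", "ports": [443]},
    {"name": "IMAP",     "protocol": "TCP", "dest": "mail01", "ports": [143, 993]},
    {"name": "POP3",     "protocol": "TCP", "dest": "mail01", "ports": [110, 995]},
    {"name": "SMTP",     "protocol": "TCP", "dest": "mail01", "ports": [25, 587]},
    {"name": "DNS",      "protocol": "UDP", "dest": "dns01",  "ports": [53]},
    {"name": "NTP",      "protocol": "UDP", "dest": "ntp01",  "ports": [123]},
]

def create_rule(rule: dict) -> None:
    # Placeholder: call the destination firewall's API here.
    print(f"Creating rule {rule['name']} -> {rule['dest']} {rule['protocol']}/{rule['ports']}")

for rule in documented_rules:
    create_rule(rule)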

The fifth step is to verify connectivity for all the required resources. It is essential to do this step before migrating to ensure that the rules are configured correctly. Skipping this step could result in unexpected downtime to the system following its migration. Make sure you verify connectivity to each port and service that you configured. If you have a test copy of the system, you can restore it in the new environment without updating the external DNS records that point to production, so that clients do not establish connections with the copy. Then verify that the dependent services can connect to it, and manually test a sample client connection. If everything works, you can move on to the next step.

The sixth step is to perform the migration of the resources, and this is followed by the final phase of validating that those resources can still connect to the required systems and that clients can connect to the migrated resources.

Migration ACLs  You may need to configure some ACLs to perform the migration. For example, you may create an ACL that allows communication to and from the VMware server over port 8000 to support vMotion if moving VMs between two VMware systems.

It is crucial to remove migration ACLs once you have completed the migration, as these ACLs increase the attack surface, making the systems potentially more vulnerable to attacks. They should be promptly removed, since they are no longer needed. In the example earlier, if vMotion is no longer required following the migration, you would remove the ACL that allows connections over port 8000 once the migration was complete.

Exercise 6-1: Creating a Cloud Firewall on Azure

In this exercise, we will create a cloud firewall on Azure and add firewall ACLs to allow SMTP, POP3, and web-based e-mail traffic to an internal system with the IP address 10.0.0.1.

Images

NOTE   You will need to have an Azure subscription to perform this task. You can sign up for a free 12-month subscription if you do not have one.

1.   Sign in to the Azure portal (https://portal.azure.com), as shown here.

Images

2.   Select Create A Resource, and a new window will load.

3.   Type firewall  in the search bar (see the illustration on the following page), then press ENTER.

4.   Various options from the marketplace will be displayed. Choose the firewall from Microsoft. This should be the first one displayed.

Images

Images

5.   Click Create.

6.   Select your Azure subscription in the subscription drop-down box. Then select a resource group. The resource group chosen for this example is one that I created previously called CloudExample. Give the instance a name. In this example, we called it CloudExample_FW1. Select a region and an availability zone from the drop-down boxes. In this example, we selected East US for the region and Zone 1 for the availability zone. Choose to create a new virtual network and give it a name. This example calls it CloudExample_Net1. Give it an address space of 10.0.0.0/16. Next, assign the subnet address space 10.0.0.0/24.

7.   The last option is for the public IP address. Click the Add New button and give it a name. This example calls it FW-IP. In the end, your screen should look like the following illustration, except that your resource group may be different. Click the blue Review + Create button at the bottom. This will display a screen that shows the values you just entered. Click the blue Create button at the bottom.

Images

8.   The deployment will take a few minutes. When it is done, you will see a screen like the following illustration, stating that the deployment is complete. Click the Go To Resource button.

Images

9.   Under settings, on the left side, select Rules.

10.   Click the Network Rule Collection tab and then click Add Network Rule Collection to create a new network rule.

11.   The Add Network Rule Collection screen will appear. Enter a name for the network rule. This example named it CloudExample-E-mail. Set the priority to 100  and the action to allow.

12.   Under the IP addresses section, create three rules. The first will be SMTP with the TCP protocol, * for the source address, a destination address of 10.0.0.1, and port 587. The second will be POP3 with the TCP protocol, * for the source address, a destination address of 10.0.0.1, and ports 110 and 993. You can specify two ports by putting a comma between them (e.g., 110,993). The third will be Mail-Web with the TCP protocol, * for the source address, a destination address of 10.0.0.1, and port 443. Your screen should look like this:

Images

13.   Click the Add button. It will take a moment for the rule to be created. Wait for the screen to display the new rule.

Images

Migration Considerations

Before an organization can migrate a VM using one of the migration methods discussed in the previous section, it needs to consider a few things. Among the most important of those considerations are the compute resources: the CPU, memory, disk I/O, and storage requirements. Migrating a physical server to a VM takes careful planning for it to be successful. Planning the migration of physical servers to the virtual environment is the job of IT administrators, and they must perform their due diligence and discover all the necessary information about both the server and the application that the server is hosting.

Requirements Gathering

It is essential to gather as much information as possible when preparing to migrate physical servers to a virtual environment. This information will help determine which servers are good candidates for migration and which of those servers to migrate first.

When evaluating a physical server to determine if it is the right candidate for a virtual server, it is important to monitor that server over a period of time. The monitoring period helps to produce an accurate profile of the physical server and its workload.

A monitoring tool such as Windows Performance Monitor or a comparable tool in the Linux environment can be used to accurately assess the resource usage for that particular server. The longer the physical server trends are monitored, the more accurate the evaluation of resource usage will be.

The time spent monitoring the system also varies depending on the applications the physical server is hosting. For example, it would make sense to monitor a database server for a more extended period than a print server. In the end, the organization needs to have an accurate picture of memory and CPU usage under various conditions so that it can use that information to plan the resources the physical server might need after it is converted to a VM.
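
A lightweight way to collect such a profile is to sample utilization on a schedule, as in the following sketch, which uses the third-party psutil library to record CPU and memory percentages roughly once a minute for an hour. Windows Performance Monitor or sar can gather the same data; the sampling interval and duration shown are arbitrary examples.

# Sketch: sampling CPU and memory utilization over time to profile a
# candidate server before migration. Uses the third-party psutil library.
import time
import psutil

samples = []
for _ in range(60):                         # roughly one sample per minute for an hour
    samples.append({
        "cpu_pct": psutil.cpu_percent(interval=1),
        "mem_pct": psutil.virtual_memory().percent,
    })
    time.sleep(59)

peak_cpu = max(s["cpu_pct"] for s in samples)
peak_mem = max(s["mem_pct"] for s in samples)
print(f"Peak CPU: {peak_cpu}%  Peak memory: {peak_mem}%")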

Another consideration to make when determining if a physical server is the right candidate for virtualization is the file system’s status. When converting a physical server to a virtual server, all the physical server data is copied to the virtual server as part of the P2V process. Files and data that are not required are sometimes kept on a server, and those files do not need to be migrated as part of the P2V process, nor should they be. It is important to examine the hard drive of the physical server before performing a migration and remove all files and data that are not required for the server to function and provide the application it is hosting. Examples of these files might be drivers or hardware applications such as Wi-Fi tools, firmware update utilities, or other files meant to be used only by a physical machine.

Images

EXAM TIP   During a P2V migration, the host computer must have a sufficient amount of free memory because much of the data will be loaded into memory during the transfer process.

Migration Scheduling

After gathering the proper information to perform a successful physical-to-virtual migration, you need to plan when the project should be completed. Migrations will not result in downtime for systems that meet the online migration requirements of your specific P2V migration tool, such as VMware vCenter Converter or Microsoft Virtual Machine Converter. However, systems under migration will likely experience slower performance while the migration is underway. It may be advisable to schedule migrations during downtime or a period when activity is typically at its lowest, such as in the late evening or overnight.

Expect some downtime as part of the migration of a physical server to a virtual server if it does not meet the requirements of your P2V conversion tool. At a minimum, the downtime will consist of the time to start the new VM and shut down the old physical server. DNS changes may also need to be made and replicated to support the new virtual instance of the physical server.

Maintenance schedules should also be implemented or taken into consideration when planning a physical server’s migration to a virtual server. Most organizations have some maintenance schedule set up for routine maintenance on their server infrastructure, and these existing scheduled blocks of time might be suitable for P2V conversions.

Provide the business case for some downtime of the systems to the change management team before embarking on the P2V migration process. Part of that downtime goes back to the resource provisioning discussion earlier in this chapter. It is a balance between underprovisioning the new virtual servers from the beginning and overprovisioning resources. Underprovisioning causes additional and unnecessary downtime of the virtual server and the application the virtual server is hosting. On the other hand, overprovisioning reserves too many resources for the VM and consumes precious host resources where they are not required. This can sometimes even have a detrimental effect on performance.

Upgrading

In addition to P2V, V2P, and V2V, an organization may upgrade an existing VM to the latest virtual hardware or the newest host operating system. VM hardware corresponds to the physical hardware available on the host computer where the VM is created.

It may be necessary to upgrade the VM hardware or guest tools on a VM to take advantage of some of the host's features. The host file system or hypervisor may also need to be updated to support these improvements. VM hardware features might include BIOS enhancements, virtual PCI slots, and the ability to dynamically configure the number of vCPUs or the amount of allocated memory.

Another scenario that might require upgrading a VM is when a new version of the host operating system is released (e.g., when Microsoft releases a new version of Hyper-V or VMware releases a new version of ESXi). In this instance, an organization would need to upgrade or migrate its VMs to the new host server.

Upgrading to a new host operating system and migrating the VMs to that new host requires the same planning that would be needed to perform a P2V migration. Make sure you understand the benefits of the new host operating system and how those benefits will affect the VMs and, specifically, their compute resources. Once again, careful planning is critical before the upgrading process starts.

Workload Source and Destination Formats

The most straightforward migrations are performed when the source and destination formats are the same, but life is not always straightforward, and there will be times when an upgrade includes transitioning from one format to another.

Migrations or upgrades may include transitioning P2V, V2P, or V2V and from one platform such as Microsoft Hyper-V to VMware or Citrix Hypervisor. Migrations may also involve more advanced features such as virtual disk encryption or multifactor authentication that must be supported and configured on the destination server.

Virtualization Format  P2V migrations can be performed manually by setting up a new operating system and then installing applications, migrating settings, and copying data. However, this is time-consuming and often error-prone. It is more efficient to use software tools to fully or partially automate the P2V conversion process. Tools are specific to the destination virtualization platform. Such tools gather the required information from the physical machine and then create a VM on the destination virtualization platform such as Hyper-V or VMware.

V2P migrations can be performed by running Microsoft Sysprep on the VM to prepare the image for transfer and allow for hardware configuration changes. Next, all the drivers for the target physical server would need to be prepared before doing the migration, and then a software tool would be used to facilitate the virtual-to-physical migration and load the necessary hardware drivers onto the physical machine. Alternatively, V2P migrations can be performed manually by setting up a new operating system and then installing applications, migrating settings, and copying data.

V2V migration can be performed by exporting the VMs from the previous version and importing them into the new version of the host operating system software. Additionally, some software such as VMware vMotion or Microsoft SCVMM can perform migrations from one hypervisor version to another. However, this is often a one-way move because moving from a newer version to an older version is not usually supported.

Application and Data Portability  Migrations also may move from an encrypted format to a nonencrypted format or vice versa. Migrating encrypted VMs does require the encryption keys, so you must ensure that these are available prior to migration and that the destination system supports the same encryption standards. Certificates or other prerequisites may need to be in place first to support this or other features of the VM.

Standard Operating Procedures for Workload Migrations

You will likely perform migrations many times. The first time you complete a migration, create a standard process for future migrations. You may find along the way that you can improve the process here or there. Feel free to add more details to the standard procedure as you discover enhancements.

A standard process ensures that others who perform the same task will do so with the same level of professionalism that you do. Standard operating procedures also ensure consistent implementation, including the amount of time it takes to perform the task and the resources required.

Standard operating procedures can also be used to automate processes. Once a process has been performed several times and is sufficiently well documented, there may be methods of automating the process so that it is even more streamlined. The documentation will ensure that you do not miss a critical step in the automation, and it can help in troubleshooting automation later on.

Environmental Constraints

Upgrades are also dependent upon various environmental constraints such as bandwidth, working hour restrictions, downtime impact, peak timeframes, and legal restrictions. We also operate in a global economy, so it is essential to understand where all users are working and the time zone restrictions for performing upgrades.

Bandwidth  Migrations can take a lot of bandwidth depending on the size of the VM hard drives. When migrating over a 1 Gbps or 10 Gbps Ethernet network, this is not as much of a concern, but bandwidth can be a considerable constraint when transferring machines over a low-speed WAN link, such as a 5 Mbps MPLS connection.

Evaluate machines that are to be migrated and their data sizes and then estimate how much time it will take to migrate the machines over the bandwidth available. Be sure to factor in other traffic as well. You do not want the migration to affect normal business operations in the process. Also, be sure that others are not migrating machines at the same time.
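
A back-of-the-envelope calculation like the following can help set expectations. The link speeds, disk size, and the assumption that only about half the nominal bandwidth is usable are examples, not fixed values.

# Sketch: rough transfer-time estimate for a migration over a given link.
# Sizes, link speeds, and the usable-bandwidth fraction are examples; real
# throughput will vary with protocol overhead and competing traffic.
def transfer_hours(disk_size_gb: float, link_mbps: float, usable_fraction: float = 0.5) -> float:
    bits = disk_size_gb * 8 * 1000**3                          # decimal GB to bits
    seconds = bits / (link_mbps * 1_000_000 * usable_fraction)
    return seconds / 3600

print(round(transfer_hours(200, 5), 1))       # 200 GB VM over a 5 Mbps MPLS link
print(round(transfer_hours(200, 10_000), 2))  # same VM over 10 Gbps Ethernet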

Working Hour Restrictions  Working hours can be a restriction on when upgrades or migrations are performed. Working hour restrictions may require that some work be performed outside of regular business hours, such as before 9:00 A.M. or after 5:00 P.M. Working hours may differ in your company. For example, they may be 7:00 A.M. to 7:00 P.M. in places where 12-hour shifts are common.

Working hour restrictions also affect how work is assigned to people who work in shifts. For example, suppose an upgrade is to take three hours by one person. In that case, it must be scheduled at least three hours prior to the end of that person's shift, or the task will need to be transitioned to another team member while still incomplete. It generally takes more time to transition a task from one team member to another, so it is best to try to keep this to a minimum. Sometimes more than one person works on a task in shifts, with one group leaving and a new group taking over at some point, so that a single person or group does not get burned out trying to complete a major task.

It is also important to factor in some buffer time for issues that could crop up. In the example, if the task is expected to take three hours and you schedule it precisely three hours before the employee's shift ends, that provides no time for troubleshooting or error. If problems do arise, the task would be transitioned to another team member, who would need to troubleshoot, possibly with input from the first team member to avoid rework, since the second employee may not know everything that was done in the first place. For this reason, it is crucial to keep a detailed log of what changes were made and which troubleshooting steps were performed, even if you do not anticipate transitioning the task to another person. This can also be helpful when working with technical support.

Downtime Impact  Not all migrations and upgrades require downtime, but it is very important to understand which ones do. Upgrades or migrations that require the system to be unavailable must be performed during a downtime. Stakeholders, including end users, application owners, and other administrative teams, need to be consulted prior to scheduling a downtime so that business operations are not affected. The stakeholders need to be informed of how long the downtime is anticipated to take, what value the change brings to them, and the precautions that the IT team is taking to protect against risks.

For systems that are cloud-consumer facing, if the cloud provider can’t avoid downtime to conduct a migration or upgrade, it needs to schedule the downtime well in advance and give cloud consumers plenty of notice so that the company does not lose cloud-consumer confidence by taking a site, application, or service down unexpectedly.

Peak Timeframes  Upgrades that do not require downtime could still affect the VM’s performance and the applications that run on top of it. For this reason, it is best to plan upgrades or migrations for times when the load on the system is minimal.

For example, it would be a bad idea to perform a migration on a DHCP server at the beginning of the day when users log into systems because that is when the DHCP server has the most significant load. Users would likely see service interruptions if a migration or an upgrade occurs during such a peak time.

Legal Restrictions  Migrating a VM from one location to another can present data sovereignty issues. Different countries have different laws, especially when it comes to privacy, and you will need to understand the type of data that resides on VMs and any limitations to where those VMs can reside.

Upgrades can also run into legal constraints when new features violate laws in the host country. For example, an upgrade may increase the encryption capabilities of software to the degree that it violates local laws requiring no more than a specific encryption bit length or set of algorithms.

Legal constraints can come up when upgrades violate laws for users of the system even if the application resides in a different country from the users. For example, the European Union's General Data Protection Regulation (GDPR) affects companies that do business with Europeans, even if those businesses are not located in Europe. Consult with legal and compliance teams to ensure that you adhere to local laws and regulations.

Time Zone Constraints  Virtualized and cloud systems may have users spread across the globe. Additionally, it may be necessary to coordinate resources with cloud vendors or support personnel in different global regions. In such cases, time zones can be a considerable constraint for performing upgrades. It can be challenging to coordinate a time that works for distributed user bases and maintenance teams.

For this reason, consider specifying in vendor contracts and SLAs an upgrade schedule so that you do not get gridlocked by too many time zone constraints and are unable to perform an upgrade.

Follow the Sun  Follow the sun (FTS) is a method where multiple shifts work on a system according to their time zone to provide 24/7 service. FTS is commonly used in software development and customer support. For example, customer support calls might be answered in India during India’s regular working hours, after which calls are transitioned to the Philippines, and so on so that each group works its normal business hours. Similarly, a cloud upgrade could be staged so that teams in the United States perform a portion of the upgrade, and then as soon as they finish, a team in the UK starts on the next batch. When the UK team completes their work, a group in China begins, and then back to the United States the following morning.

Testing

The process of P2V, or V2V for that matter, generally leaves the system in complete working order; the entire system is migrated and left intact. That said, any system being migrated should be tested both before and after the migration process. The IT administrator needs to define a series of checks to be performed after the migration and before the virtual server takes over for the physical server. Some of the checks and cleanup tasks that should be completed on the virtual server after migration are as follows:

•   Remove all unnecessary hardware from the VM. (If you are migrating from a physical server to a virtual server, you might have some hardware devices that were migrated as part of the P2V process.)

•   When first booting the VM, disconnect it from the network. This allows the boot to occur without having to worry about duplicate IP addresses or DNS names on the network.

•   Reboot the VM several times to clear the logs and verify that it is functioning as expected during the startup phase.

•   Verify network configurations on the virtual server while it is disconnected from the network. Make sure the IP address configuration is correct so that the VM does not have any issues connecting to the network once network connectivity is restored.

Performing these post-migration checks helps to ensure a successful migration and minimizes errors that might arise after the migration is complete. As with anything, there could still be issues once the VM is booted on the network, but performing these post-conversion checks lessens the likelihood of problems.
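
To make the checklist above concrete, here is a minimal post-migration verification sketch in Python. The expected hostname, IP address, and DNS test name are hypothetical placeholders; substitute the values for the migrated VM, and note that the DNS check is only meaningful after the VM has been reconnected to the network.

import socket

# Hypothetical values -- replace with the details of the migrated VM
EXPECTED_HOSTNAME = "app01"
EXPECTED_IP = "10.0.10.25"
DNS_TEST_NAME = "intranet.example.com"

def check_hostname():
    actual = socket.gethostname()
    print(f"hostname: {actual} (expected {EXPECTED_HOSTNAME})")
    return actual == EXPECTED_HOSTNAME

def check_ip():
    # Resolving the local hostname confirms the VM kept its intended address;
    # on some systems this returns a loopback address, so also verify the
    # interface configuration manually.
    actual = socket.gethostbyname(socket.gethostname())
    print(f"ip address: {actual} (expected {EXPECTED_IP})")
    return actual == EXPECTED_IP

def check_dns():
    # Only meaningful once the VM has been reconnected to the network
    try:
        socket.gethostbyname(DNS_TEST_NAME)
        print(f"dns lookup of {DNS_TEST_NAME}: ok")
        return True
    except socket.gaierror:
        print(f"dns lookup of {DNS_TEST_NAME}: failed")
        return False

if __name__ == "__main__":
    results = [check_hostname(), check_ip(), check_dns()]
    print("all checks passed" if all(results) else "one or more checks failed")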

Databases

A database is a structured store for data such as financial transactions, products, orders, website content, and inventory, to name a few. A database organizes data for fast data creation, retrieval, and searching. The two main methods of organizing information, relational and nonrelational, are discussed later in this section. The software that databases reside in is known as a database management system (DBMS).

Databases in the cloud can be deployed on top of virtualized servers that the company operates, or they can be consumed as part of a Database as a Service (DBaaS) solution. The DBaaS solution requires the least amount of work for the cloud consumer. The cloud provider takes care of the database software and maintenance, and the consumer creates their database objects, such as tables, views, and stored procedures, and then populates the database with data.

Relational

A relational database organizes data into tables where each record has a primary key, a unique value that can be used to reference the record. Data is stored in a way that minimizes duplication within the database. Relationships between data in multiple tables are established by referencing one table’s primary key from another table; the referencing column is known as a foreign key.

For example, we may have a customer table that has a customer ID as the primary key. We might also have an orders table that stores every order placed. Its primary key is the order ID. However, we can determine which customer placed an order by including the customer ID in the orders table as a foreign key. Similarly, we can obtain the orders a customer made by querying the orders table for each record that contains the customer ID. The most common application for relational databases is Online Transaction Processing (OLTP).

Relational databases are stored within software known as a relational database management system (RDBMS). Some RDBMSs include Microsoft SQL Server, MySQL, Oracle Database, and IBM DB2.
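
As a minimal sketch of the customer/orders example, the following Python snippet uses the standard library’s sqlite3 module; the table and column names are illustrative, not drawn from any particular RDBMS product.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce foreign key constraints

# Customer table: customer_id is the primary key
conn.execute("""
    CREATE TABLE customers (
        customer_id INTEGER PRIMARY KEY,
        first_name  TEXT NOT NULL,
        last_name   TEXT NOT NULL
    )""")

# Orders table: order_id is the primary key; customer_id is a foreign key
# relating each order back to the customer who placed it
conn.execute("""
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
        total       REAL NOT NULL
    )""")

conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'Lovelace')")
conn.execute("INSERT INTO orders VALUES (100, 1, 49.95)")

# Join the tables through the foreign key to list a customer's orders
for row in conn.execute("""
        SELECT c.first_name, c.last_name, o.order_id, o.total
        FROM orders AS o JOIN customers AS c
        ON o.customer_id = c.customer_id"""):
    print(row)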

Nonrelational

Nonrelational databases store data in a less structured format, which allows them to be more flexible. Data sets do not have to contain the same pieces of data. For example, a customer record might hold different name fields from one customer to the next: for one customer, first name and last name; for another, first name, last name, middle initial, and title; and for a third, first name, last name, and maiden name.
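
Here is a minimal sketch of that flexibility, using plain Python dictionaries to stand in for the documents a nonrelational (document-oriented) database would store; the field names are illustrative.

# Each "document" describes a customer, but documents do not have to share
# the same fields -- there is no fixed table schema to conform to.
customers = [
    {"first_name": "Maria", "last_name": "Gomez"},
    {"first_name": "John", "last_name": "Smith",
     "middle_initial": "Q", "title": "Dr."},
    {"first_name": "Ann", "last_name": "Jones", "maiden_name": "Baker"},
]

# Queries must tolerate missing fields rather than relying on a schema.
for doc in customers:
    title = doc.get("title", "")
    print(f"{title} {doc['first_name']} {doc['last_name']}".strip())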

Nonrelational databases are sometimes called NoSQL databases because SQL is not used to query the data set. This does not mean that SQL cannot be used, but SQL will need to be translated into the query method for the nonrelational database.

Some advantages of a nonrelational database include simple design, support for multiple data types, flexible data organization and growth, and the ability to derive new insights from data without needing to first establish a formal relationship. However, there are some drawbacks. Nonrelational databases suffer from data duplication, which can result in more extensive data storage requirements. It is also more challenging to enforce transactional business rules at the database level because the underlying relationships have not been established. This has to be performed at the application level. Lastly, the semistructured approach to nonrelational databases and the agility it offers can give some the wrong impression, resulting in systems that are developed with too little structure. It is still crucial for developers to consider their data model when they develop an application that uses a nonrelational database.

Nonrelational databases are ideal when collecting data from multiple data sources, such as in data warehousing and Big Data situations. They are also useful in artificial intelligence, where the meaning of the data is unknown until the program acts on it. Some popular nonrelational databases include Cassandra, Couchbase, DocumentDB, HBase, MongoDB, and Redis.

Database Migrations

It is quite common to move databases between providers because much of the software developed in the cloud relies on databases to store the data required for its operation. Thus, when moving services to a cloud provider or between cloud providers, you will also need to move the associated databases. Cloud providers offer tools that can perform the migration for you. For example, the AWS Database Migration Service and the Azure Database Migration Service can both be used to perform a database migration. When using the AWS service, the only cost is the compute resources required to do the migration. Azure offers its offline tool for free, but the migration results in some downtime while it is in progress; its online tool, available in premium tiers, can move the database without downtime.

•   AWS Database Migration Service  https://aws.amazon.com/dms/

•   Azure Database Migration Service  https://azure.microsoft.com/en-us/services/database-migration/
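
The exact steps depend on the provider’s tooling, but as one hedged illustration, the following sketch uses the AWS SDK for Python (boto3) to create and start a DMS replication task. All ARNs and identifiers are placeholders, and the source endpoint, target endpoint, and replication instance are assumed to have been created beforehand.

import boto3  # AWS SDK for Python; requires AWS credentials to be configured

dms = boto3.client("dms", region_name="us-east-1")

# Placeholder ARNs -- in practice these come from source/target endpoints and
# a replication instance created beforehand (via console, CLI, or API).
SOURCE_ENDPOINT_ARN = "arn:aws:dms:us-east-1:111111111111:endpoint:SRC"
TARGET_ENDPOINT_ARN = "arn:aws:dms:us-east-1:111111111111:endpoint:TGT"
REPLICATION_INSTANCE_ARN = "arn:aws:dms:us-east-1:111111111111:rep:INST"

# Select every table in every schema (a minimal table-mapping rule)
TABLE_MAPPINGS = (
    '{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1",'
    '"object-locator":{"schema-name":"%","table-name":"%"},'
    '"rule-action":"include"}]}'
)

# "full-load-and-cdc" copies the existing data and then replicates ongoing
# changes, which is how a minimal-downtime (online) migration is staged.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="example-migration-task",
    SourceEndpointArn=SOURCE_ENDPOINT_ARN,
    TargetEndpointArn=TARGET_ENDPOINT_ARN,
    ReplicationInstanceArn=REPLICATION_INSTANCE_ARN,
    MigrationType="full-load-and-cdc",
    TableMappings=TABLE_MAPPINGS,
)

dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)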

Cross-Service Migrations

You may need to convert from one database service to another. For example, you may want to move from Oracle to SQL Server or PostgreSQL to MySQL. The good news is that cloud migration tools allow you to move the data into a different database service with minimal effort. Both the Amazon and Azure services mentioned earlier offer schema conversion functions that will take the source elements and convert them into the format required for the destination database type.

The process typically involves analyzing the source database with the tool to generate a report on what can be converted and which areas must be modified before the conversion can be performed. Cross-service migrations require conversion of the following database elements (a short sketch of several of these elements follows the list):

•   Constraints  Constraints apply rules to the data that can be put into a table or a column. Some of the most common constraints include unique constraints (all values in a column must be unique), not-null constraints (excluding places where no data was provided), default values supplied when none are given, indexed values, and relational constraints such as primary keys and foreign keys.

•   Data types  Data types specify the kind of data contained in a database column, such as an integer, decimal, date, or binary data.

•   Functions  Functions are computed values such as an average, sum, or count.

•   Indexes  An index is used to perform fast searches on data. Indexes require additional data to be stored for each searchable item, which increases the time for data writes, but they vastly improve search query time because the query does not need to go through every row sequentially until it finds the desired data.

•   Procedures  Procedures perform a task using one or more Structured Query Language (SQL) statements.

•   Schema  The schema is an outline of how the database is structured.

•   Sequences  Sequences are used to automatically insert an incrementing numeric value into a database column each time new data is added.

•   Synonyms  Synonyms are alternative names given to database objects to make it easier to reference them in queries or procedures.

•   Tables  Tables are the basic building blocks of a database. Tables store the data in a database. They are organized into columns for each piece of data that comprises a record and rows for each discrete record. For example, a table might contain customers, so the columns might be customer ID, first name, last name, address, city, state, ZIP code, loyalty number, and join date. There would be a row for each customer.

•   Views  Views are alternative ways of accessing the data in one or more tables. They are formed from a SQL query and can include indexes for fast searching of specific data across the database or to provide restricted access to only a portion of the data contained in tables. Views do not hold the data itself. The data still resides in the underlying tables.
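
To ground a few of these elements, here is a short sketch using Python’s built-in sqlite3 module; it demonstrates constraints, an index, and a view with illustrative names. (SQLite does not support some of the other elements listed, such as stored procedures, sequences in the Oracle sense, or synonyms.)

import sqlite3

conn = sqlite3.connect(":memory:")

# Table with constraints: a primary key, NOT NULL, UNIQUE, and a default value
conn.execute("""
    CREATE TABLE customers (
        customer_id   INTEGER PRIMARY KEY,
        email         TEXT NOT NULL UNIQUE,
        loyalty_level TEXT DEFAULT 'standard'
    )""")

# Index: speeds up searches on the email column at the cost of extra storage
# and slightly slower writes
conn.execute("CREATE INDEX idx_customers_email ON customers(email)")

# View: an alternative, restricted way to read the data; the view holds no
# data of its own -- the rows still live in the customers table
conn.execute("""
    CREATE VIEW premium_customers AS
    SELECT customer_id, email
    FROM customers
    WHERE loyalty_level = 'premium'""")

conn.execute(
    "INSERT INTO customers (email, loyalty_level) VALUES ('a@example.com', 'premium')"
)
print(list(conn.execute("SELECT * FROM premium_customers")))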

Software Defined

Rapidly changing business needs and customer expectations are driving companies to seek ever more agility from their technology. This has spawned a set of technologies, termed “software defined,” that perform physical functions within software. These software defined technologies include the software defined network (SDN), software defined storage (SDS), and the software defined data center (SDDC). Each of these technologies is leveraged to provide today’s flexible and scalable cloud solutions.

Software Defined Network

The goal of SDN is to make networks easier to manage and more flexible in handling changing application requirements. SDN does this by decoupling the data forwarding role of the network from the control role. SDN uses a controller that has visibility across the enterprise network infrastructure, including both physical and virtual devices from heterogeneous vendors and platforms. This visibility provides faster detection and remediation of equipment faults and security incidents.

The controller is the hub for the information flow. Commands from the controller and information from the network devices are exchanged through APIs, called Southbound APIs. Similarly, information exchanged between the controller and policy engines or applications is done through Northbound APIs.

SDN offers greater flexibility because network resources can be dynamically assigned, expanded, or removed, depending on application needs. SDN improves security because security applications can reconfigure the network to mitigate attacks based on network intelligence.
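
Northbound APIs are typically RESTful. As a purely illustrative sketch (the controller URL, endpoint path, and payload format are hypothetical and do not belong to any specific SDN product), a security application might ask the controller to install a flow rule that drops traffic from a suspicious host:

import json
import urllib.request

# Hypothetical controller URL and endpoint path -- real controllers such as
# OpenDaylight or ONOS each define their own northbound REST paths and payloads.
CONTROLLER_URL = "https://sdn-controller.example.com/api/flows"

# Example intent: drop traffic from a suspicious host, as a security
# application might request after detecting an attack.
flow_rule = {
    "priority": 100,
    "match": {"source_ip": "203.0.113.50"},
    "action": "drop",
}

request = urllib.request.Request(
    CONTROLLER_URL,
    data=json.dumps(flow_rule).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# This call only succeeds against a reachable controller; here it simply
# illustrates the shape of a northbound API interaction.
with urllib.request.urlopen(request) as response:
    print(response.status)

A real controller defines its own northbound REST paths and payloads, so consult the controller’s documentation before automating against it.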

Software Defined Storage

Like SDN, SDS aims to improve the manageability and flexibility of storage so that it can handle dynamically changing application requirements. SDS allows companies to utilize storage from multiple sources and vendors and to automatically provision, deprovision, or scale the storage to meet application demands.

Centralized control and management of enterprise storage allow storage types to be virtualized and addressed in a single place. Policies can be applied to the storage so that it is configured in one place rather than through multiple vendor-specific interfaces. Application data can span multiple storage platforms for improved data redundancy and to combine best-of-breed products into a single solution. For example, all-flash arrays can be combined with spinning disks or VTL, which is then mirrored to replica sets in other locations. This occurs over multiple storage fabrics and platforms, without the applications needing to understand the underlying pieces.

Software Defined Data Center

The SDDC is a combination of virtualization, SDN, and SDS to provide a single method of controlling the infrastructure to support operations, monitor security, and automate management functions. SDDC can be used to manage resources at multiple physical data centers or clouds to best allocate resources. SDDC makes it easier for companies to shift workloads among technologies or to replace underlying technologies without affecting the applications that run on top of them.

Chapter Review

There are many benefits to adopting a virtualized environment, including shared resources, elasticity, and network isolation for testing applications. Migrating to a virtual environment takes careful planning and consideration to define proper compute resources for the newly created VM. Understanding how to correctly perform a P2V migration is a key concept for the test and the real world, as you will be required to migrate physical servers to a virtual environment if you are working with virtualization or the cloud.

Questions

The following questions will help you gauge your understanding of the material in this chapter. Read all the answers carefully because there might be more than one correct answer. Choose the best response(s) for each question.

1.   Which of the following allows you to scale resources up and down dynamically as required for a given application?

A.   Subnetting

B.   Resource pooling

C.   Elasticity

D.   VLAN

2.   Which of the following data centers offers the same concepts as a physical data center with the benefits of cloud computing?

A.   Private data center

B.   Public data center

C.   Hybrid data center

D.   Virtual data center

3.   How does virtualization help to consolidate an organization’s infrastructure?

A.   It allows a single application to be run on a single computer.

B.   It allows multiple applications to run on a single computer.

C.   It requires more operating system licenses.

D.   It does not allow for infrastructure consolidation and actually requires more compute resources.

4.   Which of the following gives a cloud provider the ability to distribute resources on an as-needed basis to the cloud consumer and in turn helps to improve efficiency and reduce costs?

A.   Elasticity

B.   Shared resources

C.   Infrastructure consolidation

D.   Network isolation

5.   Your organization is planning on migrating its data center, and you as the administrator have been tasked with reducing the footprint of the new data center by virtualizing as many servers as possible. A physical server running a legacy application has been identified as a candidate for virtualization. Which of the following methods would you use to migrate the server to the new data center?

A.   V2V

B.   V2P

C.   P2P

D.   P2V

6.   You have been tasked with migrating a VM to a new host computer. Which migration process would be required?

A.   V2V

B.   V2P

C.   P2P

D.   P2V

7.   An application was installed on a VM and is now having issues. The application provider has asked you to install the application on a physical server. Which migration process would you use to test the application on a physical server?

A.   V2V

B.   V2P

C.   P2P

D.   P2V

8.   You have been tasked with deploying a group of VMs quickly and efficiently with the same standard configurations. What process would you use?

A.   V2P

B.   P2V

C.   VM templates

D.   VM cloning

9.   Which of the following allows you to move a virtual machine’s data to a different device while the VM remains operational?

A.   Network isolation

B.   P2V

C.   V2V

D.   Storage migration

10.   You need to create an exact copy of a VM to deploy in a development environment. Which of the following processes is the best option?

A.   Storage migration

B.   VM templates

C.   VM cloning

D.   P2V

11.   You are migrating a physical server to a virtual server. The server needs to remain available during the migration process. What type of migration would you use?

A.   Offline

B.   Online

C.   Hybrid

D.   V2P

12.   You notice that one of your VMs will not successfully complete an online migration to a hypervisor host. Which of the following is most likely preventing the migration process from completing?

A.   The VM needs more memory than the host has available.

B.   The VM has exceeded the allowed CPU count.

C.   The VM does not have the proper network configuration.

D.   The VM license has expired.

13.   After a successful P2V migration, which of the following tests, if any, should be completed on the new VM?

A.   Testing is not required.

B.   Remove all unnecessary software.

C.   Verify the IP address, DNS, and other network configurations.

D.   Run a monitoring program to verify compute resources.

14.   You are planning your migration to a virtual environment. Which of the following physical servers should be migrated first? (Choose two.)

A.   A development server

B.   A server that is running a non–mission-critical application and is not heavily utilized day to day

C.   A highly utilized database server

D.   A server running a mission-critical application

Answers

1.   C. Elasticity allows an organization to scale resources up and down as an application or service requires.

2.   D. A virtual data center offers compute resources, network infrastructure, external storage, backups, and security, just like a physical data center. A virtual data center also offers virtualization, pay-as-you-grow billing, elasticity, and scalability.

3.   B. Virtualization allows an organization to consolidate its servers and infrastructure by allowing multiple VMs to run on a single host computer.

4.   B. Shared resources give a cloud provider the ability to distribute resources on an as-needed basis to the cloud consumer, which helps to improve efficiency and reduce costs for an organization. Virtualization helps to simplify the process of sharing compute resources.

5.   D. P2V would allow you to migrate the physical server running the legacy application to a new VM in the new virtualized data center.

6.   A. V2V would allow you to migrate the VM to a new VM on the new host computer.

7.   B. One of the primary reasons for using the V2P process is to migrate a VM to a physical machine to test an application on a physical server if requested by the application manufacturer.

8.   C. VM templates would allow you to deploy multiple VMs, and those VMs would have identical configurations, which streamlines the process.

9.   D. Storage migration is the process of transferring data between storage devices. It can be automated or done manually, and it allows the storage to be migrated while the VM remains accessible.

10.   C. When you create a VM clone, you are creating an exact copy of an existing VM.

11.   B. With an online migration, the physical computer or source computer remains running and operational during the migration.

12.   A. During an online migration, the destination host must have enough available memory to support the VM. Most likely the host does not have enough available memory to accept the VM in this migration scenario.

13.   C. After a successful migration, the network settings should be checked and verified before bringing the VM online.

14.   A, B. When planning a migration from a physical data center to a virtual data center, the first servers that should be migrated are noncritical servers that are not heavily utilized. A development server would be a good candidate, since it is most likely not a mission-critical server.
