Chapter 6.  Advanced vSphere Infrastructure Management

In the previous chapters, we learnt how to deploy and configure both ESXi hosts and the vCenter server. With vCenter being the management layer of your vSphere infrastructure, there is a plethora of management options available at this layer.

In this chapter, we will cover the following topics:

  • Introducing vSphere vMotion
  • Clustering ESXi hosts for compute aggregation and power management
  • Clustering ESXi hosts for high availability

Introducing vSphere vMotion

vSphere vMotion is a VMware technology used to migrate a running virtual machine from one host to another without altering its power state. The beauty of the whole process is that it is transparent to the applications running inside the virtual machine. In this section, we will understand the inner workings of vMotion and learn how to configure it.

There are different types of vMotion:

  • Compute vMotion
  • Storage vMotion
  • Unified vMotion
  • Enhanced vMotion (X-vMotion)
  • Cross vSwitch vMotion
  • Cross vCenter vMotion
  • Long Distance vMotion

Compute vMotion is the default vMotion method and is employed by other features such as DRS, FT, and maintenance mode. When you initiate a vMotion, an iterative copy of all the virtual machine's memory pages begins. After the first pass, the memory pages dirtied during that pass are copied again in another pass, and this continues iteratively until the number of pages left to copy is small enough to be transferred during the switchover of the VM to the destination host. During the switchover, the virtual machine's device state is transferred and the VM is resumed on the destination host. You can initiate up to eight simultaneous vMotion operations on a single host.
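For readers who also script against the vSphere API, a host-only migration can be triggered programmatically with the MigrateVM_Task method that underlies Compute vMotion. The following is a minimal pyVmomi sketch, assuming a hypothetical vCenter at vcenter.example.com, a VM named app01, and a destination host esxi02.example.com; the credentials and names are placeholders, and later sketches in this chapter reuse the si, content, and find_by_name pieces defined here.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Hypothetical connection details -- replace with your own environment.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host='vcenter.example.com',
                      user='administrator@vsphere.local',
                      pwd='VMware1!', sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Return the first managed object of the given type with a matching name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        return next(obj for obj in view.view if obj.name == name)

    vm = find_by_name(vim.VirtualMachine, 'app01')                  # hypothetical VM name
    dest_host = find_by_name(vim.HostSystem, 'esxi02.example.com')  # hypothetical host name

    # Compute vMotion: change the host only; the VM keeps its current datastore.
    task = vm.MigrateVM_Task(host=dest_host,
                             priority=vim.VirtualMachine.MovePriority.highPriority)
    WaitForTask(task)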

Storage vMotion is used to migrate the files backing a virtual machine (virtual disks, configuration files, and logs) from one datastore to another while the virtual machine is still running. When you initiate a Storage vMotion, it starts a sequential copy of the source disks in 64 MB chunks. While a region is being copied, all writes issued to that region are deferred until the copy of that region completes. Regions that have already been copied are monitored for further writes; any write I/O to such a region is mirrored to the destination disk as well. This mirroring of writes to the destination virtual disk continues until the sequential copy of the entire source virtual disk is complete. While the sequential copy is still in progress, all reads are issued to the source virtual disk; once the copy is complete, all subsequent reads and writes are issued to the destination virtual disk. Storage vMotion is also used by Storage DRS. You can initiate up to two simultaneous Storage vMotion operations on a single host.
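The equivalent API call for a storage-only migration is RelocateVM_Task with a relocate spec that names just the destination datastore. A minimal sketch, reusing the connection and find_by_name helper from the Compute vMotion example above; the datastore name is hypothetical.

    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Assumes si, content and find_by_name() from the Compute vMotion sketch above.
    vm = find_by_name(vim.VirtualMachine, 'app01')
    dest_ds = find_by_name(vim.Datastore, 'datastore2')   # hypothetical datastore name

    # Storage vMotion: change only the backing datastore; the VM stays on its host.
    spec = vim.vm.RelocateSpec(datastore=dest_ds)
    WaitForTask(vm.RelocateVM_Task(spec=spec))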

Unified vMotion is used to migrate both the running state of a virtual machine and the files backing it from one host and datastore to another. Unified vMotion uses a combination of both Compute and Storage vMotion to achieve the migration: first, the configuration files and the virtual disks are migrated, and only then does the migration of the live state of the virtual machine begin. You can initiate up to two simultaneous Unified vMotion operations on a single host.

Enhanced vMotion (X-vMotion) is used to migrate virtual machines between hosts that do not share storage. Both the virtual machine's running state and the files backing it are transferred over the network to the destination. The migration procedure is the same as that of Compute and Storage vMotion; in fact, Enhanced vMotion uses Unified vMotion to achieve the migration. Since the memory and disk states are transferred over the vMotion network, the ESXi hosts maintain a transmit buffer at the source and a receive buffer at the destination. The transmit buffer collects data and places it onto the network, while the receive buffer collects data received over the network and flushes it to storage. You can initiate up to two simultaneous X-vMotion operations on a single host.
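In API terms, both Unified vMotion and X-vMotion correspond to a relocate spec that names a destination host and resource pool as well as a destination datastore. A hedged sketch follows, again reusing the session and helper from the Compute vMotion example; the host and datastore names are placeholders.

    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Assumes si, content and find_by_name() from the Compute vMotion sketch above.
    vm = find_by_name(vim.VirtualMachine, 'app01')
    dest_host = find_by_name(vim.HostSystem, 'esxi03.example.com')   # hypothetical host name
    dest_ds = find_by_name(vim.Datastore, 'local-ds-esxi03')         # hypothetical datastore

    # Move both the running state and the files backing the VM (shared-nothing migration).
    spec = vim.vm.RelocateSpec(host=dest_host,
                               pool=dest_host.parent.resourcePool,   # root pool of the target
                               datastore=dest_ds)
    WaitForTask(vm.RelocateVM_Task(spec=spec))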

Cross vSwitch vMotion allows you to choose a destination port group for the virtual machine. It is important to note that unless the destination port group is on the same L2 network, the virtual machine will not be able to communicate over the network. Cross vSwitch vMotion allows moving from a Standard vSwitch to a VDS, but not from a VDS to a Standard vSwitch; vSwitch to vSwitch and VDS to VDS migrations are supported.

Cross vCenter vMotion allows the migration of virtual machines beyond a vCenter Server's boundary. This capability was introduced with vSphere 6.0. However, for this to be possible, both vCenter Servers should be in the same SSO domain and in Enhanced Linked Mode. The infrastructure requirements for Cross vCenter vMotion are detailed in VMware Knowledge Base article 2106952 at the following link: http://kb.vmware.com/kb/2106952 .
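For readers scripting Cross vCenter vMotion, the relocate spec additionally carries a service locator that tells the source vCenter how to reach the destination vCenter. The following is only a rough sketch of that shape and is not taken from this book; every name, credential, and thumbprint below is a placeholder, and the destination objects are looked up through a second connection to the target vCenter Server.

    import ssl
    from pyVim.connect import SmartConnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Assumes si, content and find_by_name() (source vCenter) from the Compute vMotion sketch.
    # Hypothetical second connection to the destination vCenter Server.
    ctx = ssl._create_unverified_context()
    dest_si = SmartConnect(host='vcenter02.example.com',
                           user='administrator@vsphere.local',
                           pwd='VMware1!', sslContext=ctx)
    dest_content = dest_si.RetrieveContent()

    def find_in_dest(vimtype, name):
        """Look up an inventory object by name in the destination vCenter."""
        view = dest_content.viewManager.CreateContainerView(dest_content.rootFolder, [vimtype], True)
        return next(obj for obj in view.view if obj.name == name)

    dest_host = find_in_dest(vim.HostSystem, 'esxi10.example.com')   # placeholder host
    dest_ds = find_in_dest(vim.Datastore, 'datastore10')             # placeholder datastore
    dest_folder = find_in_dest(vim.Folder, 'vm')                     # destination VM folder

    # The service locator points the source vCenter at the destination vCenter.
    service = vim.ServiceLocator(
        instanceUuid=dest_content.about.instanceUuid,
        url='https://vcenter02.example.com',
        sslThumbprint='AA:BB:CC:...',                                # destination SSL thumbprint
        credential=vim.ServiceLocatorNamePassword(
            username='administrator@vsphere.local', password='VMware1!'))

    vm = find_by_name(vim.VirtualMachine, 'app01')
    spec = vim.vm.RelocateSpec(service=service, host=dest_host,
                               pool=dest_host.parent.resourcePool,
                               datastore=dest_ds, folder=dest_folder)
    WaitForTask(vm.RelocateVM_Task(spec=spec))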

Long Distance vMotion allows migrating virtual machines over distances with a latency not exceeding 150 milliseconds. Prior to vSphere 6.0, the maximum supported network latency for vMotion was 10 milliseconds.

Using the provisioning interface

You can configure a provisioning interface to carry all the non-active data of the virtual machine being migrated. Prior to vSphere 6.0, vMotion used the VMkernel interface that has the default gateway configured on it (in most cases the management interface, vmk0) to transfer non-performance-impacting vMotion data. Non-performance-impacting vMotion data includes the virtual machine's home directory, the older deltas in the snapshot chain, the base disks, and so on; only the live data hits the vMotion interface. The provisioning interface is simply a VMkernel interface with the provisioning traffic service enabled on it. The procedure is very similar to configuring a VMkernel interface for management or vMotion traffic: edit the settings of the intended vmk interface and set Provisioning traffic as the enabled service:

[Screenshot: Using the provisioning interface]

It is important to keep in mind that the provisioning interface is not meant just for vMotion data; if enabled, it will also be used for cold migrations, cloning operations, and virtual machine snapshots. The provisioning interface can also be configured to use a gateway other than the VMkernel default gateway.
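The same setting can be applied through the vSphere API: each host's VirtualNicManager lets you select a VMkernel adapter for a given traffic type. Below is a minimal pyVmomi sketch, reusing the connection and find_by_name helper from the Compute vMotion example above; the host name and the vmk2 interface are hypothetical, and 'vSphereProvisioning' is the API's name for the provisioning traffic type.

    from pyVmomi import vim

    # Assumes si, content and find_by_name() from the Compute vMotion sketch above.
    host = find_by_name(vim.HostSystem, 'esxi01.example.com')   # hypothetical host name

    # Tag vmk2 (assumed to already exist) for provisioning traffic.
    vnic_mgr = host.configManager.virtualNicManager
    vnic_mgr.SelectVnicForNicType('vSphereProvisioning', 'vmk2')

    # Verify: list the vnic keys currently selected for provisioning traffic.
    net_config = vnic_mgr.QueryNetConfig('vSphereProvisioning')
    print(net_config.selectedVnic)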

Enabling vMotion

vMotion traffic must be enabled on a VMkernel interface to make an ESXi host vMotion capable. The procedure is fairly straightforward and can be done using either the vSphere Client or the vSphere Web Client. Although we will discuss how this is done using the vCenter GUI, it is not mandatory to use vCenter to enable vMotion (a scripted alternative is shown after the steps below):

  1. Connect to the vCenter server using the vSphere Web Client.
  2. In the Hosts and Clusters inventory, select an ESXi host, navigate to Manage | Networking | VMkernel adapters, and select the vmk interface on which to enable vMotion.
  3. With the vmk interface selected, click on the pencil icon to bring up the Edit Settings window.
  4. In the Edit Settings window, select vMotion traffic as the enabled service and click OK:

    [Screenshot: Enabling vMotion]
  5. Repeat steps 2 through 4 to enable vMotion on all the hosts in the cluster.
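If you would rather script step 5 than repeat it host by host, the same VirtualNicManager call shown for the provisioning interface can tag a VMkernel adapter for vMotion on every host in a cluster. A minimal sketch, assuming each host already has a vmk1 interface intended for vMotion; the cluster name and device name are hypothetical, and the connection helper comes from the Compute vMotion sketch above.

    from pyVmomi import vim

    # Assumes si, content and find_by_name() from the Compute vMotion sketch above.
    cluster = find_by_name(vim.ClusterComputeResource, 'Prod-Cluster')   # hypothetical name

    for host in cluster.host:
        # Tag vmk1 (assumed to exist on every host) for vMotion traffic.
        host.configManager.virtualNicManager.SelectVnicForNicType('vmotion', 'vmk1')
        print('vMotion enabled on vmk1 of', host.name)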

Enabling Multi-NIC vMotion

vMotion can be configured to use more than one physical NIC for traffic. Regardless of whether this has been configured on a VDS or a standard vSwitch, multi-NIC vMotion is achieved by configuring two separate vmkernel interfaces with vMotion traffic enabled on them.

Here is how it is done:

  1. Identify the physical adapters (vmnics) cabled to pass the vMotion traffic on.
  2. Create two separate VMkernel interfaces with vMotion traffic enabled on each of them.
  3. Now, on the port groups corresponding to them, set the vmnics in an active/standby configuration (an API sketch of this step follows the list).

    For instance, if you were to use vmnic4 and vmnic5, then create two separate VMkernel interfaces, vmk2 and vmk3, and configure the failover order as follows:

    vmk2 port group: vmnic4 (Active), vmnic5 (Standby)
    vmk3 port group: vmnic5 (Active), vmnic4 (Standby)

  4. You will have to repeat steps 1 through 3 on all the ESXi hosts, if the standard switch is being used.
  5. If a VDS is being used, then you will have to configure NIC teaming only once at the dvPortGroup level. When you create vmkernel interfaces on the ESXi hosts, you will need to assign them to the dvPortGroup that was already created.
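On a standard vSwitch, the failover order above translates into the NIC ordering policy of each vMotion port group. The following is a hedged pyVmomi sketch of what that looks like, assuming port groups named vMotion-A (backing vmk2) and vMotion-B (backing vmk3) already exist; the host and port group names are hypothetical, and the connection helper again comes from the Compute vMotion sketch above.

    from pyVmomi import vim

    # Assumes si, content and find_by_name() from the Compute vMotion sketch above.
    host = find_by_name(vim.HostSystem, 'esxi01.example.com')   # hypothetical host name
    net_sys = host.configManager.networkSystem

    def set_failover(pg_name, active, standby):
        """Rewrite an existing port group spec with an explicit active/standby NIC order."""
        pg = next(p for p in net_sys.networkInfo.portgroup if p.spec.name == pg_name)
        spec = pg.spec
        spec.policy = vim.host.NetworkPolicy(
            nicTeaming=vim.host.NetworkPolicy.NicTeamingPolicy(
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=[active], standbyNic=[standby])))
        net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

    set_failover('vMotion-A', 'vmnic4', 'vmnic5')   # port group backing vmk2
    set_failover('vMotion-B', 'vmnic5', 'vmnic4')   # port group backing vmk3

If a VDS is used instead, the equivalent teaming and failover change is made once at the dvPortGroup level, as noted in step 5.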

Performing a vMotion

vMotion can only be performed using the vCenter server. All the types of vMotion are achieved using the same migration wizard.

The migration wizard will present you with three migration types as options and those are:

  • Change compute resource only
  • Change storage only
  • Change both compute resource and storage

[Screenshot: Performing a vMotion]

Any migration type that involves a compute migration will present a wizard option to choose the destination port group that the virtual machine will be connected to after a successful migration. Although vCenter handles reconnecting the VM to the chosen port group, it does not change the virtual machine's IP address if the destination port group is on a different layer-2 network; this has to be done manually by the administrator.

You also get to set a priority for the vMotion operation by choosing between high and normal priority. The priority determines the amount of CPU resources allocated to the vMotion operation, and the priority of each vMotion task is relative to the priority set on the other vMotion tasks.

The following walk-through will help you visualize how a migration is initiated for a virtual machine:

  1. Connect to the vCenter server using the vSphere Web Client.
  2. Select the virtual machine from the inventory, right-click on it, and select Migrate.
  3. In the Migrate wizard screen, choose a migration type as per your requirement and click Next to continue:

    [Screenshot: Performing a vMotion]
  4. In the Select a compute resource screen, select a destination host or cluster (available from more than one vCenter if they are in enhanced linked mode) and click Next.
  5. In the Select network screen, choose the destination port group for the virtual machine if required, and click Next.
  6. In the Select vMotion priority screen, set a priority for the task and click Next.
  7. In the Ready to complete screen, review the migration options selected and click Finish to start the migration.
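When the same migrations are started through the API, as in the earlier sketches, the Task object that is returned can be polled to follow the migration much like the wizard's recent-tasks pane. A small sketch of such a helper; the commented usage line assumes the vm and dest_host objects from the Compute vMotion example.

    import time
    from pyVmomi import vim

    def follow_task(task):
        """Poll a vSphere task and print its progress until it completes."""
        while task.info.state in (vim.TaskInfo.State.queued, vim.TaskInfo.State.running):
            print('migration progress: {}%'.format(task.info.progress or 0))
            time.sleep(5)
        if task.info.state == vim.TaskInfo.State.error:
            raise task.info.error        # surface the fault reported by vCenter
        print('migration finished:', task.info.state)

    # Example (objects from the Compute vMotion sketch above):
    # follow_task(vm.MigrateVM_Task(host=dest_host,
    #                               priority=vim.VirtualMachine.MovePriority.defaultPriority))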

Enhanced vMotion Compatibility

In a vSphere environment, as your clusters scale out, even though it is preferable to add processors of the same make and model, you could end up with hosts in a cluster whose processor feature sets do not match. For Compute vMotion to work, and for the migrated VM to function reliably on the destination host, the processor feature set exposed by all the ESXi hosts in the cluster must be identical. With Enhanced vMotion Compatibility (EVC), you can present a common feature set to all the virtual machines in the cluster. VMware has made several baselines available to choose from, for both AMD and Intel processors; the baselines are generally categorized by CPU generation.
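Before choosing a baseline, it can help to check which EVC mode each host's hardware is capable of and what baseline, if any, the cluster currently runs. A minimal sketch using the summary properties of the cluster and its hosts; the cluster name is hypothetical and the connection helper comes from the Compute vMotion sketch above.

    from pyVmomi import vim

    # Assumes si, content and find_by_name() from the Compute vMotion sketch above.
    cluster = find_by_name(vim.ClusterComputeResource, 'Prod-Cluster')   # hypothetical name

    # currentEVCModeKey is empty when EVC is not enabled on the cluster.
    print('Current cluster EVC mode:', cluster.summary.currentEVCModeKey)
    for host in cluster.host:
        # maxEVCModeKey is the highest EVC baseline the host's CPU can support.
        print(host.name, '->', host.summary.maxEVCModeKey)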

The following table lists the available baselines with vSphere 6.x for both Intel and AMD:

[Table: EVC baselines available with vSphere 6.x for Intel and AMD processors]

Enabling EVC

Since EVC enables a common processor baseline, it is inevitable that some of the running virtual machines are using processor features that will become unavailable when you present a new baseline.

For the virtual machines to see the features available in a baseline, a reboot is required. Also, you will not be allowed to apply an EVC baseline lower than the current hardware processor generation until you evacuate the running and suspended virtual machines from the hosts.

Here is a sample warning message that prevents setting the chosen lower baseline:

[Screenshot: Enabling EVC]

To minimize downtime, you could migrate the virtual machines to another cluster or to a set of standby hosts before you enable the EVC baseline on the cluster. In cases where you do not have a cluster or standby hosts to migrate the virtual machines to, you will have to create a new EVC-enabled cluster and start moving evacuated (maintenance mode) hosts into it. As you progress with moving hosts, you can schedule downtime for sets of virtual machines and restart them on the hosts in the new EVC cluster. If you are setting a processor baseline that has more features than the current one, the virtual machines do not need to be evacuated; but, again, a reboot is necessary before a virtual machine can see any new or differently presented features.

The following walk-through will help you visualize the procedure involved in enabling EVC (a programmatic alternative is shown after the walk-through). The steps recommended here will require a scheduled or an immediate downtime, depending on the baseline level you choose to apply. Choosing a lower baseline means you will have to plan for downtime at the same time as you configure EVC, whereas choosing a higher baseline allows you enough time to plan for a scheduled reboot at a later date. However, it is recommended to reboot each virtual machine once it has been moved to an EVC cluster:

  1. Connect to the vCenter server using the vSphere Web Client and navigate to the Hosts and Clusters view.
  2. Migrate all the virtual machines to another cluster or to standalone hosts, or shut them down.
  3. Select the cluster to enable EVC on, navigate to Manage | Settings | VMware EVC, and click on Edit to bring up the Change EVC Mode window:

    [Screenshot: Enabling EVC]
  4. In the Change EVC Mode window, select an EVC baseline that matches the make of the host's processor hardware. You will be presented with three options:
    • Disable EVC
    • Enable EVC for AMD Hosts
    • Enable EVC for Intel® Hosts

    [Screenshot: Enabling EVC]

    You cannot apply an AMD baseline to Intel or vice versa. Click OK if the validation succeeds.

  5. Power off the migrated VMs (if you have not already done so), move them back to the EVC cluster, and power them on.

    Tip

    Note that you cannot apply an EVC baseline if the underlying physical processor doesn't support the features in the baseline.
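For completeness, EVC can also be configured programmatically through the cluster's EVC manager. The sketch below lists the baselines the cluster's hosts can currently accept and then applies one; the cluster name and the baseline key are placeholders (confirm the key against the list printed first), and the connection helper comes from the Compute vMotion sketch above.

    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Assumes si, content and find_by_name() from the Compute vMotion sketch above.
    cluster = find_by_name(vim.ClusterComputeResource, 'Prod-Cluster')   # hypothetical name
    evc_mgr = cluster.EvcManager()

    # List the EVC baseline keys this cluster's hosts can currently accept.
    for mode in evc_mgr.evcState.supportedEVCMode:
        print(mode.key)

    # Apply a baseline (placeholder key -- pick one of the keys printed above).
    WaitForTask(evc_mgr.ConfigureEvcMode_Task('intel-haswell'))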
