The VMware ESXi hosts are installed, vCenter Server is running, the networks are blinking, the storage is carved, and the VMFS volumes are formatted. Let the virtualization begin! With the virtual infrastructure in place, you as the administrator must shift your attention to deploying the virtual machines.
It is common for IT professionals to refer to a Windows or Linux system running on an ESXi host as a virtual machine (VM). Strictly speaking, this term is not 100% accurate. Just as a physical machine is bare-metal hardware before the installation of an operating system, a VM is an empty shell before the installation of a guest operating system (the term “guest operating system” is used to denote an operating system instance installed into a VM). From an everyday usage perspective, though, you can go on calling the Windows or Linux system a VM. Any references you see to “guest operating system” (or “guest OS”) are references to instances of Windows, Linux, or Solaris—or any other supported operating system—installed in a VM.
If a VM is not an instance of a guest OS running on a hypervisor, then what is a VM? The answer to that question depends on your perspective. Are you “inside” the VM, looking out? Or are you “outside” the VM, looking in?
From the perspective of software running inside a VM, a VM is really just a collection of virtual hardware resources selected for the purpose of running a guest OS instance.
So, what kind of virtual hardware makes up a VM? By default, VMware ESXi presents the following fairly generic hardware to the VM:
VMware selected this generic hardware to provide the broadest possible compatibility across the entire range of supported guest OSs. As a result, it's possible to use commercial off-the-shelf drivers when installing a guest OS into a VM. Figure 9.1 shows a few examples of VMware vSphere providing virtual hardware that looks like standard physical hardware. Both the network adapter and the storage adapter—identified as an Intel(R) 82574L Gigabit Network Connection and an LSI SAS 3000 series adapter, respectively—have corresponding physical counterparts, and drivers for these devices are available in many modern guest OSs.
However, VMware vSphere may also present virtual hardware that is unique to the virtualized environment. Look back at the display adapter in Figure 9.1. There is no such physical card as a VMware SVGA 3D display adapter; this is a device that is unique to the virtualized environment. These virtualization-optimized devices, also known as paravirtualized devices, are designed to operate efficiently within the virtualized environment created by the vSphere hypervisor. Because these devices have no corresponding physical counterpart, guest OS–specific drivers must be provided separately. VMware Tools, described later in this chapter in the section “Installing VMware Tools,” fills this role by providing virtualization-optimized drivers for these devices.
A physical machine might have a certain amount of memory installed, a certain number of network adapters, or a particular number of disk devices, and the same goes for a VM. A VM can include the following types and numbers of virtual hardware devices:
Hard drives are not included in this list because VM hard drives are generally added as SCSI or SATA (AHCI) devices. A VM can have up to 4 SCSI controllers with 15 devices per controller, for a total of 60 SCSI devices per VM (or up to 256 devices when using PVSCSI controllers), and the VM can boot only from one of the first 8 SCSI devices. Each VM can also have a maximum of 4 SATA controllers with 30 devices per controller, for a total of 120 possible virtual hard drives or CD/DVD drives. If you are using IDE hard drives, the VM is subject to the limit of 4 IDE devices per VM, as mentioned previously.
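These per-controller maximums are easy to tabulate; the short Python sketch below simply encodes the figures described above (vSphere 6.7 / VM hardware version 14 era numbers; verify them against VMware's configuration-maximums documentation for your release):

```python
# Per-VM virtual storage controller limits as described in this section.
CONTROLLER_LIMITS = {
    "scsi": {"controllers": 4, "devices_per_controller": 15},
    "sata": {"controllers": 4, "devices_per_controller": 30},
    "ide":  {"controllers": 2, "devices_per_controller": 2},
}

def max_devices(bus: str) -> int:
    """Total virtual disk/optical devices of one bus type per VM."""
    lim = CONTROLLER_LIMITS[bus]
    return lim["controllers"] * lim["devices_per_controller"]

print(max_devices("scsi"))  # 4 controllers x 15 devices = 60
print(max_devices("sata"))  # 4 controllers x 30 devices = 120
print(max_devices("ide"))   # 2 channels x 2 devices = 4
```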
There's another perspective on VMs besides what the guest OS instance sees. There's also the external perspective—what does the hypervisor see?
To better understand what a VM is, you must consider more than just how a VM appears from the perspective of the guest OS instance (for example, from the “inside”), as we've just done. You must also consider how a VM appears from the “outside.” In other words, you must consider how the VM appears to the ESXi host running the VM.
From the perspective of an ESXi host, a VM consists of several types of files stored on a supported storage device. The two most common files that compose a VM are the configuration file and the virtual hard disk file. The configuration file—hereafter referred to as the VMX file—is a plain-text file identified by a .vmx filename extension, and it functions as the virtual resource recipe of the VM. The VMX file defines the virtual hardware that resides in the VM: the number of processors, the amount of RAM, the number of network adapters, the associated MAC addresses, the networks to which the network adapters connect, and the number, names, and locations of all virtual hard drives are stored in the configuration file.
Listing 9.1 shows a sample VMX file for a VM named Win2k16-01.
Reading through the Win2k16-01.vmx file, you can determine the following facts about this VM:

- From the guestOS line, you can see that the VM is configured for a guest OS referred to as “windows9srv-64”; this corresponds to Windows Server 2016 64-bit.
- From the memsize line, you know the VM is configured for 8 GB of RAM.
- The scsi0:0.fileName line tells you the VM's hard drive is located in the file Win2k16-01.vmdk.
- The VM has a floppy drive, per the floppy0 lines, but it does not start connected (see floppy0.startConnected).
- The VM's network adapter connects to the “dvportgroup-37” port group, based on the ethernet0 lines.
- From the ethernet0.generatedAddress line, the VM's single network adapter has an automatically generated MAC address of 00:50:56:b1:17:84.

Although the VMX file is important, it is only the structural definition of the virtual hardware that composes the VM. It does not store any actual data from the guest OS instance running inside the VM. A separate type of file, the virtual hard disk file, performs that role.
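Because a VMX file is just plain text made of simple key = "value" entries, it's straightforward to inspect programmatically. The following is an illustrative sketch, not a VMware-supplied tool; the keys in the sample string mirror those discussed above:

```python
def parse_vmx(text: str) -> dict:
    """Parse simple key = "value" lines from a VMX configuration."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and non key/value lines
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

sample = '''
guestOS = "windows9srv-64"
memsize = "8192"
scsi0:0.fileName = "Win2k16-01.vmdk"
ethernet0.generatedAddress = "00:50:56:b1:17:84"
'''
cfg = parse_vmx(sample)
print(cfg["memsize"])  # 8192 -> the VM is configured with 8 GB of RAM
```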
The virtual hard disk file, identified by a .vmdk filename extension and hereafter referred to as the VMDK file, holds the actual data stored by a VM. Each VMDK file represents a disk device. For a VM running Windows, the first VMDK file would typically be the storage location for the C: drive. For a Linux system, it would typically be the storage location for the root, boot, and a few other partitions. Additional VMDK files can be added to provide additional storage locations for the VM, and each VMDK file will appear as a physical hard drive to the VM.
Although we refer to a virtual hard disk file as a VMDK file, in reality two different files compose a virtual hard disk. Both of them use the .vmdk filename extension, but each performs a very different role: one is the VMDK descriptor file, and the other is the VMDK flat file. There's a good reason why we—and others in the virtualization space—refer to a virtual hard disk file as a VMDK file, though, and Figure 9.2 helps illustrate why.
Looking closely at Figure 9.2, you'll see only a single VMDK file listed. In actuality, though, there are two files, but to see them you must go to a command-line interface. From there, as shown in Figure 9.3, you'll see the two different VMDK files: the VMDK descriptor (the smaller of the two) and the VMDK flat file (the larger of the two and the one that has -flat in the filename).
Of these two files, the VMDK descriptor file is a plain-text file and is human-readable; the VMDK flat file is a binary file and is not human-readable. The VMDK descriptor file contains only configuration information and pointers to the flat file; the VMDK flat file contains the actual data for the virtual hard disk. Naturally, this means that the VMDK descriptor file is typically very small, whereas the VMDK flat file could be as large as the configured virtual hard disk in the VMX. So, a 40 GB virtual hard disk could mean a 40 GB VMDK flat file, depending on other configuration settings you'll see later in this chapter.
Listing 9.2 shows the contents of a sample VMDK descriptor file.
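If Listing 9.2 isn't handy, the general shape of a descriptor file looks something like the following abbreviated, illustrative sketch (the field values here are hypothetical and vary by VM):

```
# Disk DescriptorFile
version=1
CID=fb183c20
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 83886080 VMFS "Win2k16-01-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsisas1068"
ddb.virtualHWVersion = "14"
```

Note the extent line: 83,886,080 sectors of 512 bytes each equals a 40 GB disk, and the line points to the -flat file that actually holds the data.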
There are several other types of files that make up a VM. For example, when the VM is running there will most likely be a VSWP file, which is a VMkernel swap file. You'll learn more about VMkernel swap files in Chapter 11, “Managing Resource Allocation.” There will also be an NVRAM file, which stores the VM's BIOS settings.
Now that you have a feel for what makes up a VM, let's get started creating some VMs.
Creating VMs is a core part of using VMware vSphere, and VMware has made the process as easy and straightforward as possible. Let's walk through the process, and we'll explain the steps along the way.
Perform the following steps to create a VM from scratch:
As you can see in Figure 9.7, the vSphere Web Client shows a fair amount of information about the datastores (size, provisioned space, free space, type of datastore). However, the vSphere Web Client doesn't show information such as IOPS capacity or other performance statistics. In Chapter 6, we discussed storage service levels, which allow you to create VM storage policies based on storage attributes provided to vCenter Server by the storage vendor (as well as user-defined storage attributes created and assigned by the vSphere administrator). In Figure 9.7, you can see the VM Storage Policy drop-down list, which lists the currently defined storage service levels.
When you select a storage service level, the datastore listing will separate into two groups: compatible and incompatible. Compatible datastores are datastores whose attributes or capabilities satisfy the storage service level as defined in the VM Storage Policies; incompatible datastores are datastores whose attributes do not meet the criteria specified in the storage service level. Figure 9.8 shows a storage service level selected and a compatible datastore selected for this VM's storage.
For more information on VM storage policies, refer to Chapter 6.
You can select between 1 and 128 virtual CPU sockets, depending on your vSphere license. Additionally, you can choose the number of cores per virtual CPU socket. The total number of cores supported per VM with VM hardware version 14 is 128. The number of cores available per virtual CPU socket will change based on the number of virtual CPU sockets selected. For specific information about how many virtual cores are available per virtual CPU socket, refer to docs.vmware.com.
Keep in mind that the operating system you will install into this VM must support the selected number of virtual CPUs. Also keep in mind that more virtual CPUs doesn't necessarily translate into better performance, and in some cases larger values may negatively impact performance.
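The relationship is simply sockets × cores per socket = total vCPUs, with the total capped at 128 for VM hardware version 14. Here's a quick sketch of the valid combinations, assuming only that the product may not exceed the cap (licensing may impose lower limits):

```python
MAX_VCPUS = 128  # per-VM cap for VM hardware version 14

def valid_socket_layouts(total_vcpus: int):
    """Yield (sockets, cores_per_socket) pairs that produce total_vcpus."""
    if not 1 <= total_vcpus <= MAX_VCPUS:
        raise ValueError("unsupported vCPU count")
    for cores in range(1, total_vcpus + 1):
        if total_vcpus % cores == 0:
            yield total_vcpus // cores, cores

print(list(valid_socket_layouts(8)))
# [(8, 1), (4, 2), (2, 4), (1, 8)]
```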
As shown in Figure 9.9, the vSphere Web Client displays recommendations about the minimum and recommended amounts of RAM based on the earlier selection of operating system and version. This is one of the reasons the selection of the correct guest OS is important when creating a VM.
The amount of RAM configured on this page is the amount of RAM the guest OS reflects in its system properties, and it is the maximum amount that a guest OS will ever be able to use. Think of it as the virtual equivalent of the amount of physical RAM installed in a system. Just as a physical machine cannot use more memory than is physically installed in it, a VM cannot access more memory than it is configured to use.
The correct default driver should already be selected based on the previously selected operating system. For example, the LSI Logic parallel adapter is selected automatically when Windows Server 2003 is selected as the guest OS, but the LSI Logic SAS adapter is selected when Windows Server 2008, 2012, or 2016 is chosen as the guest OS. We provided some additional details on the different virtual SCSI adapters in Chapter 6.
You are presented with the following options for adding a virtual disk to your VM.
Since a virtual hard disk is already configured by default, we'll use it to install our guest OS and we won't need to add another virtual disk.
Depending on your storage platform, storage type, and storage vendor's support for vSphere's storage integration technologies like VAAI or VASA, some of these options might be grayed out. For example, an NFS datastore that does not support the VAAIv2 extensions will have these options grayed out, as only thin-provisioned VMDKs are supported. (VAAI and VASA are discussed in greater detail in Chapter 6.)
There are two options for the location of the new virtual disk. These options are available by selecting the drop-down box next to the Location field. Keep in mind that these options control physical location, not logical location; they will directly affect the datastore and/or directory where files are stored for use by the hypervisor.
You can configure other options, such as shares, limits, or Virtual Flash sizing (discussed in greater detail in Chapter 6) for the virtual machine you are creating, if required.
When you are done adding or modifying the configuration of the virtual machine, select Next to continue.
As you can see, the process for creating a VM is pretty straightforward. What's not so straightforward, though, are some of the values that should be used when creating new VMs. What are the best values to use?
Choosing the right values to use for the number of virtual CPUs, the amount of memory, or the number or types of virtual NICs when creating your new VM can be difficult. Fortunately, there's lots of documentation out there on CPU and RAM sizing as well as networking for VMs, so our only recommendation is to right-size the VMs based on your needs (see the sidebar “Provisioning Virtual Machines Is Not the Same as Provisioning Physical Machines,” later in this chapter).
For areas other than the ones we just described, the guidance isn't quite so clear. Out of all the options available during the creation of a new VM, four areas tend to consistently generate questions from both new and experienced users alike:
Let's talk about each of these questions in a bit more detail.
You might be hoping that we'll give you specific guidance here about how to size your virtual machines. Unfortunately virtual machine sizing differs greatly depending on the environment, the applications installed on the virtual machines, performance requirements, and many other factors. Instead, it's better to discuss a methodology you can use to understand the resource utilization requirements (CPU, memory, disk, and network) of your physical servers before redeploying them as virtual machines.
Simply sizing your virtual machines to the same specifications used for physical servers can lead to oversizing (or undersizing) virtual machines unnecessarily. Both oversizing and undersizing a virtual machine can lead to performance problems for that virtual machine and for other virtual machines on the same ESXi host. Not correctly sizing your virtual machines can negatively impact consolidation ratios, too, ultimately requiring your cluster(s) to scale up or scale out.
Instead, a process called capacity planning can help you understand how to size your virtual machines. With capacity planning you learn over time how your current physical servers are utilized and then use that information to size your virtual machines. A typical capacity planning exercise takes place over a two-to-four-week period and uses tools to automatically monitor and report on the performance of physical servers. By monitoring your servers over time, such as over a 30-day period, you can capture normal business cycles such as end-of-month processing that you might otherwise miss if you monitor for only a short time.
Two of the most common tools used for capacity planning are free, though as you'll see, one of them is not available to everyone. Perhaps the most well-known is a product by VMware called Capacity Planner. The other product, Microsoft Assessment and Planning Toolkit, may not be as well known, but it's still a useful tool.
Both tools produce similar results, such as average and maximum utilization values for CPU, memory, disk, network, and other more specific performance counters. Capacity Planner is customizable and allows you to add custom performance counters to monitor beyond the standard Windows or application counters. Microsoft Assessment and Planning Toolkit is less customizable, but it includes advanced reporting for Microsoft applications (like SQL Server or SharePoint Server) that are useful if you're looking to virtualize these applications.
The process for using these tools is also similar for both. After running the capacity planning analysis over time, you review the results to understand the actual utilization of your servers. These tools also allow you to produce reports that tell you how many ESXi hosts (or in the case of Microsoft Assessment and Planning Toolkit, Hyper-V hosts, though the results are applicable to ESXi as well) you'll need to support the environment. For example, if you monitor 70 total physical servers, the tools may tell you that, based on the actual utilization of each server, you need only 7 total ESXi hosts to support those servers as virtual machines. Your results will vary depending on the actual utilization in your environment.
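The consolidation math behind such a report can be sketched in back-of-the-envelope form: sum the measured utilization of the candidate servers, divide by usable per-host capacity, and leave headroom. This is an illustrative sketch, not how Capacity Planner itself computes its recommendations:

```python
import math

def estimate_hosts(server_cpu_ghz, host_capacity_ghz, headroom=0.2):
    """Rough host-count estimate from measured per-server CPU demand.

    headroom reserves a fraction of each host for spikes and failover.
    """
    usable = host_capacity_ghz * (1 - headroom)
    return math.ceil(sum(server_cpu_ghz) / usable)

# 70 servers, each actually using ~2 GHz of CPU, onto hypothetical
# hosts offering 25 GHz of aggregate CPU capacity each
demands = [2.0] * 70
print(estimate_hosts(demands, host_capacity_ghz=25.0))  # 7
```

In practice you would repeat the same calculation for memory, disk, and network, then take the largest resulting host count.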
Capacity planning is such a useful exercise because it tells you the true utilization of your servers before you convert them to virtual machines. Let's say you have a physical server with two CPUs, each with eight cores and 64 GB of RAM, and that server runs Microsoft SQL Server. You might think that because SQL Server is typically an important application, the server must be fully utilized. In reality, a capacity planning exercise may reveal that the server uses only two CPU cores and 8 GB of RAM. When you virtualize that server, you can reduce the resources down to what the server actually uses and save resources for other virtual machines.
Whether you're just starting out on your virtualization journey or you're moving on to virtualizing more critical applications, capacity planning will provide valuable information for properly sizing your virtual machines. Without performing a capacity planning exercise, you are mostly just guessing at how many ESXi hosts you'll need to support the environment or how to properly size your virtual machines.
Choosing the display name for a VM might seem like a trivial assignment, but you must ensure that an appropriate naming strategy is in place. We recommend making the display names of VMs match the hostnames configured in the guest OS being installed. For example, if you intend to use the name Server1 in the guest OS, the VM display name should match Server1.
It's important to note that if you use spaces in the virtual display name—which is allowed—then using command-line tools to manage VMs becomes a bit tricky because you must quote out the spaces on the command line. In addition, because DNS hostnames cannot include spaces, using spaces in the VM name would create a disparity between the VM name and the guest OS hostname. Ultimately, this means you should avoid using spaces and special characters that are not allowed in standard DNS naming strategies to ensure similar names both inside and outside the VM. Aside from whatever policies might be in place from your organization, this is usually a matter of personal preference.
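You can encode that naming guidance as a simple check. This sketch flags display names containing spaces or characters outside the letter/digit/hyphen set allowed in a DNS hostname label:

```python
import re

# RFC 952/1123-style hostname label: letters, digits, and hyphens only,
# starting and ending with an alphanumeric character, max 63 characters.
_DNS_LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

def dns_safe_vm_name(name: str) -> bool:
    """True if the VM display name can double as a DNS hostname."""
    return bool(_DNS_LABEL.match(name))

print(dns_safe_vm_name("Win2k16-01"))     # True
print(dns_safe_vm_name("File Server 1"))  # False: spaces break DNS names
```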
The display name assigned to a VM also becomes the name of the folder in the VMFS volume where the VM files will live. At the file level, the associated configuration (VMX) and virtual hard drive (VMDK) files will assume the name supplied in the display name text box during VM creation. Refer to Figure 9.15, where you can see that the user-supplied name of Win2k16-01 is reused for both the folder name and the filenames for the VM. If a VM is renamed at any stage, all the associated files will retain the original name until a Storage vMotion of the VM occurs. Once the Storage vMotion is complete, the majority of the files associated with that VM adopt the new VM name. Note that file and VM names are case sensitive. You will learn more about Storage vMotion in Chapter 12, “Balancing Resource Utilization.”
The answer to the third question—how big to make the hard disks in your VM—is a bit more complicated. There are many different approaches, but some best practices facilitate the management, scalability, and backup of VMs. First, you should create VMs with multiple virtual disk files to separate the operating system from the custom user/application data. Separating the system files and the user/application data will make it easy to increase the number of data drives in the future and allow a more practical backup strategy. A system drive of 30 GB to 40 GB, for example, usually provides ample room for installation and continued growth of the operating system. The data drives across different VMs will vary in size because of underlying storage system capacity and functionality, the installed applications, the function of the system, and the number of users who connect to the computer. However, because the extra hard drives are not operating system data, it will be easier to adjust those drives when needed.
Keep in mind that additional virtual hard drives pick up the same naming scheme as the original virtual hard drive. For example, a VM whose original virtual hard disk file is named Win2k16-01.vmdk will name the next virtual hard disk file Win2k16-01_1.vmdk. For each additional file, the trailing number is incremented, making it easy to identify all virtual disk files related to a particular VM. Figure 9.16 shows a VM with two virtual hard disks so that you can see how vSphere handles the naming for additional virtual hard disks.
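That incrementing scheme is easy to reproduce. Here's a hedged sketch of how the next disk filename could be derived from the existing ones (illustrative only, not vSphere's internal logic):

```python
def next_vmdk_name(base: str, existing: list) -> str:
    """Return the next virtual disk filename following vSphere's
    base.vmdk, base_1.vmdk, base_2.vmdk, ... naming pattern."""
    if f"{base}.vmdk" not in existing:
        return f"{base}.vmdk"
    n = 1
    while f"{base}_{n}.vmdk" in existing:
        n += 1
    return f"{base}_{n}.vmdk"

disks = ["Win2k16-01.vmdk"]
print(next_vmdk_name("Win2k16-01", disks))  # Win2k16-01_1.vmdk
```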
In Chapter 10, “Using Templates and vApps,” we'll revisit the process of creating VMs to see how to use templates to implement and maintain an optimal VM configuration that separates the system data from the user/application data. At this point, though, now that you've created a VM, you're ready to install the guest OS into the VM.
Depending on what kind of virtual machines you're deploying in your environment, you may need to think about graphics performance. For backend systems, such as database systems or email platforms, the graphics performance of the virtual machine is not important and is not something you typically have to worry about. If you're deploying a virtual desktop infrastructure (VDI), however, the graphics performance and capabilities of the virtual machine are likely to be a key consideration.
For VDI solutions like VMware Horizon View, end users no longer run a full desktop or laptop but instead connect to their virtual desktop (running on vSphere) from a variety of endpoint devices. These devices could be laptops, desktops, thin or zero clients, or even tablets and smartphones. The virtual desktop often acts as a complete desktop replacement for end users, so the desktop needs to perform as well as (or better than) the physical hardware that is being replaced.
To provide high-end graphics capabilities to virtual machines, vSphere 6 introduced Virtual Shared Graphics Acceleration (vSGA), and vSphere 6.5 added vGPU support in partnership with NVIDIA. This technology allows you to install physical graphics cards of a specific type into your ESXi host and then offload 3D rendering to the physical graphics cards instead of the host CPUs. This offloading helps reduce overall CPU utilization by letting hardware that is purpose-built for rendering graphics do the processing. vSphere 6.7 introduced additional functionality for vGPU users with regard to VM mobility: provided the latest hardware and driver VIBs are loaded on each ESXi host, VMs can now be vMotioned between hosts without needing to be powered off.
Although the 3D rendering settings are configured in the settings of a virtual machine, they are intended only for use with VMware Horizon. If you are using a VDI solution other than VMware Horizon, speak to the vendor to learn if 3D rendering on vSphere is supported.
A new VM is analogous to a physical computer with an empty hard drive. All the components are there but without an operating system. After creating the VM, you're ready to install a supported guest OS. The following OSs are some of the more commonly installed guest OSs supported by ESXi (this is not a comprehensive list as there are over 200 supported OSs listed on the vSphere 6.7 Guest OS Compatibility Guide):
Installing any of these supported guest OSs follows the same common order of steps for installation on a physical server, but the nuances and information provided during the install of each guest OS might vary greatly. Because of the differences involved in installing different guest OSs or different versions of a guest OS, we won't go into any detail on the actual guest OS installation process. We'll leave that to the guest OS vendor. Instead, we'll focus on guest OS installation tasks that are specific to a virtualized environment.
In the physical world, administrators typically put the OS installation media in the physical server's optical drive, install the OS, and then are done with it. Well, in a virtual world, the process is similar, but here's the issue—where do you put the CD when the server is virtual? There are a couple of ways to handle it. One way is quick and easy, and the other takes a bit longer but pays off later.
VMs can access data stored on optical disks in one of three ways (Figure 9.17 shows the Datastore ISO File option selected):
ISO images are the recommended way to install a guest OS because they are faster than using an actual optical drive and can be quickly mounted or dismounted with little effort.
Before you can use an ISO image to install the guest OS, though, you must first put it in a location that ESXi can access. Generally, this means uploading it directly into a datastore accessible to your ESXi hosts or into a feature introduced in vSphere 6.0, the Content Library.
Perform these steps to upload an ISO image into a datastore:
The vSphere Web Client uploads the file into the selected folder in that datastore.
You can find out how to perform a similar action with Content Libraries in Chapter 10. After the ISO image is uploaded to an available datastore or into a Content Library, you're ready to install a guest OS using that ISO image.
Once you have the installation media in place—by using the local CD/DVD-ROM drive on the computer where you are running the vSphere Web Client or by creating and uploading an ISO image into a datastore—you're ready to use that installation media to install a guest OS into the VM.
Perform the following steps to install a guest OS using an ISO file on a shared datastore:
Working within the VM console is like working at the console of a physical system. From here, you can do anything you need to do to the VM: you can access the VM's BIOS and modify settings, you can turn the power to the VM off (and back on again), and you can interact with the guest OS you are installing or have already installed into the VM. We'll describe most of these functions later in this chapter in the sections “Managing Virtual Machines” and “Modifying Virtual Machines,” but there is one thing that we want to point out now.
The vSphere Web Client must have a way to know whether the keystrokes and mouse clicks you're generating go to the VM or should be processed by the vSphere Web Client itself. To do this, it uses the concept of focus. When you click within a VM console, that VM has the focus: all keystrokes and mouse clicks are directed to that VM. Until you have VMware Tools installed (a process we'll describe in the next section), you usually have to manually tell the vSphere Web Client when you want to shift focus out of the VM. To do this, you use the vSphere Web Client's special keystroke: Ctrl+Alt. When you press Ctrl+Alt, the VM relinquishes control of the mouse and keyboard and returns it to the vSphere Web Client. Keep that in mind when you are trying to use your mouse and it won't travel beyond the confines of the VM console window; just press Ctrl+Alt, and the VM will release control. Some more modern guest OS installations will relinquish control when you move the mouse to the edge of the console window, but it is handy to know the keystroke in case this is not automatic.
Once you've installed the guest OS, you should then install and configure VMware Tools. We discuss VMware Tools installation and configuration in the next section.
Although VMware Tools is not installed by default, the package is an important part of a VM. VMware Tools offers several great benefits without any detriments. Recall from the beginning of this chapter that VMware vSphere offers certain virtualization-optimized (or paravirtualized) devices to VMs in order to improve performance. In many cases, these paravirtualized devices do not have device drivers present in a standard installation of a guest OS. The device drivers for these devices are provided by VMware Tools, which is just one more reason why VMware Tools is an essential part of every VM and guest OS installation.
In other words, installing VMware Tools should be standard practice and not an optional step in the deployment of a VM. The VMware Tools package provides the following benefits:
VMware Tools also helps streamline and automate the management of VM focus so you can move into and out of VM consoles easily and seamlessly without the Ctrl+Alt keyboard command.
The VMware Tools package is available for Windows, Linux, NetWare, Solaris, OS X, and FreeBSD; however, the installation methods vary because of the differences in the guest OSs. In all cases, the installation of VMware Tools starts when you select the option to install VMware Tools from the vSphere Web Client. Do you recall our discussion earlier about ISO images and how ESXi uses them to present CDs/DVDs to VMs? That's exactly the functionality being leveraged in this case. When you choose to install VMware Tools, vSphere mounts an ISO as a CD/DVD for the VM, and the guest OS will see a mounted CD-ROM that contains the installation files for VMware Tools.
As we mentioned previously, the exact process for installing VMware Tools will depend on the guest OS. Because Windows and Linux make up the largest portion of VMs deployed on VMware vSphere in most cases, those are the two examples we'll discuss. First, we'll walk you through installing VMware Tools into a Windows-based guest OS.
Perform these steps to install VMware Tools into Windows Server 2012 running as a guest OS in a VM (the steps for other versions of Windows are similar):
If the AutoPlay dialog box does not appear, open Windows Explorer and double-click the CD/DVD drive icon. The AutoPlay dialog box should then appear.
For most situations, you will choose the Typical radio button. The Complete installation option installs all available features, whereas the Custom installation option allows for the greatest level of feature customization.
During the installation, you may be prompted one or more times to confirm the installation of third-party device drivers; select Install for each of these prompts.
If the AutoRun dialog box appears again, simply close the dialog box and continue with the installation.
To install the enhanced VMware video driver and improve the graphical console performance on older Windows VMs, perform the following steps:

- Open the Run dialog, type devmgmt.msc, and click OK. This will launch the Device Manager console.
- In Device Manager, update the driver for the display adapter and browse to C:\Program Files\Common Files\VMware\Drivers\wddm_video. Then click Next.
After Windows restarts in the VM, you should notice improved performance when using the graphical console. Note that this procedure is no longer required in Windows Server 2012 and newer. The VMware SVGA 3D driver is automatically installed along with VMware Tools.
For older versions of Windows, such as Windows Server 2003, you can further improve the responsiveness of the VM console by configuring the hardware acceleration setting. It is, by default, set to None; setting it to Maximum provides a much smoother console session experience. The VMware Tools installation routine reminds you to set this value at the end of the installation, but if you choose not to set hardware acceleration at that time, it can easily be set later. We highly recommend that you optimize the graphical performance of the VM's console. (Note that Windows XP has this value set to Maximum by default.)
Perform the following steps to adjust the hardware acceleration in a VM running Windows Server 2003 (or Windows XP, in case the value has been changed from the default):
Now that the VMware Tools installation is complete and the VM is rebooted, the system tray displays the VMware Tools icon, the letters VM in a small gray box (Windows Taskbar settings might hide the icon). The icon in the system tray indicates that VMware Tools is installed and operational.
In previous versions of vSphere, double-clicking the VMware Tools icon in the system tray would bring up a set of configurable options. As of vSphere 5.1, that interface has been removed and replaced with the informational screen shown in Figure 9.22. Previously you could configure time synchronization, show or hide VMware Tools from the Taskbar, and select scripts to suspend, resume, shut down, or turn on a VM.
VMware now provides command-line-based tools that will allow you to configure these settings. You can access these by browsing to the installation directory of VMware Tools.
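For example, the command-line utility vmware-toolbox-cmd (VMwareToolboxCmd.exe in the Windows installation directory) can query and change the time-synchronization setting. The following is a minimal sketch that checks the time-sync status only if the utility is actually present, so it degrades gracefully on machines without VMware Tools:

```shell
#!/bin/sh
# Query VMware Tools time-sync state via vmware-toolbox-cmd if it is
# installed; otherwise report that fact instead of failing. The command
# name shown is the Linux one; on Windows, run VMwareToolboxCmd.exe
# from the VMware Tools installation directory instead.
timesync_status() {
  if command -v vmware-toolbox-cmd >/dev/null 2>&1; then
    vmware-toolbox-cmd timesync status
  else
    echo "vmware-toolbox-cmd not found (VMware Tools not installed?)"
  fi
}

timesync_status
```

The same utility's timesync enable and timesync disable subcommands toggle the setting, subject to the caveats about Kerberos discussed next.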
As with previous versions of VMware Tools, time synchronization between the guest OS and the host is disabled by default. You'll want to use caution when enabling time synchronization between the guest OS and the ESXi host because Windows domain members rely on Kerberos for authentication and Kerberos is sensitive to time differences between computers. A Windows-based guest OS that belongs to an Active Directory domain is already configured with a native time synchronization process against the domain controller in its domain that holds the PDC Emulator operations master role. If the time on the ESXi host is different from the time on the PDC Emulator operations master domain controller, the guest OS could end up moving outside the 5-minute window allowed by Kerberos. When the 5-minute window is exceeded, Kerberos will experience errors with authentication and replication.
You can take a few approaches to managing time synchronization in a virtual environment. The first approach involves not using VMware Tools time synchronization and relying instead on the W32Time service and a PDC Emulator with a Registry edit that configures synchronization with an external time server. Another approach involves disabling the native time synchronization across the Windows domain and then relying on the VMware Tools feature. A third approach is to synchronize the VMware ESXi hosts and the PDC Emulator operations master with the same external time server and then enable the VMware Tools option for synchronization. In this case, both the native W32Time service and VMware Tools should be adjusting the time to the same value.
VMware has a few Knowledge Base articles that contain the latest recommendations for timekeeping. For Windows-based guest OS installations, refer to http://kb.vmware.com/kb/1318
or refer to the older, but still relevant document “Timekeeping in VMware Virtual Machines” at the following location:
http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf
We've shown you how to install VMware Tools into a Windows-based guest operating system, so now we'd like to walk through the process for a Linux-based guest OS.
A number of versions (or distributions) of Linux are available and supported by VMware vSphere. While they are all called “Linux,” they do have subtle differences from one distribution to another that make it difficult to provide a single set of steps that would apply to all Linux distributions. In this section, we'll describe two different methods for installing VMware Tools. First, we'll show you a simple installation process, using Open VM Tools and a package manager. For this example, we will use Ubuntu 16.04 LTS. After that, we'll use SuSE Linux Enterprise Server (SLES) version 11 to show you how to install VMware Tools using an ISO mounted from the ESXi host. SuSE is a popular enterprise-focused distribution of Linux, and version 11 doesn't include Open VM Tools.
Open VM Tools (OVT) is nearly the same as the normal VMware Tools, but it's packaged and updated slightly differently. Like the normal VMware Tools that ships with ESXi and can be installed from an ISO, OVT is a set of services, modules, and drivers that allow you to more seamlessly manage your Linux-based VMs. OVT has one distinct difference: the method in which you install and update it. Because Open VM Tools is an open source project available to anyone, installation and updates can be customized for each Linux distribution. A number of Linux distributions include Open VM Tools:
To install Open VM Tools into a VM running Ubuntu 16.04 LTS, perform the following steps:
1. Open a terminal session on the Ubuntu VM. You'll need root privileges (using sudo <command>) to install OVT.
2. Update the package index:

   sudo apt-get update

3. Install the Open VM Tools package:

   sudo apt-get install open-vm-tools

Perform the following steps to install VMware Tools into a VM running the 64-bit version of SLES 11 as the guest OS:
1. In the guest OS, change to the directory where the VMware Tools CD is mounted:

   cd "/media/VMware Tools"

2. Extract the compressed tar file (the file with the .tar.gz filename extension) to a temporary directory, and then change to that temporary directory using the following commands:

   tar -zxf VMwareTools-x.y.z-xxxxxxx.tar.gz -C /tmp
   cd /tmp/vmware-tools-distrib

3. From the /tmp/vmware-tools-distrib directory, use the sudo command to run the vmware-install.pl Perl script with the following command:

   sudo ./vmware-install.pl

   Enter the current account's password when prompted.
4. After the installation completes, remove the temporary files:

   cd
   rm -rf /tmp/vmware-tools-distrib
The steps described here were performed on a VM running SLES 12 64-bit. Because of variations within different versions and distributions of Linux, the commands you may need to install VMware Tools within another distribution may not exactly match what we've listed here. However, these steps do provide a general guideline of what the procedure looks like.
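Collected into one place, the manual steps look roughly like the following sketch. It echoes the commands instead of executing them, so you can review them (or pipe the output to sh) before running anything; the tarball path passed in at the bottom is a placeholder you'd replace with the actual file name on the mounted VMware Tools CD.

```shell
#!/bin/sh
# Dry-run sketch of the VMware Tools install sequence described above:
# print the commands rather than executing them. The tarball name is
# an assumption -- substitute the real VMwareTools-*.tar.gz file from
# the mounted CD before piping this output to sh.
tools_install_commands() {
  tarball="$1"
  printf '%s\n' \
    "tar -zxf '$tarball' -C /tmp" \
    "cd /tmp/vmware-tools-distrib" \
    "sudo ./vmware-install.pl" \
    "cd /" \
    "rm -rf /tmp/vmware-tools-distrib"
}

tools_install_commands "/media/VMware Tools/VMwareTools-x.y.z-xxxxxxx.tar.gz"
```

Printing first and executing second is a deliberate choice here: the extraction and cleanup steps delete files under /tmp, so it pays to eyeball the exact paths before running them with root privileges.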
After VMware Tools is installed, the Summary tab of a VM object identifies the status of VMware Tools as well as other information such as operating system, CPU, memory, DNS (host) name, IP address, and current ESXi host. Figure 9.23 shows a screen shot of this information for the Windows Server 2016 VM into which we installed VMware Tools earlier.
If you are upgrading to vSphere 6.7 from a previous version of VMware vSphere, you will have outdated versions of VMware Tools running in your guest OSs. You'll want to upgrade these in order to get the latest drivers. In Chapter 4, “Maintaining VMware vSphere,” we discuss the use of vSphere Update Manager to assist in this process, but you can also do it manually.
For Windows-based guest OSs, the process of upgrading VMware Tools is as simple as right-clicking a VM and selecting Guest OS ⇒ Upgrade VMware Tools. Select the option labeled Automatic Tools Upgrade and click OK. vCenter Server will install the updated VMware Tools and automatically reboot the VM, if necessary.
For other guest OSs, upgrading VMware Tools typically means running through the install process again, or running through an update in the Linux package manager. For more information, refer to the instructions for installing VMware Tools on SLES and Ubuntu.
Creating VMs is just one aspect of managing VMs. In the next sections, we look at some additional VM management tasks.
In addition to creating VMs, vSphere administrators must perform a range of other tasks. Although most of these tasks are relatively easy to figure out, we include them here for completeness.
Creating VMs from scratch, as described earlier, is only one way of getting VMs into the environment. It's entirely possible that you, as a vSphere administrator, might receive pre-created VMs from another source. Suppose you receive the files that compose a VM—notably, the VMX and VMDK files—from another administrator and you need to put that VM to use in your environment. You've already seen how to use the vSphere Web Client–based file browser to upload files into a datastore, but what needs to happen once it's in the datastore? In this case, you need to register the VM. The process of registering the VM adds it to the vCenter Server (or ESXi host) inventory and allows you to then manage the VM.
Perform the following steps to add (or register) an existing VM into the inventory:
When the Register Virtual Machine Wizard is finished, the VM will be added to the vSphere Web Client inventory. From here, you're ready to manipulate the VM in whatever fashion you need, such as powering it on.
There are six different commands involved in changing the power state of a VM. Figure 9.25 shows these six commands on the context menu displayed when you right-click a VM and select Power.
By and large, these commands are self-explanatory, but there are a few subtle differences in some of them:
If you have a VM that you need to keep but that doesn't have to be listed in the VM inventory, you can remove the VM from the inventory. This keeps the VM files intact, and the VM can be re-added to the inventory (that is, registered) at any time later on using the procedure described earlier in this chapter, in the section “Adding or Registering Existing VMs.”
To remove a VM, right-click a powered-off VM and, from the context menu, select Remove From Inventory. Select Yes in the Confirm Remove dialog box, and the VM will be removed from the inventory. You can use the vSphere Web Client file browser to verify that the files for the VM are still intact in the same location on the datastore.
If you have a VM that you no longer need at all—meaning you don't need it listed in the inventory and you don't need the files maintained on the datastore—you can completely remove the VM. Be careful, though; this is not something that you can undo!
To delete a VM entirely, right-click a powered-off VM and select Delete from Disk from the context menu. The vSphere Web Client will prompt you for confirmation, reminding you that you are deleting the VM and its associated base disks (VMDK files). Click Yes to continue removing the files from both inventory and the datastore. Once the process is done, you can once again use the vSphere Web Client file browser to verify that the VM's files are gone.
Adding existing VMs, removing VMs from inventory, and deleting VMs are all relatively simple tasks. The task of modifying VMs, though, is significant enough to warrant its own section.
Just as physical machines require hardware upgrades or changes, a VM might require virtual hardware upgrades or changes to meet changing performance demands. Perhaps a new memory-intensive client-server application requires an increase in memory, or a new data-mining application requires a second processor or additional network adapters for bandwidth-heavy FTP traffic. In each of these cases, the VM requires a modification of the virtual hardware configured for the guest OS to use. Of course, this is only one task that an administrator charged with managing VMs could be responsible for completing. Other tasks might include leveraging vSphere's snapshot functionality to protect against a potential issue with the guest OS inside a VM. We describe both of these tasks in the following sections, starting with how to change the hardware of a VM.
In most cases, modifying a VM requires that the VM be powered off. There are exceptions to this rule, as shown in Figure 9.26. You can hot-add a USB controller, a SATA controller, an Ethernet adapter, a hard disk, or a SCSI device. Later in this chapter, you'll see that some guest OSs also support the addition (and subtraction) of virtual CPUs or RAM while they are powered on as well. Not all guest OS versions will see the new hardware configuration right away—you may need to reboot for the changes to take effect.
When you're adding new virtual hardware to a VM using the vSphere Web Client, the options are similar to those used while creating a VM. For example, to add a new virtual hard disk to an existing VM, you would use the New Device drop-down box at the bottom of the Virtual Machine Edit Settings dialog box. In Figure 9.26, you see that you can add a virtual hard disk to a VM while it is powered on. From there, the vSphere Web Client uses the same steps shown earlier in this chapter in Figure 9.11, Figure 9.12, and Figure 9.13. The only difference is that now you're adding a new virtual hard disk to an existing VM. As an example, we'll go through the steps to add an Ethernet adapter to a VM (the steps are the same regardless of whether the VM is actually running).
Perform these steps to add an Ethernet adapter to a VM:
Besides adding new virtual hardware, users can make other changes while a VM is powered on. For example, you can mount and unmount CD/DVD drives, ISO images, and floppy disk images while a VM is turned on. We described the process for mounting an ISO image as a virtual CD/DVD drive earlier in this chapter, in the section “Installing a Guest Operating System.” You can also assign and reassign adapters to virtual networks while a VM is running. All of these tasks are performed in the VM Properties dialog box, which you access by selecting Edit Settings from the context menu for a VM.
If you are running Windows Server 2008 or above, or any modern Linux distribution, you also gain the ability to add virtual CPUs or RAM to a VM while it is running. To use this functionality, you must first enable it. In a somewhat ironic twist, the VM for which you want to enable hot-add must be powered off.
To enable hot-add of virtual CPUs or RAM, perform these steps:
Once this setting has been configured, you can add RAM or virtual CPUs to the VM when it is powered on. Figure 9.28 shows a powered-on VM that has memory hot-add enabled. Figure 9.29 shows a powered-on VM that has CPU hot-plug enabled; you can change the number of virtual CPU sockets, but you can't change the number of cores per virtual CPU socket.
Once these features are enabled, you can use the same procedure to add hardware as you would when a VM is turned off. Keep in mind (and test for yourself) that even when the OS supports adding hardware on the fly, the applications running on it may not. You may see no additional benefit if an application sizes itself against the available resources when it first runs and doesn't re-evaluate them until it is stopped and restarted.
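One quick way to confirm that a Linux guest actually sees hot-added vCPUs is to count them before and after the change. A minimal sketch (Linux-specific, since it reads /proc/cpuinfo):

```shell
#!/bin/sh
# Count the logical CPUs the guest OS currently sees. Run once before
# and once after a vCPU hot-add; if the number doesn't change, the
# guest hasn't picked up the new hardware yet (some distributions
# require newly added vCPUs to be brought online manually).
count_cpus() {
  grep -c '^processor' /proc/cpuinfo
}

count_cpus
```

An analogous check for hot-added memory is to compare the MemTotal line of /proc/meminfo before and after the operation.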
Aside from the changes described so far, configuration changes to a VM can take place only when the VM is in a powered-off state. When a VM is powered off, all of the various configuration options are available to change: RAM, virtual CPUs, or adding or removing other hardware components such as CD/DVD drives or floppy drives.
As you can see, running your operating system in a VM offers advantages when it comes time to reconfigure hardware, even enabling such innovative features as CPU hot-plug. There are other advantages to using VMs too; one of these advantages is a vSphere feature called snapshots.
VM snapshots allow administrators to create point-in-time checkpoints of a VM. The snapshot captures the state of the VM at a specific point in time. VMware administrators can then revert to their pre-snapshot state in the event the changes made since the snapshot should be discarded. Or, if the changes should be preserved, the administrator can commit the changes and delete the snapshot.
This functionality can be used in a variety of ways. Suppose you'd like to install the latest vendor-supplied patch for the guest OS instance running in a VM but you want to be able to recover in case the patch installation runs amok. By taking a snapshot before installing the patch, you can revert to the snapshot in the event the patch installation doesn't go well. You've just created a safety net for yourself. Keep in mind that snapshots do not capture RDM virtual hard disks or in-guest mounted iSCSI or NFS file systems. Also remember that snapshots are made on a per-VM basis: if an application spans multiple tiers spread across multiple virtual machines, you may encounter application inconsistencies when reverting snapshots.
Earlier versions of vSphere did not allow Storage vMotions to occur when a snapshot was present, but this limitation was removed in vSphere 5.
Perform the following steps to create a snapshot of a VM:
As shown in Figure 9.30, there are two options when taking snapshots:
Snapshotting the VM's memory captures the memory state in a file with the .vmsn filename extension; quiescing the guest file system uses VMware Tools to bring the file system in the guest OS to a consistent state before the snapshot is taken. When a snapshot is taken, depending on the options selected, some additional files are created on the datastore, as shown in Figure 9.31.
It is a common misconception among administrators that snapshots are full copies of VM files. As you can clearly see in Figure 9.31, a snapshot is not a full copy of a VM. Rather than making a full copy, VMware's snapshot technology allocates only enough space to store the changes made since the snapshot was taken, consuming minimal space while still allowing you to revert to the earlier state.
To demonstrate snapshot technology and illustrate its behavior (for practice only), we performed the following steps:
1. Recorded the size of WIN2K16-01.vmdk and the guest's NTFS volume usage before taking any snapshots.
2. Took the first snapshot, then copied data into the guest OS, recording the sizes both before and after the copy.
3. Took a second snapshot and copied more data into the guest OS, again recording the sizes before and after the copy.

Review Table 9.1 for the results we recorded after each step. Note that these results were recorded as part of our example and may differ from your results if you perform a similar test.
TABLE 9.1: Snapshot demonstration results
|  | VMDK SIZE | NTFS SIZE | NTFS FREE SPACE |
| --- | --- | --- | --- |
| Start (pre-first snapshot) |  |  |  |
| WIN2K16-01.vmdk (C:) | 8.6 GB | 40 GB | 31 GB |
| First snapshot (pre-data copy) |  |  |  |
| WIN2K16-01.vmdk (C:) | 10.3 GB | 40 GB | 29.7 GB |
| WIN2K16-01-000001.vmdk | 17.4 MB |  |  |
| First snapshot (post-data copy) |  |  |  |
| WIN2K16-01.vmdk (C:) | 10.3 GB | 40 GB | 26.5 GB |
| WIN2K16-01-000001.vmdk | 3.1 GB |  |  |
| Second snapshot (pre-data copy) |  |  |  |
| WIN2K16-01.vmdk (C:) | 10.3 GB | 40 GB | 26.5 GB |
| WIN2K16-01-000001.vmdk | 3.1 GB |  |  |
| WIN2K16-01-000002.vmdk | 17.4 MB |  |  |
| Second snapshot (post-data copy) |  |  |  |
| WIN2K16-01.vmdk (C:) | 10.3 GB | 40 GB | 23.3 GB |
| WIN2K16-01-000001.vmdk | 3.1 GB |  |  |
| WIN2K16-01-000002.vmdk | 3.1 GB |  |  |
As you can see in Table 9.1, the underlying guest OS is unaware of the presence of the snapshot and the extra VMDK files that are created. ESXi, however, knows to write changes to the VM's virtual disk to the snapshot VMDK, properly known as a delta disk (or a differencing disk). These delta disks start small and over time grow to accommodate the changes stored within them.
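The grow-on-demand behavior is easy to see with an ordinary sparse file, which behaves loosely like a delta disk: its apparent size is fixed up front, but its actual disk usage grows only as data is written into it. This is just an illustration of the concept, not VMware-specific; it assumes a Linux system with GNU coreutils.

```shell
#!/bin/sh
# Loose analogy to a delta disk: a sparse file has a fixed apparent
# size but consumes blocks only as data is written into it.
demo_sparse_growth() {
  f=$(mktemp)
  truncate -s 100M "$f"          # apparent size 100 MiB, usage near 0
  before=$(du -k "$f" | cut -f1)
  # write 8 MiB of random data into the file without truncating it
  dd if=/dev/urandom of="$f" bs=1M count=8 conv=notrunc status=none
  after=$(du -k "$f" | cut -f1)
  echo "usage before=${before}K after=${after}K"
  rm -f "$f"
}

demo_sparse_growth
```

Running ls -l versus du on such a file shows the same distinction you see in Table 9.1: the reported (apparent) size and the space actually consumed are two different numbers.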
Despite the storage efficiency that snapshots attempt to maintain, over time they can eat up a considerable amount of disk space. Therefore, use them as needed, but be sure to remove older snapshots on a regular basis. Also be aware that there are performance ramifications to using snapshots. Because disk space must be allocated to the delta disks on demand, ESXi hosts must update the metadata files (files with the .sf filename extension) every time a differencing disk grows. To update the metadata files, LUNs must be locked, and this might adversely affect the performance of other VMs and hosts using the same LUN. It is generally recommended to reserve around 20% of capacity on your datastores for snapshots, VM swap files, and other metadata.
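The suggested headroom is simple arithmetic. The sketch below computes the reserve for a given datastore size; the 500 GB figure in the usage line is purely illustrative, not from the text.

```shell
#!/bin/sh
# Compute roughly 20% of a datastore's capacity as the headroom
# suggested above for snapshots, VM swap files, and other metadata.
reserve_gb() {
  capacity_gb="$1"
  # POSIX shell integer arithmetic: 20% of capacity, rounded down
  echo $(( capacity_gb * 20 / 100 ))
}

reserve_gb 500    # hypothetical 500 GB datastore -> 100
```

In other words, on a hypothetical 500 GB datastore you'd plan to keep about 100 GB free rather than filling it with provisioned VMDKs.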
To view or delete a snapshot or revert to an earlier snapshot, you use the Snapshot Manager.
Follow these steps to access the Snapshot Manager:
To further illustrate the nature of snapshots, see Figure 9.33 and Figure 9.34. Figure 9.33 shows the file system of a VM running Windows Server 2016 after data has been written into two new folders named temp1 and temp2. Figure 9.34 shows the same VM but after reverting to a snapshot taken before that data was written. As you can see, it's as if the new folders never even existed. Test this out for yourself (using a test VM, of course) to see what kind of damage you can do, and undo.
As you can see, snapshots are a great way to protect yourself from unwanted changes to the data stored in a VM. Snapshots aren't backups and should not be used in place of backups. However, they can protect you from misbehaving application installations or other processes that might result in data loss or corruption.
There are additional VM management tasks that we'll discuss in other chapters. For example, you might want to migrate a VM from one ESXi host to another ESXi host using vMotion; this is covered in Chapter 12. Changing a VM's resource allocation settings is covered in Chapter 11.
In the next chapter, we'll move from creating and managing VMs to streamlining the VM provisioning process with templates, OVF templates, and vApps. Although VMware makes the VM provisioning process pretty easy, we'll show you how using templates can simplify server provisioning even more while bringing some consistency to your VM and guest OS deployments.