Designing Virtual Machines

How often have engineers purchased new physical servers straight from a catalog or website without customizing them? Very rarely; each application has its own requirements and recommended configurations, and every organization has its own way of specifying and managing hardware. The same holds true for virtual machines. In this section, you are going to look at how to design and configure a virtual machine to suit the needs of the guest operating system, the application that will be installed, and your management system.

Virtual Machine Maximums

One of the goals of server virtualization is to replace all physical servers with virtual machines. Machine virtualization is not a new technology, but many computer rooms and datacenters have succeeded in converting only a relatively small percentage of their servers into virtual machines. This can happen for a number of reasons:

Budget, Politics, or Project Scope Some virtualization projects run out of money and stop after converting some physical servers into virtual machines. Unfortunately, some project failures are less logical: the projects simply stop after reaching a certain point, either because of organizational politics or because the bureaucracy says that the project has been completed, even though 60 percent or more of the physical servers may still be left operational.
Lack of Compatibility Not all servers are virtualization candidates; some are ruled out because of their lack of compatibility with virtualization. For example, a server that is deeply integrated into physical communications might have physical interfaces that cannot be virtualized. Some servers might be running operating systems that are not supported by the hypervisor; for example, Hyper-V does not support Unix. And some applications do not support Hyper-V; for example, Oracle supports only its software on Oracle’s own virtualization products. Even VMware fans have to consider Microsoft SQL Server a better alternative than Oracle!
Scalability A virtual machine can support only so many virtual CPUs (logical processors). This limits the number of cores that a virtual machine has access to, therefore limiting the compute power and the number of simultaneously running threads (check the System ⇒ Processor Queue Length metric in Performance Monitor in the guest OS).
Windows Server 2008 R2 Hyper-V virtual machines were limited to four virtual CPUs—that’s just four logical processors. This placed severe limitations on what applications could be virtualized in larger datacenters. Windows Server 2008 R2 Hyper-V virtual machines were also limited to 64 GB RAM. Although this wasn’t as crippling as the virtual CPU limit, it did prevent larger workloads, such as online transaction processing (OLTP) or data warehouses, from being virtualized.

One of the goals of Windows Server 2012 was to enable any application that required more processor or memory to be installed in a Hyper-V virtual machine. As a result, Hyper-V now supports the following:

Up to 64 Virtual CPUs per Virtual Machine This assumes that the host has 64 or more logical processors and that the guest operating system supports this configuration.
Up to 1 TB RAM per Virtual Machine This is possible if the host has 1 TB to allocate to the virtual machine and the guest operating system supports this much memory.

This increased level of support should ensure that processor and memory scalability are almost never a concern when considering the installation of applications into virtual machines, provided the host hardware is specified appropriately.
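You can configure these maximums with PowerShell. The following is a minimal sketch, assuming a Hyper-V host with sufficient logical processors and RAM, and a stopped virtual machine named VM01 (a hypothetical name):

```powershell
# Assumes a Hyper-V host and a stopped virtual machine named VM01 (hypothetical).
# Grant the Windows Server 2012 Hyper-V maximums to the virtual machine.
Set-VMProcessor -VMName VM01 -Count 64          # up to 64 virtual CPUs
Set-VMMemory    -VMName VM01 -StartupBytes 1TB  # up to 1 TB RAM
```

Remember that the guest operating system must also support this configuration for the virtual machine to make use of it.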


Giant Virtual Machines
An organization will not normally plan to deploy lots of these gigantic virtual machines on a single host. Each logical processor may in fact be dedicated to a virtual CPU. A Windows Server 2012 or Hyper-V Server 2012 host can have 320 logical processors and up to 4 TB of RAM. Therefore, an organization might deploy two or four such virtual machines on a very large host. The benefits of doing this are the same as always with virtualization: flexibility, agility, reduced purchase and ownership costs, ease of backup, simpler DR replication, and so on.

Other virtual machine maximums include the following:

IDE Controllers A virtual machine must boot from a disk attached to IDE Controller 0. This controller can have two devices. There is also an IDE Controller 1, which can also have two devices; the virtual DVD drive is one of these by default.

Why Booting from IDE Doesn’t Cause a Problem
Those who are unfamiliar with Hyper-V might be shocked to see that Hyper-V virtual machines boot from a virtual IDE controller. This has nothing to do with hardware and does not require a physical IDE controller. The virtual device does not have a performance impact, as Ben Armstrong (Hyper-V Senior Program Manager Lead with Microsoft) explains on his blog at http://blogs.msdn.com/b/virtual_pc_guy/archive/2009/12/01/why-hyper-v-cannot-boot-off-of-scsi-disks-and-why-you-should-not-care.aspx.

SCSI Controllers A SCSI controller can be attached to 64 pass-through disks (raw LUNs that are accessed directly by a virtual machine) or virtual hard disks. A virtual machine can have up to four SCSI controllers. That means a virtual machine can be attached to 256 virtual SCSI disks. A VHDX virtual hard disk can be up to 64 TB. And that means a virtual machine can have up to 16,384 TB of virtual storage.
Note that the SCSI controller requires that the guest operating system of the virtual machine has the Hyper-V integration components installed for the device driver to work.
There is a positive side effect of adding more virtual processors and SCSI controllers that is new in Windows Server 2012. Historically, a virtual machine had one storage channel. Now, a virtual machine gets one storage channel per SCSI controller for every 16 virtual processors. Each device now has a queue depth of 256, instead of the previous limit of 256 per virtual machine. The handling of storage I/O is now distributed across all virtual processors instead of being bound to just one.
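As a sketch of how these maximums are put to use, the following hypothetical PowerShell (the VM name and VHDX path are assumptions) adds a second SCSI controller to a virtual machine and attaches a 64 TB dynamic VHDX to it:

```powershell
# Assumes a Hyper-V host; VM01 and D:\VMs\Data01.vhdx are hypothetical.
# Add a second SCSI controller (controller 0 is assumed to exist already).
Add-VMScsiController -VMName VM01

# Create a 64 TB dynamic VHDX, the maximum size of the VHDX format.
New-VHD -Path 'D:\VMs\Data01.vhdx' -SizeBytes 64TB -Dynamic

# Attach the new virtual hard disk to the new controller (controller number 1).
Add-VMHardDiskDrive -VMName VM01 -ControllerType SCSI -ControllerNumber 1 `
    -Path 'D:\VMs\Data01.vhdx'
```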
Network Adapters You can attach up to eight network adapters, also known as synthetic network adapters, in a virtual machine. A synthetic network adapter, shown as Network Adapter in virtual machine settings, is the best type of virtual NIC to use because it requires fewer host processor cycles and context switches to transmit packets through a virtual switch.
The synthetic network adapter requires the Hyper-V integration components to be installed in the guest operating system of the virtual machine.
There is one network adapter in a virtual machine when you create it.
Legacy Network Adapters You can have up to four legacy network adapters in a virtual machine. The legacy network adapter is a less efficient virtual NIC that does not require the Hyper-V integration components to be installed in the guest OS. Therefore, this type of virtual network adapter could be used with older operating systems such as Windows NT 4 or Windows 2000 Server. Legacy network adapters cannot take advantage of any hardware offloads to accelerate virtual machine communications.
The synthetic network adapter does not support PXE booting, which is usually used for OS deployment solutions such as Windows Deployment Services (WDS) or System Center Configuration Manager. It is rare that virtual machines will be deployed using OS deployment mechanisms such as these in production environments. However, OS deployment experts love to use virtualization for creating and testing OS images. In that case, you can use a legacy network adapter in the virtual machine because it does support PXE booting.
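For that image-building scenario, a legacy network adapter can be added with PowerShell. This is a sketch; the VM and virtual switch names are hypothetical:

```powershell
# Assumes a lab virtual machine named ImageBuild01 and a virtual switch
# named External1 (both hypothetical). The legacy adapter supports PXE
# booting; the synthetic adapter does not.
Add-VMNetworkAdapter -VMName ImageBuild01 -IsLegacy $true -SwitchName External1
```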

Virtual Machine BIOS
If you edit the settings of a Hyper-V virtual machine, you will see the following boot order:
1. CD
2. IDE
3. Legacy network adapter
4. Floppy
You can change the order of these devices by selecting one and clicking Move Up or Move Down. There is also a Num Lock check box to enable Num Lock in the virtual machine.
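The same boot order can be set with PowerShell. A minimal sketch, assuming a hypothetical virtual machine named VM01 that should attempt a PXE boot first:

```powershell
# Assumes VM01 (hypothetical). Move the legacy network adapter to the top
# of the boot order so the virtual machine attempts a PXE boot first.
Set-VMBios -VMName VM01 -StartupOrder @('LegacyNetworkAdapter','CD','IDE','Floppy')
```

Set-VMBios also offers an -EnableNumLock parameter that corresponds to the Num Lock check box.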

Fibre Channel Adapters You can have up to four Virtual Fibre Channel adapters in a virtual machine. You can learn more about this topic in Chapter 9, “Virtual SAN Storage and Guest Clustering.”

The maximums of a virtual machine should not be considered as a goal. Microsoft recommends that you always allocate only enough resources to a virtual machine for its guest operating system and application(s) to be able to perform as required. For example, granting a virtual machine 32 virtual CPUs when it requires only 4 will waste physical processor capacity on the host. When that virtual machine is getting a quantum (a slice of time on the host processors), it will occupy 32 logical processors. That will waste the 28 occupied but unused logical processors that could have been used by other virtual machines.

The advice for sizing a virtual machine is the same as it would be for a physical server: grant only those resources that are required by the workload.

Note that you cannot hot-add resources to a running virtual machine—with two exceptions:

  • You can add a disk to a SCSI controller while the virtual machine is running.
  • You can increase Maximum or decrease Minimum RAM in a Dynamic Memory–enabled virtual machine’s settings while it is running.

You also can change the virtual switch connection of a virtual machine’s network cards. This is not quite hot, because there will be a brief moment of disconnection, and you might need to change the IP address(es) in the guest OS.
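Both of these running-virtual-machine changes can be made with PowerShell. This sketch assumes a hypothetical running virtual machine named VM01, VHDX path, and virtual switch name:

```powershell
# Assumes a running VM named VM01 with an existing VHDX and a virtual
# switch named External2 (all hypothetical).
# Hot-add a virtual hard disk to the SCSI controller while the VM runs:
Add-VMHardDiskDrive -VMName VM01 -ControllerType SCSI -Path 'D:\VMs\Extra01.vhdx'

# Reconnect the network adapter to a different virtual switch
# (expect a brief moment of disconnection in the guest OS):
Connect-VMNetworkAdapter -VMName VM01 -SwitchName External2
```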

Auto-Start and Auto-Stop Actions

There are various ways to configure virtual machines to respond when their host is shut down or starts up.

Automatic Start Actions

There are three possible start actions you can set for a virtual machine when the host that it is placed on boots up. You can find these options under Automatic Start Action in the settings of the virtual machine:

Nothing The virtual machine will not start up.
Automatically Start If It Was Running When The Service Stopped The virtual machine will be automatically started if it was running when the host stopped. This is the default option, and generally the one that is most used.
Always Start This Virtual Machine Automatically The host will start the virtual machine whether it was running or not prior to the host stopping.

Both of the automatic start actions allow you to define a Startup Delay. This value (in seconds) dictates how long the virtual machine will wait after the Management OS is running to start. The purpose of this Startup Delay is to prevent all virtual machines from starting at the same time and creating processor/storage/memory contention.

You can use PowerShell to configure this setting. The following example configures all virtual machines with a name starting with VM0 to start if they were running when the host stopped and delays the virtual machines’ start up by 5 minutes:

Set-VM VM0* -AutomaticStartAction StartIfRunning -AutomaticStartDelay 300

Automatic Stop Actions

There are three ways to deal with virtual machines if the host they are running on is shut down. You can find these settings under Automatic Stop Action in a virtual machine’s settings:

Save The Virtual Machine This default setting configures all virtual machines to save their current state to a VSV file that is stored in the Virtual Machines subfolder where the virtual machine is stored. After this, the virtual machine stops running. When you start up the virtual machine, it reads the saved state and then continues to execute as if it had never stopped. A placeholder BIN file that is slightly larger than the memory size of the virtual machine is stored in this subfolder while the virtual machine is running. This reserves enough disk space to write the save state. Hyper-V will maintain this BIN file only if you choose this option.
Turn Off The Virtual Machine This setting causes the virtual machine to be powered off when the host shuts down. This is like hitting the power button on a computer and is normally going to be the least used setting. It can be useful for guest OSs that refuse to shut down (and therefore prevent the host from shutting down), or you might use it if you need to quickly shut down a host without waiting for virtual machines.
Shut Down The Guest Operating System This option cleanly shuts down the virtual machine via the guest OS integration components (required) before shutting down the host. You might choose this option if you do not want to save virtual machines to a saved state or if you do not want BIN files to consume disk space.

You can configure the Automatic Stop Actions via PowerShell. This instruction configures all virtual machines on a host to shut down when the host shuts down:

Set-VM * -AutomaticStopAction ShutDown

Dynamic Memory

Service Pack 1 for Windows Server 2008 R2 added the Dynamic Memory feature to Hyper-V to optimize how host memory was allocated to virtual machines by the Management OS. Dynamic Memory has been improved in Windows Server 2012 Hyper-V.

Introducing Dynamic Memory

Several host resources can limit the number of virtual machines that can be placed on, or run on, a host or cluster of hosts:

Processor A host has only so much compute power, and each running virtual machine consumes some of this. The amount varies, depending on the demands of the guest OS and installed applications.
Processors have scaled up immensely in the past few years, and the host’s CPU is rarely the bottleneck for virtual machine-to-host density.
Storage Storage space is finite and will be consumed by a virtual machine. The physical disks have a limit on storage input/output operations per second (IOPS), and the connection to the disks will have limited bandwidth. The more demanding the virtual machines, the more IOPS and bandwidth will be used.
Careful consideration must be given when analyzing the requirements of virtual machines and planning host storage. Some applications can be very demanding of IOPS and storage connectivity, such as virtual desktop infrastructure (VDI) and OLTP. However, the emergence of SSD, caching technology, and faster networking mitigate this limitation.
Network Virtual machines connect to the physical network via one or more network connections on the host. It is rare, but possible, for virtual machines to require more bandwidth than is available on a host. Windows Server 2012 includes NIC teaming, so you can aggregate bandwidth by adding more NICs to a single logical host connection. (See Chapter 4 for more information.)
Memory Virtual machines consume RAM from a host. The most widely encountered bottleneck in the growth of virtual machine-to-host density is not processor, storage, or networking; it is the finite amount of RAM in a host that each virtual machine is consuming a piece of.
In the recent past, we have seen hardware manufacturers release servers on which every spare square centimeter seems to have a DIMM slot. In addition, increasingly large ECC DIMMs are hitting the market (although one very large DIMM costs more than several smaller ones that provide the same amount of memory). Over three years, it makes economic sense to put as much RAM into a single host as possible. However, practical limits (such as balancing the risk of host loss, or convincing a financial controller to pay more now for bigger hosts to reduce the later cost of ownership) can force you into purchasing a greater number of less densely populated hosts, which will cost more to own and operate over the same timeframe.

Without Dynamic Memory enabled in a virtual machine, the virtual machine must be given a large amount of RAM. For example, a SQL server might need 8 GB of Startup Memory. The virtual machine will consume all of that 8 GB, even if it is not required by the guest OS and application. That would be rather wasteful.

Microsoft introduced Dynamic Memory to accomplish several things:

  • It optimizes the allocation of RAM to virtual machines. Dynamic Memory gives enabled virtual machines a startup amount of memory, allowing them to expand up to a maximum amount of memory when required, and retrieves the memory back from the virtual machine when it is no longer required.
  • Dynamic Memory simplifies memory sizing. Administrators often added up the sum requirements of an application and OS, and installed RAM into a server, only to find it barely being used. Removing the RAM was difficult (because of downtime, ownership issues, and the risk that the RAM might be rarely required), so the capacity was wasted. The same holds true for virtual machines. Because Dynamic Memory allocates only what the virtual machine requires, administrators know that RAM will not be wasted on inaccurately sized virtual machines.
  • Self-service and cloud computing are enabled by Dynamic Memory. Customers, as compared to administrators, are less skilled at sizing their virtual machines. Customers can start small and grow big, taking advantage of the elastic trait of a cloud. If Resource Metering is enabled (Chapter 5, “Cloud Computing”), customers will be charged for only what they use.

How Dynamic Memory Works

To understand Dynamic Memory, you must first understand the settings that are used to enable and configure it, per virtual machine. You can configure Dynamic Memory for a virtual machine in Hyper-V Manager by selecting the virtual machine, clicking Settings in the Actions pane, and clicking Memory in the left navigation pane of the settings window. The settings, shown in Figure 3-4, are as follows:

Startup RAM You already met this setting when you were creating a virtual machine. This is the amount of memory that is allocated to a virtual machine when it starts up, whether you enable Dynamic Memory or not.
If you do not want to enable Dynamic Memory, specify (in megabytes) how much memory the virtual machine will require. All of this memory will be allocated from the host’s pool of RAM to the virtual machine when the virtual machine starts up. The virtual machine will fail to start if this memory cannot be completely assigned to the virtual machine.
If you do want to enable Dynamic Memory, set this to the amount of RAM that will successfully start the guest operating system (required) and start its services/applications at a basic level (optional, but often recommended).
Enable Dynamic Memory You will enable Dynamic Memory for this virtual machine by selecting this check box. This can be done only when a virtual machine is turned off, so it’s best done at the time of virtual machine creation or during a maintenance window. Selecting this check box will enable the remaining memory settings to be configured.

Figure 3-4 Configuring the memory of a virtual machine

c03f004.tif
Maximum RAM The Maximum RAM setting controls the amount of memory that the virtual machine can consume. The maximum setting is 1,048,576 MB or 1 TB, which is the maximum amount of RAM that can be allocated to a virtual machine. Note that a virtual machine can never be allocated more RAM than is available on a host.
You can increase the value of Maximum RAM while a virtual machine is running. This means that a service will not require downtime to give it more memory.
Memory Buffer This is an additional amount of memory that is allocated to a virtual machine. The default value of 20 percent is usually changed only when required. If a virtual machine requires 1,000 MB of RAM, for example, Dynamic Memory will allocate 1,200 MB of RAM to the virtual machine. The buffer is used in two ways. It is available as spare memory to the guest OS should it have an instant need for more memory that Dynamic Memory cannot respond to quickly enough. (Note that Dynamic Memory is quick, but a host might have a shortage of unallocated RAM.) The buffer is usually not wasted by a guest OS; Windows will use the additional memory as a file cache. Note that SQL Server will never use this RAM for file caching, so we generally set the buffer of SQL Server virtual machines to its lowest value of 5 percent. The maximum value is 2000 percent.
Memory Weight Dynamic Memory uses the Memory Weight (also known as Priority) value to determine how to divide up host memory when there is contention—that is, when several virtual machines are asking for all of the remaining host RAM. Memory weight is a sliding value from low (0) to high (100). A virtual machine with a higher Memory Weight is capable of making a better case for a share of the remaining memory on a contended host than a virtual machine with a lower Memory Weight. This value is usually left in the middle (50) and changed only for virtual machines with higher or lower service-level agreements.
Minimum RAM Microsoft did a considerable amount of engineering (referred to as MinWin) in Windows 8 and Windows Server 2012 to reduce the overall footprint of these operating systems. Although they still require 512 MB to boot up (just like their predecessors), the new operating systems can consume much less memory when they are left idle. This is particularly true in VDI and hosting environments, where many virtual machines can be left unused for months. Startup RAM would have to be 512 MB (minimum) to start the virtual machines, but Minimum RAM allows us to define an amount of memory that is lower (as low as 8 MB) than the Startup RAM.
This new Dynamic Memory setting allows idle virtual machines to use less memory than they booted up with. The unused memory is returned to the host and can be used by more-active virtual machines. This could also be combined with System Center Virtual Machine Manager Power Optimization to squeeze virtual machines onto fewer hosts during idle periods so that unused hosts can be powered off (and powered up again when required).
You can reduce Minimum RAM while a virtual machine is running. This means that you can better optimize a workload while monitoring the actual RAM usage as the virtual machine is idle.

Second Level Address Translation
Microsoft added support for Second Level Address Translation (SLAT) in Windows Server 2008 R2 Hyper-V. SLAT takes advantage of dedicated functions in Intel EPT–capable and AMD NPT/RVI processors to offload the mapping of virtual machine memory to host physical memory. SLAT can boost the performance of memory-intensive workloads such as databases (Exchange or SQL Server) and Remote Desktop Services (RDS) by around 20 percent.
You do not need to configure anything to use SLAT; you just need a capable processor. SLAT-capable processors are not required for Windows Server 2012 Hyper-V, but they are recommended. Server processors have been coming with SLAT support for several years.
Windows 8 Client Hyper-V does require a SLAT-capable processor. This is for performance reasons. Desktop and laptop processors, such as Intel Core (i3, i5, and i7), have come with this support only in recent years.

You can also configure these settings in a virtual machine by using Set-VMMemory. PowerShell uses Priority instead of Memory Weight. Note that you can use MB or GB instead of entering the true byte value of the memory setting.

Set-VMMemory VM01 -StartupBytes 1GB -DynamicMemoryEnabled $True -MinimumBytes `
256MB -MaximumBytes 16GB -Buffer 5 -Priority 100

Now it’s time to see how Dynamic Memory puts these settings to use. Figure 3-5 shows a sequence of events during which memory is assigned to and removed from a virtual machine. The VM worker process (VMWP.EXE in the Management OS of the host) of each virtual machine with Dynamic Memory enabled is responsible for managing Dynamic Memory for that machine. There is a connection between the worker process and the Dynamic Memory Virtual Service Client (DMVSC) through the VMBus. (DMVSC is one of the Hyper-V integration components installed in the guest OS of the virtual machine.)

Figure 3-5 Dynamic Memory scenarios

c03f005.eps

A virtual machine must be allocated with its Startup RAM for it to be able to boot up. That amount of memory is removed from the host and assigned to the virtual machine. There is a tiny, variable overhead for managing this memory (a few megabytes), whether Dynamic Memory is enabled or not.

Say that the Startup RAM setting is 1 GB, and the Maximum RAM setting is 16 GB. If you logged into the guest OS of the virtual machine, you would see that its memory (via Task Manager or Performance Monitor) is actually 1 GB and not the 16 GB that you might have expected. This is because the virtual machine has booted up with 1 GB, and the guest OS has not yet been given 16 GB of RAM by Dynamic Memory.

The memory pressure in the virtual machine might increase. This will be detected by the Management OS. Memory can be allocated to the virtual machine as long as there is unallocated memory on the host. A rapid and specially developed Plug and Play process is used to inject memory into the virtual machine. Remember that the Memory Buffer will be allocated as a percentage of the assigned RAM.

In this example, the virtual machine might have gone from Startup RAM of 1 GB and expanded up to 3 GB. Now if you looked at Task Manager or Performance Monitor in the guest OS, you would see that the memory size was 3 GB.


Hyper-V will do its best to fairly allocate memory to virtual machines if there is not enough RAM left for all of their demands. An algorithm will take the Memory Weight (Priority) and the memory pressure of each virtual machine to divide up the remaining memory.
Sometimes a high-priority virtual machine with low memory pressure might get less memory than a low-priority virtual machine with high memory pressure, or vice versa. As memory is allocated to a virtual machine, its pressure changes, and this affects how further memory is allocated by the algorithm.
System Center can preempt a memory contention issue by load-balancing virtual machines across hosts with more available resources, using Performance and Resource Optimization (PRO) or Dynamic Optimization.

If the pressure in the virtual machine subsides and memory becomes idle, that memory will be removed by the DMVSC and returned to the host. This happens in one of two ways:

  • Other virtual machines have high memory pressure, and memory is contended on the host. Hyper-V will immediately remove idle memory from virtual machines.
  • There is no contention. Hyper-V will leave the memory in the virtual machine for several minutes, just in case it is required again. This will save some CPU cycles. The memory will eventually be removed if it is still idle.

You cannot just remove memory from a running machine, physical or virtual. Dynamic Memory, like other virtualization technologies, uses a process called ballooning, as noted earlier in this chapter. The idle memory is removed from the virtual machine and returned to the unallocated pool of RAM on the host. A balloon is put in place of the removed memory in the virtual machine. This gives the illusion that the memory is still there, but it is not available to the guest OS.

Let’s return to our example of a virtual machine that has Startup RAM of 1 GB and has increased to 3 GB. Now it has ballooned down to 1.5 GB of RAM. If you check the memory in the guest OS of the virtual machine, you will see that it is still 3 GB. Remember that you can never remove memory from a running machine; the balloon has fooled the guest OS into believing that it still has its high-water mark of RAM. The memory amount can increase, but it will not go down while the guest OS is running. The value will reset when the virtual machine’s guest OS starts up again.


Monitoring Dynamic Memory
These are the only accurate ways to track memory allocation on a Hyper-V host without using Dynamic Memory–aware systems management solutions:
Hyper-V Manager The Assigned Memory column for the virtual machines shows the amount of memory allocated to each virtual machine including the Memory Buffer.
Management OS Performance Monitor The Hyper-V Dynamic Memory counters (Balancer, Integration Service, and VM) give you lots of information about Dynamic Memory on a host, such as what memory has been assigned, how much is guest visible, and what is the current pressure.
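Both views can also be reached from PowerShell in the Management OS. A sketch, using cmdlets and counter names from the Hyper-V role:

```powershell
# Hyper-V Manager's Assigned Memory column corresponds to per-VM properties:
Get-VM | Select-Object Name, MemoryAssigned, MemoryDemand, MemoryStatus

# The Dynamic Memory performance counters can be sampled directly,
# for example the current pressure of every virtual machine:
Get-Counter '\Hyper-V Dynamic Memory VM(*)\Current Pressure'
```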

If the virtual machine becomes busy again and pressure increases, Hyper-V will detect this. Memory will be allocated to the virtual machine and deflate the balloon a little at a time.

If a virtual machine becomes idle, the pressure for memory will be very low. This will enable the DMVSC to return memory to the host. If the Minimum RAM value is lower than the Startup RAM value, Dynamic Memory will continue to remove idle memory and reduce the virtual machine to a value lower than what it booted up with. This will minimize the resource consumption of the idle virtual machine and free up memory for more-active virtual machines.

How does this affect our example virtual machine?

  • It booted up with 1 GB.
  • It increased to 3 GB.
  • It ballooned down to 1.5 GB.

Now the virtual machine might become idle and balloon further down, as far as its Minimum RAM value of 256 MB. Logging into the guest OS and checking the memory will still show us 3 GB (the previous high-water mark). Hyper-V Manager might show the virtual machine using 308 MB RAM (256 MB plus the Memory Buffer of 20 percent). At this point, we have saved 921 MB (1,229 MB, the Startup RAM plus Memory Buffer, minus 308 MB), which can be made available to other virtual machines.

A host can never assign more memory than it has. That means that the total sum of memory being used by virtual machines cannot exceed the amount of memory that is on the host—don’t forget that the Management OS will use a small amount of memory too. This means that we do not overcommit memory with Dynamic Memory.

In Hyper-V, there is no shared paging or memory overcommitment as is found in other virtualization platforms. With shared paging, a hypervisor single-instances pages of memory: if two virtual machines have a common page of memory, it is stored only once in the host’s physical RAM. This option is usually one of the first that you are told to turn off by other vendors when you open a support call. Shared paging at the hypervisor layer might also cause problems if a guest operating system performs the same process internally to reduce its own memory footprint.

In memory overcommitment, a hypervisor lies to a virtual machine about the RAM that it has been assigned. With such a hypervisor, a virtual machine might be assigned 1 GB RAM at bootup, but the guest OS believes it has 16 GB RAM to use. If the guest OS tries to use it and the hypervisor does not have the physical RAM to allocate, then the hypervisor has to lie. This lie is accomplished by using second-level paging to simulate RAM; a host paging file blindly pages RAM for the virtual machine with no visibility over page usage or priority. This will hugely reduce memory performance, and requires additional (very fast) disk space on the host for the second-level paging file.

Microsoft did look at these approaches and saw the drawbacks on supportability, performance, and stability. This is why Dynamic Memory does not overcommit memory; service performance is more important than virtual machine density.

Smart Paging

Imagine that you have lots of idle virtual machines with Dynamic Memory Minimum RAM configured. Each of these virtual machines is on a host and has ballooned to below its Startup RAM as in Figure 3-6. The host’s RAM is completely filled with virtual machines. Now one of three scenarios happens:

  • The host reboots, thus requiring virtual machines (if configured to automatically start) to boot up with their Startup RAM.
  • One or more virtual machines reboot, each requiring its Startup RAM.
  • One or more virtual machines reset, and each one requires its Startup RAM.

Figure 3-6 Virtual machines using Minimum RAM have filled a host.

c03f006.eps

The problem is that the host RAM was already full of virtual machines, and these once idle virtual machines now need more memory than they had before—the sum of the Startup RAM values is higher than the sum of the Minimum RAM values. The host has a responsibility to get the virtual machines back up and running.

This is actually a rare, and theoretical, circumstance. Someone or something has deliberately squeezed a large number of idle virtual machines onto a host until the RAM was contended. Realistically, there will be only a few virtual machines that have ballooned down below their Startup RAM. A host might have failed in a densely populated Hyper-V cluster, causing virtual machines to fail over to this host. Or System Center Power Optimization might have been used too aggressively to consolidate virtual machines onto fewer hosts and shut down the idle ones, and a patching window might then have caused virtual machines to reboot.

A well-sized implementation would have prevented these issues. Failover Clustering will use the “best available” host (the one with the most RAM) to fail over each virtual machine. Always ensure that you have enough host capacity in a cluster to handle a failover scenario if performance is more important to you than budget when sizing high availability. Don’t be too ambitious with System Center Power Optimization; always leave room for at least one (or more in bigger environments) failover host and be aware of automated patching windows.

A management system such as System Center will also intervene, where possible, to load-balance and move virtual machines (with no downtime) to more-suitable hosts. All of these techniques should make this Minimum RAM vs. Startup RAM contention issue a rare event. But even then, Hyper-V has to successfully give these virtual machines enough memory for their Startup RAM to get them running again.

Smart Paging is a process whereby a Smart Paging file is temporarily created to simulate memory for a virtual machine when one of the three preceding scenarios occurs and the host does not have enough memory to start up previously running virtual machines that must be auto-started. This is not second-level paging; Smart Paging exists only to enable virtual machines to start up and return to their idle state, so they can balloon back down to their previous amount of memory, below their Startup RAM. Eventually this does occur, and the Smart Paging file will be removed. If a virtual machine continues to use a Smart Paging file, alerts in Event Viewer will inform you. This would be indicative of overly aggressive placement of virtual machines on this host.

There is not a single Smart Paging file for the host. Instead, each virtual machine that requires Smart Paging will have its own Smart Paging file. You can see the default storage location (where the virtual machine is stored) in the virtual machine settings under Smart Paging File Location. You can retrieve this path by using the following PowerShell command:

(Get-VM VM01).SmartPagingFilePath

You can change the path (while the virtual machine is powered off) by running this:

Set-VM VM01 -SmartPagingFilePath "D:\SmartPaging"

Other Dynamic-Memory Side Effects

There are two other side effects of increasing and decreasing memory in a virtual machine.

The Save State BIN File

A virtual machine can be configured to automatically save its state when a host shuts down. The state of the virtual machine’s processors and memory is written to a VSV file in the Virtual Machines subfolder where the virtual machine’s configuration is stored. If the virtual machine is configured to save its state when the host shuts down, Hyper-V will maintain a BIN file. This placeholder file is a few megabytes larger than the amount of memory currently assigned to the virtual machine. It ensures that the virtual machine will always have disk space to write its save state to.

The BIN file will increase and decrease in size to roughly match the memory that is allocated to the virtual machine. You should account for this when sizing your storage. This can be quite a surprise for administrators who are creating virtual machines with very large memories. You can remove this BIN file by changing the virtual machine to use one of the alternative Automatic Stop Actions in the virtual machine settings in the GUI or via PowerShell:

Set-VM VM01 -AutomaticStopAction ShutDown

The Guest OS Paging File

If the guest OS of the virtual machine is set to automatically manage the paging file, you might see it increase in size as memory is allocated to the virtual machine. This won’t be an issue with relatively small amounts of increase, but it would be a problem if a virtual machine was expanding from 512 MB to 64 GB of RAM. The paging file could attempt to fill the drive that it is on, thus depriving you of storage and reducing the optimal size of the paging file. Here are two suggestions:

  • Ensure that the paging file disk has sufficient space for growth.
  • Manually configure the guest OS paging file for virtual machines that are configured to have massive potential growth.
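As a sketch of the second suggestion, the paging file can be pinned to a fixed size from inside the guest OS. This assumes the classic Win32 WMI classes and uses illustrative sizes; tune the values to suit the workload:

```powershell
# Sketch: manually size the guest OS paging file (run inside the
# virtual machine, not on the host). Sizes are illustrative.

# Turn off automatic page file management
$cs = Get-CimInstance Win32_ComputerSystem
Set-CimInstance $cs -Property @{AutomaticManagedPagefile = $false}

# Pin the page file to a fixed size (values are in MB)
Get-CimInstance Win32_PageFileSetting |
    Set-CimInstance -Property @{InitialSize = 4096; MaximumSize = 4096}
```

A fixed size also prevents the paging file from fragmenting as the drive fills.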

Dynamic Memory Strategies and Considerations

There are many ways to configure Dynamic Memory. You might take a one-size-fits-all approach to all virtual machines, which might be suitable for a cloud. Or you might perform a very granular configuration per virtual machine in a private, non-cloud environment in which IT is directly involved in the management of everything.

Self-Service and Startup RAM

Say you run a cloud with self-service. Your customers, internal or external, can use a portal to deploy their own virtual machines from a library of templates, all of which are managed by your organization. Your virtual machine templates are set up with Dynamic Memory enabled. Every virtual machine will boot up with a Startup RAM of 512 MB and maybe grow to a Maximum RAM of 4 GB, 8 GB, or more. And there’s the problem.

Just about every application (at least from Microsoft) that you install requires a SQL Server database. SQL Server will refuse to install if the prerequisites check finds less than 1 GB RAM in the guest OS of the virtual machine. The virtual machine has had no need to increase the RAM beyond 1 GB, and the installer fails. You can be guaranteed lots of calls from customers looking for help. A few of those help desk calls will be from angry people who claim that they are paying for 4 GB RAM or more and that you’re trying to con them. You could ask them to temporarily spike their memory (run MSPAINT.EXE and set the image size to 10,000 by 10,000 pixels), but that won’t calm the customers down.

Alternatively, you can try different configurations with your virtual machine templates and prevent the issue from happening at all.

You could set the virtual machine to have a Startup RAM that is the most common minimum prerequisite for an installation. 1 GB is the minimum amount of memory required by SQL Server 2012. You could combine this with Minimum RAM set to 256 MB. Any new virtual machine will quickly balloon down from 1 GB until SQL Server or a similarly demanding application is installed. Any unused virtual machine will have only a very small footprint on the host.
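This template configuration could be sketched with the Set-VMMemory cmdlet; the virtual machine name VMTemplate01 and the 4 GB maximum are placeholders:

```powershell
# Sketch: Dynamic Memory settings for a self-service template.
# Startup of 1 GB satisfies installers such as SQL Server 2012;
# Minimum of 256 MB lets idle VMs balloon down to a small footprint.
Set-VMMemory VMTemplate01 -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 256MB -MaximumBytes 4GB
```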

Some public cloud operators are concerned that Dynamic Memory shows the guest OS administrator only the high-water mark of memory allocation since the guest OS last booted up. In that case, the Startup RAM could be set to match the Maximum RAM. This will require a lot of memory to be available to start the virtual machine. Once the virtual machine is running, it can balloon down to a much lower Minimum RAM. The downside to this approach is that you are almost guaranteeing Smart Paging activity if lots of virtual machines need to start up at the same time and there is no other host to load-balance to.

Using Maximum RAM

Here are three possible approaches, using names not created by Microsoft, to setting the Maximum RAM of a virtual machine:

Elastic This approach uses the default maximum RAM of 1 TB. A virtual machine will take what it can from the host, up to this amount. This is a very cloud purist approach, as customers can take what they need and pay for it later using Resource Metering. But the elastic approach has a few major problems.
Customers might use the resources, but will they pay for them? Legal contracts can mean much less than you might think in the real world of business relationships. How long might it take to get a customer to pay if you have to follow up with debt collection or legal actions?
The other issue is more common; there are a lot of bad business applications out there, and memory leaks happen. If a memory leak happens in a virtual machine with a massive Maximum RAM setting, then that bad application will keep eating host memory until it either hits the maximum or consumes all that the host has to offer.
Cloud computing is already difficult enough to size, given the unpredictable nature of future workloads. The elastic approach makes this even worse because the memory usage of virtual machines has no limit other than the total host capacity.
Realistic This is the opposite of the elastic approach. Administrators will create virtual machines and templates with smaller amounts of Maximum RAM. Virtual workloads are monitored, and the Maximum RAM setting can be increased as required. This increase could be automated, but that’s going to be subject to the same flaws as the elastic maximum memory approach.
The realistic approach requires monitoring and alerting of virtual machines so that an administrator can increase the Maximum RAM of those machines (without downtime). Each increase would be small, only enabling what memory assignment the virtual machine requires during genuine peak usage.
Although this completely optimizes memory allocation and solves the problems of the elastic approach, it does require some human effort, which has an operational cost of its own.
Optimistic Some organizations have a policy of allocating a reasonably large amount of memory to all machines (PC or server) so that there is no restriction on performance and administrator intervention is minimized. The thought is that memory is often a bottleneck on performance, and memory is cheaper than administrator time and reduced service levels for the business. In other words, being a memory scrooge is false economics.
The optimistic Dynamic Memory approach is a balance between being elastic and being realistic. Virtual machines are created with a reasonably generous amount of Maximum RAM. The virtual machines should still be monitored, but the amount of alerts and human intervention should be much lower. There is some risk of the problems of the elastic approach, but the amount of RAM is much lower, and so the problems should be fewer and contained.

Using Minimum RAM

We have to consider the potential impact of Smart Paging when using Minimum RAM. Aggressive usage of this useful feature on hosts that cannot be load-balanced (for example, by System Center) will lead to Smart Paging when these hosts are overloaded. Here are some things to consider:

  • Do not manually overload non-load-balanced hosts with virtual machines.
  • When designing hosts, always leave enough hosts to deal with host failure. For example, a 64-node Hyper-V cluster probably needs several failover hosts (N+4, maybe), not just one (N+1).
  • When using System Center Power Optimization, keep in mind that virtual machines will probably patch at night, when you have squeezed the virtual machines down to fewer hosts. Always leave enough host capacity powered up for System Center to load-balance the virtual machines as they power up. For example, you might shut down six of ten hosts and barely squeeze the virtual machines onto the remaining four hosts. That will cause Smart Paging during reboot/resets. Instead, shut down just five hosts and still reduce your power costs by 50 percent at night while avoiding Smart Paging.

Host Memory Reserve

There is a single pool of RAM on a host. From this pool, the Management OS consumes RAM for itself, and Dynamic Memory assigns memory to virtual machines. Without any controls, high-pressure virtual machines could consume the entire RAM from the host, and leave the Management OS without the ability to get any more memory. This could cause the Management OS to freeze (the virtual machines would still continue to operate) until the virtual machines balloon down.

Hyper-V has a control to prevent this from happening. The Host Memory Reserve is used by the Management OS to reserve host RAM for itself so that Dynamic Memory cannot claim it.

The formula for calculating this value has not been made public at the time of this writing. What we do know is that the setting is calculated much more conservatively than it was in Windows Server 2008 R2 with Service Pack 1. This means that a host will reserve slightly more memory for itself than it would have before. A Management OS requires at least 512 MB to start but usually requires around 2 GB RAM to run optimally.

The setting (the amount, in MB, to be reserved, specified in the REG_DWORD value MemoryReserve at HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization) should not be tampered with and should be left to be managed by Hyper-V, unless you are instructed otherwise by Microsoft support.
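If you suspect that an override has been set, you can inspect the value; it is normally absent, meaning Hyper-V calculates the reserve itself:

```powershell
# Sketch: check for a MemoryReserve override on the host.
# The value is usually not present, so suppress the resulting error.
$key = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization'
Get-ItemProperty -Path $key -Name MemoryReserve -ErrorAction SilentlyContinue
```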

Processors

You can configure the virtual CPUs of a virtual machine by editing the settings of the virtual machine and clicking Processor. Note that Processor can expand, to reveal further settings called Compatibility and NUMA.

Processor Settings

A virtual machine can have from 1 to 64 virtual processors. The number of virtual processors cannot exceed the number of logical processors in the host. Keep this in mind if you plan to move the virtual machine around different hosts by using Live Migration. Although you can configure the number of virtual CPUs in uneven numbers, this is not recommended. You can configure the number of virtual CPUs by using the Number Of Virtual Processors setting (which has a default of 1), as shown in Figure 3-7.

The Resource Control options allow you to customize how the virtual CPUs of this virtual machine access the physical logical processors that they will run on. There are three settings:

Virtual Machine Reserve (Percentage) Each virtual CPU will run on a host’s logical processor, taking up a share of its time. Using this setting, you can guarantee a minimum amount of time on the processor for this virtual machine. This is a percentage value from 0 (the default—nothing guaranteed) to 100 (only this virtual machine will use the logical processors it runs on). It defines how much of a logical processor a single processor of this virtual machine will consume.
The overall reservation of host processor for this virtual machine will be displayed in Percent Of Total System Resources. For example, a virtual machine with two virtual CPUs with a reserve of 50 percent, on a host with four logical processors, will be guaranteed a minimum of 25 percent of all processor capacity: (2 × 50%) ÷ 4 = 25 percent.
Typically this setting is not used, but it can be useful for some processor-heavy applications, such as SharePoint, that should be guaranteed some cores on the host without having to contend with other lightweight virtual machines.
Virtual Machine Limit (Percentage) This rarely used setting can be used to cap the logical processor utilization of the virtual CPUs. Just as with Virtual Machine Reserve, this setting is per virtual processor, and the total host processor limit will be calculated for you and displayed in Percent Of Total System Resources.
Relative Weight This is a weight value of between 0 and 10,000 that defaults to 100. This is used by Hyper-V to decide which virtual machines will get access to logical processors when there is processor contention. A virtual machine with a higher Relative Weight will get more time on the host’s logical processors.
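The reserve arithmetic above can be checked with a few lines of PowerShell; this is pure arithmetic, nothing Hyper-V specific, using the example figures from the text:

```powershell
# Worked example: 2 virtual CPUs, each reserving 50% of a logical
# processor, on a host with 4 logical processors.
$vCPUs = 2
$reservePerVcpu = 50
$logicalProcessors = 4
$percentOfTotal = ($vCPUs * $reservePerVcpu) / $logicalProcessors
$percentOfTotal   # 25, matching Percent Of Total System Resources
```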

Figure 3-7 Configuring virtual processors in a virtual machine

c03f007.tif

Set-VMProcessor is the PowerShell cmdlet for configuring virtual machine CPUs. You can mimic the Processor settings in Hyper-V Manager by running this:

Set-VMProcessor VM01 -Count 4 -Reserve 100 -Maximum 100 -RelativeWeight 1000

Compatibility

There is just a single setting in the GUI called Migrate To A Physical Computer With A Different Processor Version. With this setting enabled, you can use Hyper-V Live Migration to move a virtual machine to hosts with different versions of processor from the same manufacturer. For example, you could move a running virtual machine from a host with Intel Xeon E5 processors to a host with Intel Xeon 5500 processors with no impact on service availability.

This setting is off by default. This is because it reduces the physical processor functionality that the virtual machine can use. Basically, the virtual machine is reduced to the lowest set of processor features made by that manufacturer (Intel or AMD). This is quite a price to pay for Live Migration across hosts of different ages. You can avoid this in one of two ways:

Buy the Newest Processor You Can Afford When you install a new farm of Hyper-V hosts, you should purchase the latest processor that you can afford. If you require new hosts in 12 months, you still have a small chance of being able to buy the hosts with the same processor.
Start New Host or Cluster Footprints Larger environments might start entirely new host footprints with no intention of regularly live-migrating virtual machines between the newer hosts (or cluster) and the older hosts (or cluster). This is not advice that you can give to a small/medium enterprise and should be reserved for larger organizations.

If you have no choice, and you must support Live Migration between different generations of processor from the same manufacturer, you must enable the Migrate To A Physical Computer With A Different Processor Version setting. You can also do this with PowerShell:

Set-VMProcessor * -CompatibilityForMigrationEnabled $True

Hyper-V veterans might notice that one compatibility setting of the past (and for the past) is missing. Older operating systems such as Windows NT 4 Server require the virtual CPUs to run with compatibility for older operating systems enabled. Otherwise, these historic operating systems cannot be installed. You still have the setting, but it can be reached only via PowerShell; this shows you how often Microsoft expects people to use it!

Set-VMProcessor VM02 -CompatibilityForOlderOperatingSystemsEnabled $True

NUMA

Non-Uniform Memory Access (NUMA) is a hardware architecture driven by the way that physical processors access memory on the motherboard of servers.

What is NUMA?

A large processor, one with many cores, may be designed by the manufacturer to divide itself up to enable parallel direct connections to different banks of memory, as shown in Figure 3-8. This example shows an 8-core processor that has been divided into two NUMA nodes. Cores 0–3 are in NUMA Node 0 and have direct access to one set of memory. Cores 4–7 are in NUMA Node 1 and have direct access to another set of memory.

If a process that is running on cores in NUMA Node 0 requests memory, a NUMA-aware operating system or hypervisor will do its best to assign RAM from NUMA Node 0. This is because there is a direct connection to the memory in that same node. If there is not enough memory available in NUMA Node 0 to meet the request, then something called NUMA node spanning will occur, and the operating system or hypervisor will assign RAM from another NUMA node, such as NUMA Node 1. The process that is running in NUMA Node 0 has indirect access to the memory in NUMA Node 1 via the cores in NUMA Node 1. This indirect access will slow the performance of the process.

Figure 3-8 A NUMA architecture

c03f008.eps

NUMA is not unique to Hyper-V. This hardware architecture is one that operating systems (Windows and Linux) and hypervisors (Hyper-V and ESXi), and even some applications (such as IIS 8, SQL Server, and Oracle) have to deal with.

The example in Figure 3-8 is a simple one. Imagine that you have two of those 8-core processors, or maybe ten 16-core processors. You could have a lot of NUMA nodes. The truth is, there is no rule for NUMA architecture and sizing. It depends on the processor that you have and the memory placement/sizing on your motherboard. You could search for information for your processor, but the quickest solution might be to discover what your hardware actually has:

  • CoreInfo by Microsoft SysInternals (http://technet.microsoft.com/sysinternals/cc835722.aspx) can dump your NUMA architecture when run with the -N flag.
  • You can run the Get-VMHostNumaNode PowerShell cmdlet to see the architecture and sizes of the NUMA nodes.
  • The NUMA Node Memory counters in Performance Monitor show the memory size and utilization of your NUMA nodes.
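For example, the cmdlet approach might look like this; the property names shown are those exposed by the Windows Server 2012 Hyper-V module:

```powershell
# Sketch: dump the host's NUMA layout with the inbox cmdlet.
Get-VMHostNumaNode |
    Format-Table NodeId, ProcessorsAvailability, MemoryAvailable, MemoryTotal
```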

Hyper-V and NUMA

Hyper-V is NUMA aware. When a virtual machine runs in a NUMA node, Hyper-V will do its best to allocate memory to that virtual machine from the same NUMA node to minimize NUMA node spanning.

Dynamic Memory can cause a virtual machine to span NUMA nodes. Refer to Figure 3-8. Imagine that both NUMA nodes have 16 GB of RAM. A virtual machine, VM01, is running in NUMA Node 0 and has been allocated 10 GB of RAM from the direct access memory. Now memory pressure grows in the virtual machine, and Dynamic Memory needs to assign more RAM. There is no more free RAM left in NUMA Node 0, so Dynamic Memory needs to assign indirect access memory from another NUMA node.

You might want to prevent this from occurring. You can do this via Hyper-V Manager:

1. Select the host and click Hyper-V Settings in the Actions pane.
2. Open the NUMA Spanning settings.
3. Clear the Allow Virtual Machines To Span Physical NUMA Nodes check box (enabled by default).

Changing the NUMA spanning setting requires you to restart the Virtual Machine Management Service (VMMS, which runs in user mode in the management OS). You can do this as follows:

4. Open Services (Computer Management or Administrative Tools), find a service called Hyper-V Virtual Machine Management, and restart it. This will not power off your virtual machines; remember that they are running on top of the hypervisor, just as the management OS is.

You can view the current NUMA node spanning setting by running the following PowerShell snippet:

(Get-VMHost).NumaSpanningEnabled 

To disable NUMA spanning on the host and restart the VMMS, you can run the following piece of PowerShell:

Set-VMHost -NumaSpanningEnabled $false
Restart-Service "Hyper-V Virtual Machine Management"

NUMA was not a big deal with Windows Server 2008 R2 Hyper-V; it was something that was dealt with under the hood. We were restricted to a maximum of four virtual CPUs per virtual machine, so NUMA node spanning was uncommon. But Windows Server 2012 Hyper-V supports up to 64 virtual CPUs and 1 TB RAM in a single virtual machine; there is no doubt that a larger virtual machine (even six or eight virtual CPUs, depending on the host hardware) will span NUMA nodes. And that could have caused a problem if Microsoft had not anticipated it.

Windows Server 2012 Hyper-V reveals the NUMA node architecture that a virtual machine resides on to the guest OS (Windows or Linux) of the virtual machine when it starts up. This allows the guest OS to schedule processes on virtual CPUs and assign memory to processes while respecting the NUMA nodes that it is running on. This means we get the most efficient connections at the virtual layer, and therefore the physical layer, between processors and memory. And this is how Microsoft can scale virtual machines out to 64 virtual processors and 1 TB RAM without making compromises.


NUMA and Linux
Linux might be NUMA aware, but it does not scale out very well. Once a Linux operating system, on any physical or virtual installation, scales above seven processors or 30 GB RAM, you must set numa=off in the GRUB boot configuration. This will reduce the performance of memory and of the services running in that Linux machine. In the case of Linux, it would be better to scale out the number of virtual machines working in parallel, if the application is architected to allow this, rather than scaling up the specification of the virtual machines.
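On a distribution that uses legacy GRUB, for example, the kernel line in the boot configuration would be extended as follows; the kernel version and root device shown are illustrative:

```
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_root numa=off
```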

There is one remaining consideration with NUMA and larger virtual machines that span NUMA nodes. The NUMA architecture that the guest operating system is using cannot be changed until it powers off. What happens when the running virtual machine is moved (via Live Migration) to another host with a smaller NUMA node architecture? The answer is that the guest operating system will be using invalid virtual NUMA nodes that are not aligned to the physical NUMA structure of the host, and the performance of the services provided by the virtual machine will suffer.

Ideally, you will always live-migrate virtual machines only between identical hosts, and any new hosts will be in different server footprints. And just as with processor compatibility, this can be an unrealistic ambition.

Figure 3-9 shows the NUMA settings of a virtual machine, which you can find by editing the settings of a virtual machine and browsing to Processor ⇒ NUMA. The Configuration area shows the current NUMA configuration of the virtual machine. This example shows two virtual processors that reside on a single socket (physical processor) in a single NUMA node.

Figure 3-9 Two virtual processors that reside on a single socket (physical processor) in a single NUMA node

c03f009.tif

The NUMA Topology area allows you to customize the virtual NUMA node. This should be sized to match the smallest NUMA node on the hosts that your virtual machine can be moved to via Live Migration. This can be tricky to do, so Microsoft made it easy:

1. Move the virtual machine to the host with the smallest NUMA nodes.
2. Shut down the virtual machine.
3. Open the NUMA settings of the virtual machine.
4. Click the Use Hardware Topology button to complete the NUMA settings for the virtual machine.

We could not find a PowerShell alternative at the time of this writing.

Virtual Storage

A virtual machine needs somewhere to install its operating system and applications, and somewhere to store its data. This section describes the storage of Hyper-V virtual machines.

The Storage Formats

You can use three disk formats in Hyper-V: pass-through disks, VHD format, and VHDX format.

Pass-Through Disks

A raw LUN, known as a pass-through disk, can be assigned to a virtual machine. There are two reasons to do this:

  • Getting nearly 100 percent of the potential speed of the raw disk (which is what virtual hard disks offer) is not enough. Pass-through disks are often used in lab environments to perform tests at the fastest possible speed.
  • You need to create guest clusters that can share a LUN, where using virtual or file server storage is not an option.

Perform the following to attach a pass-through disk to a virtual machine:

1. Create a LUN in the underlying physical storage.
2. Ensure that the LUN is visible in Disk Management on the host. Do not bring the LUN online on the host.
3. Edit the settings of the virtual machine and browse to the controller that you want to add the disk to.
4. Select Hard Drive and click Add.
5. Change Controller and Location as required to select the controller of choice and a free channel.
6. Select the Physical Hard Disk radio button and select the raw disk that you want to connect to the virtual machine. Note that the name, such as Disk 2, will match the label in Disk Management.

Using PowerShell you can use Get-Disk to identify the LUN (disk number). You can attach this LUN as a pass-through disk to the virtual machine by running Add-VMHardDiskDrive.

Add-VMHardDiskDrive VM02 -ControllerType SCSI -ControllerNumber 0 `
-ControllerLocation 0 -DiskNumber 2

Pass-through disks are the least flexible and most expensive to own type of storage that can be used. They cannot be moved; they require traditional backup; they require application- or storage-based DR replication; and they do not lend themselves to self-service creation/administration in a cloud.

Virtual hard disks are the alternative to pass-through disks. Virtual hard disks are just files that simulate disks. Because they are software, they are easy to manage, and they do lend themselves to thin provisioning, self-service in a cloud, and easy administration. Virtual hard disks are easy to back up (they are just files), easy to replicate to DR sites, and easy to move. And most important, virtual hard disks can run at nearly the speed of the underlying physical storage.

VHD Format

The VHD format has been around since the 1990s, when it was created by Connectix, a company that Microsoft acquired and whose technology became the foundation of Microsoft’s machine virtualization. The VHD disk format has been supported by Microsoft Virtual Server and by Hyper-V since Windows Server 2008. VHD has a major limitation—which wasn’t considered a limitation just a few years ago: VHD cannot scale beyond 2,040 GB (just under 2 TB).

VHDX Format

Windows Server 2012 adds a new disk format called VHDX that can scale out to 64 TB, which is also the maximum LUN size supported by NTFS and ReFS. The VHDX format also has the following features:

  • It allows application developers to use VHDX as a container and store metadata in the file.
  • VHDX maintains an internal log (within the file) to maintain consistency of the contained data. For example, it should prevent data corruption during power outages.
  • VHDX (on IDE or SCSI virtual controllers) supports UNMAP to allow Windows 7 SP1 and Windows Server 2008 R2 SP1 and later guest OSs to inform supporting underlying storage to reclaim space that used to contain data for thin provisioning (useful for dynamic VHDX—dynamically expanding disks will be explained later in the chapter). This requires the integration components to be installed in the guest OS. The unused space is reclaimed when the volume is defragged or when you run Optimize-Volume -DriveLetter <X> -ReTrim. Note that pass-through disks attached to Virtual Fibre Channel or SCSI controllers can also support UNMAP.
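As a sketch, a dynamically expanding VHDX can be created with the New-VHD cmdlet; the path and size here are placeholders:

```powershell
# Sketch: create a 200 GB dynamically expanding VHDX on the host.
New-VHD -Path 'D:\VMs\VM01\Data01.vhdx' -SizeBytes 200GB -Dynamic
```

The file starts small and grows as the guest writes data, which is what makes the UNMAP space reclamation described above worthwhile.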

Other Uses for Virtual Hard Disks
The virtual hard disk is one of the two datacenter container file types used by Microsoft. Windows Imaging File Format (WIM) files are used in operating system deployment. For example, you’ll find some WIM files in the Windows Server 2012 installation media: one called boot.wim boots up the installation routine (a boot image), and install.wim contains the operating system image that will be deployed to the machine.
Virtual hard disks are used for virtualization to store virtual machine data, but they are also used for a few other things.
A computer can be configured to boot from a virtual hard disk instead of an operating system on a traditional NTFS-formatted LUN. This is referred to as a Native VHD or boot from VHD. Note that the use of the term VHD in this case comes from a time when the VHD format was the only kind of virtual hard disk, and sometimes VHD is used as a general term to include VHD and VHDX. You can boot from either VHD or VHDX files when using Windows Server 2012 or Windows 8 boot loaders.
A virtual hard disk can also be used when Windows Server Backup (WSB) is backing up computers (see Chapter 10, “Backup and Recovery”). The backup data is stored in a virtual hard disk. Windows Server 2012 uses VHDX by default for the backup media; VHDX can scale out to 64 TB and can support backing up of larger volumes than the VHD format, which is limited to 2,040 GB.

Disk Sector Alignment

Alignment is a process whereby files are constructed in block sizes that match the way that they are physically stored on disks. Hard disks have used 512-byte sectors up to now. That means that data is read from and written to the disk in 512-byte chunks. Therefore, pretty much every operating system, hypervisor, and file type has been designed to be aligned for 512-byte sectors; for example, the VHD format is designed to be read and written in 512-byte chunks.

However, a change has been occurring in the storage business. Disks have started to adopt larger 4K-sector sizes. This allows the disks to get bigger and maintain performance. The storage industry gives us two kinds of 4K disk:

Native 4K Disks This disk format requires an operating system and files that are aligned to and support 4K sectors without any assistance or emulation. The disk reads and writes 4K chunks of data so the physical sector size of the disk is 4K. The operating system will read and write 4K chunks of data so the logical sector size of the disk is 4K.
512-Byte Emulation (512e) Disks The storage industry also provides 4K disks with firmware that emulates a 512-byte disk. When an operating system requests a 512-byte sector, the disk reads the containing 4K sector and extracts the requested 512 bytes. The real concern is the performance of the emulated write operation, known as read-modify-write (RMW). The operating system sends 512 bytes down to the disk to write; the disk must read the 4K sector that contains those 512 bytes, inject the 512 bytes into the 4K of data, and write the 4K sector back. RMW requires disk activity that 4K-aligned files do not need, and it is therefore much slower. Microsoft believes (http://technet.microsoft.com/library/hh831459) RMW writes could be between 30 percent and 80 percent slower than nonemulated writes. In this case, the physical sector size is 4K, but the logical sector size is 512 bytes.

Windows Server 2012 supports native 4K disks (http://msdn.microsoft.com/library/windows/desktop/hh848035(v=vs.85).aspx). You can install Windows Server 2012 on a native 4K disk and use it as a Hyper-V host. Windows Server 2012 will read and write 4K at a time without any emulation.
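If you are unsure what kind of disk is behind a volume, the file system will report both sector sizes. For example, on Windows Server 2012 (D: being a hypothetical NTFS volume):

```
fsutil fsinfo ntfsinfo D:
```

In the output, Bytes Per Sector is the logical sector size and Bytes Per Physical Sector is the physical one: 512/512 indicates a native 512-byte disk, 512/4096 a 512e disk, and 4096/4096 a native 4K disk.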

The structure of a VHD file that is created on a Windows Server 2012 host is padded so that it is aligned to 4K. Microsoft has also added an RMW process to VHD to allow it to be stored on native 4K disks. Note that VHDs created on hosts older than Windows Server 2012 or Hyper-V Server 2012 lack this alignment padding, so you will need to convert them before they can benefit from native 4K storage.

Microsoft is the first virtualization manufacturer to produce a virtual hard disk format (VHDX) that is designed to allow native 4K alignment without RMW. This makes VHDX the best storage option on 512-byte-sector disks (scalability and data consistency), on 512e disks, and on 4K disks (performance). The VHD format is retained in Windows Server 2012 for backward compatibility. A best practice is to convert VHD files into the VHDX format if you do not require support for legacy versions of Hyper-V.


Windows 8 Client Hyper-V and VHDX
The VHDX format and 4K matching capability of VHDX are also supported by Windows 8 Client Hyper-V, giving you offline portability between virtualization in the datacenter and virtualization on the client device.

The Types of Virtual Hard Disk

There are three types of virtual hard disk: fixed size, dynamically expanding, and differencing. The three types are available if you choose either VHD or VHDX as your format.

Fixed-Size Virtual Hard Disk

When you create a fixed virtual hard disk of a certain size, a virtual hard disk is created that is that size on the physical storage. For example, if you create a 100 GB fixed VHDX, a 100 GB VHDX file is created.

The creation process for a fixed virtual hard disk can take some time. This is because each block of the VHDX is zeroed out to obscure whatever data may have been stored on the underlying file system.

Fixed virtual hard disks are the fastest of the virtual hard disk types, and are often recommended for read-intensive applications such as SQL Server or Exchange.

However, many virtual machines use only a small percentage of their virtual storage space. Any empty space is wasted, because the virtual hard disk file is fully utilizing the underlying physical storage (preventing SAN thin provisioning from having an effect). You could deploy small fixed virtual hard disks, monitor for free space alerts, and expand the virtual hard disks as required. However, this requires effort and the virtual hard disk to be offline (usually requiring the virtual machine to be powered off too).


Offloaded Data Transfer
Windows Server 2012 has added support for using Offloaded Data Transfer (ODX) with ODX-capable SANs. ODX was originally intended to speed up the copying of files between servers on the same SAN. Without ODX, the source server reads the file from a LUN on the SAN and sends it over the network to the destination server, which then writes the file to another LUN on the same SAN. With ODX, the servers and SAN exchange secure tokens, and the SAN copies the file from one LUN to another without the data ever traveling through the servers. This can be advantageous for the deployment of virtual machines to SAN-attached hosts from a library that is stored on the SAN; it will be very helpful for a self-service cloud.
The creation of fixed virtual hard disks is also accelerated by ODX. Some Hyper-V customers refused to adopt the fixed type because it was slow to create, even though it offered the best performance. ODX greatly reduces the time to create fixed virtual hard disks and should resolve this issue.

Dynamically Expanding Virtual Hard Disk

The dynamically expanding (often shortened to dynamic) type of virtual hard disk starts out as a very small file of just a few megabytes and grows to the size specified at the time of creation. For example, you could create a 127 GB dynamic VHDX. The file will be 4 MB in size until you start to add data to it. The file will gradually grow to 127 GB as you add data.
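For example, a minimal New-VHD command to create such a disk might look like this (the path is hypothetical):

```powershell
# Create a 127 GB dynamically expanding VHDX; the file starts at a few MB
New-VHD -Path "D:\Virtual Machines\VM01\disk.vhdx" -Dynamic -SizeBytes 127GB
```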

The dynamic virtual hard disk has spawned a lot of debate over the years in the Hyper-V community. There are those who claim that the dynamic type is just as fast as the fixed virtual hard disk, and there are those who say otherwise. In Windows Server 2008 R2, Microsoft increased the block size in a dynamic VHD from 512 KB to 2 MB. This meant that the dynamic VHD would grow more quickly. The growth rate was increased again with dynamic VHDX in Windows Server 2012.

However, real-world experience shows that write speed is not the big concern with the dynamic type. Imagine lots of dynamic virtual hard disks stored on the same physical volume, all growing simultaneously in random patterns. Over time they fragment, and this fragmentation scatters each VHDX all over the volume. Anecdotal evidence suggests that the read performance of the VHDX files suffers greatly, which hurts intensive read operations such as database queries and could reduce the performance of interactive applications to below service-level agreement levels. Fixed virtual hard disks offer superior read performance over time in a production environment.

The benefit of the dynamic virtual hard disk is that it consumes only the space required by the contained data, plus a few additional megabytes. This can reduce the cost of purchasing and owning physical storage and enables SAN thin provisioning to be effective. Dynamic virtual hard disks are very useful in lab environments and for those who need to minimize spending and are not concerned about the loss in read performance.


It’s Not Just a Choice of One or the Other
If you are concerned just about performance, the fixed type of virtual hard disk would seem to be the correct way to go. But can you deploy fixed virtual hard disks quickly enough in your cloud? Can you accelerate the creation or copy of a 100 GB fixed virtual hard disk using ODX? What if you do not have ODX because you are using SMB 3 storage?
Some organizations have adopted a hybrid model. In this situation, virtual machines are configured to boot using dynamic virtual hard disks attached to IDE Controller 0 and Location 0. This means that a large virtual hard disk takes up a small amount of space in a template library and takes a shorter amount of time to deploy than a fixed-size alternative.
Any data will be stored in fixed virtual hard disks that are attached to the SCSI controller(s) of the virtual machine. Storing data in this way will ensure that the data virtual hard disk will be less prone to fragmentation. This also isolates the data, making the configuration and management of the virtual machine and its guest OS/data more flexible.
Using multiple types of virtual hard disk does come at a cost; it increases the complexity of deployment and maintenance.

Differencing Virtual Hard Disk

The differencing type is used when you need to provision one or more virtual machines and you need to:

  • Do it quickly
  • Use as little storage space as possible

When you create a differencing virtual hard disk, you must point it at a parent virtual hard disk that already has data in it. The differencing disk will start out very small with no data. A virtual machine will use the differencing disk as follows:

  • Data that existed before the differencing disk was created is read from the parent disk.
  • Any new data is written to the differencing disk, causing it to grow over time.
  • Data written since the differencing disk was created is read from the differencing disk.

The traits of differencing disks make them very useful for pooled VDI virtual machines (the sort that are created when a user logs in and deleted when they log out) and virtual machines used for software testing. A virtual machine is prepared with an operating system and generalized using Sysprep (if the guest OS is Windows) with an automated configuration answer file. The virtual machine is destroyed, and the hard disk is retained and stored in a fast shared location that the host(s) can access. This virtual hard disk will be the parent disk. New virtual machines are created with differencing virtual hard disks that point to the generalized virtual hard disk as their parent. This means that new virtual machines are created almost instantly and only require the guest OS to start up and be customized by the answer file.
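A provisioning script for this scenario might be sketched as follows; the paths, names, and switch are hypothetical, and the parent VHDX is assumed to contain the generalized (Sysprep) guest OS:

```powershell
# Create a differencing disk that points at the generalized parent
New-VHD -Path "D:\VMs\VDI01\VDI01.vhdx" -Differencing `
  -ParentPath "D:\Library\parent.vhdx"

# Create a virtual machine that boots from the differencing disk
New-VM -Name VDI01 -MemoryStartupBytes 1GB `
  -VHDPath "D:\VMs\VDI01\VDI01.vhdx" -SwitchName ConvergedNetSwitch
Start-VM VDI01
```

Because the differencing disk starts out nearly empty, the virtual machine is available almost instantly; only the guest OS startup and answer-file customization take time.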

Differencing disks are the slowest of all the virtual hard disk types. Differencing disks will also grow over time. Imagine deploying applications, security patches, service packs, and hotfixes to virtual machines that use differencing disks. Eventually they will grow to be bigger than the original parent disk, and they will run slower than the alternative dynamic or fixed types.

A common problem is that people assume that they can mess with the parent disk. For example, they try to expand it or replace it with a newer version with an updated guest OS. This will break the link between the parent and the differencing disks.

It is for these reasons that differencing disks should not be used for any virtual machine that will be retained for more than a few hours or days, such as a production server.

Virtual Hard Disk Management

You will use the same tools to create VHD and VHDX formats, and VHDX will be the default format.

Creating Virtual Hard Disks

You can create virtual hard disks in several ways, including these:

  • The New Virtual Machine Wizard will create a dynamic VHDX file as the boot disk for a new virtual machine. We have already covered this earlier in the chapter.
  • Disk Manager can be used to create virtual hard disks. We do not cover this method in this book.
  • You can click New ⇒ Hard Disk in the Actions pane in Hyper-V Manager to start the New Virtual Hard Disk Wizard. This will create a new virtual hard disk in a location of your choosing that you can later attach to a virtual machine.
  • You can edit the storage controller settings of a virtual machine, click Add to add a new Hard Drive, and click New to start the New Virtual Hard Disk Wizard. This will add the new virtual hard disk (stored in your preferred location) and attach it to the controller with the Location (channel) of your choosing.
  • You can use the New-VHD PowerShell cmdlet.

The New Virtual Hard Disk wizard will step you through the decision-making process of creating a new virtual hard disk:

1. The Choose Disk Format option enables you to select either VHD or VHDX (selected by default).
2. You are asked which type of virtual hard disk to use (dynamic is selected by default) in the Choose Disk Type screen.
3. Specify Name And Location asks you to enter the filename of the virtual hard disk and the folder to store the file in.
4. The Configure Disk screen asks you to specify how the virtual hard disk will be created.

If you are creating a differencing virtual hard disk, you are simply asked to enter the path and filename of the desired parent disk.

If you are creating a dynamic or fixed virtual hard disk, the screen is different. If you want a new empty virtual hard disk, you enter the desired size of the disk. Alternatively, you can create a new virtual hard disk that is made using the contents of a LUN that is attached to the server or from another virtual hard disk. These options can be used in a few ways, including the following:

  • You want to convert a pass-through disk or a LUN into a virtual hard disk. This might be useful for people who used pass-through disks in past versions of Hyper-V and who now want to convert them to VHDX files that are more flexible and equally scalable.
  • You would like to convert an existing virtual hard disk into a Windows Server 2012 virtual hard disk. This could be useful if you have used a VHD on an older (Windows Server 2008 or Windows Server 2008 R2) version of Hyper-V and want to convert it into one that is padded for better performance on 512e physical disks.

Using the PowerShell alternative can be a bit quicker than stepping through the New Virtual Hard Disk Wizard, especially if you need to create lots of virtual machines or virtual hard disks. The following example creates a 100 GB fixed VHDX:

New-VHD -Path "D:\Virtual Machines\parent.vhdx" -Fixed -SizeBytes 100GB

The next example creates a differencing VHDX that uses the first VHDX as its parent:

New-VHD -Path "D:\Virtual Machines\VM01\differencing.vhdx" -ParentPath `
"D:\Virtual Machines\parent.vhdx"

If you want to create VHDX files that are optimized for 512e or 4K disks, you must use the New-VHD PowerShell cmdlet. There are two ways to configure a VHDX to get the best performance and to get operating system compatibility:

Physical Sector Size You can optimize the file structure of a VHDX to match the sector size of the physical storage that it is stored on. This will affect performance of the VHDX, the guest OS storage, and the guest application(s). Remember that a virtual machine’s storage might be moved from one storage platform to another.
Logical Sector Size A VHDX has sector sizes that are seen by the guest OS and application(s). You can configure a VHDX to have a specific sector size to optimize the performance of your guest operating system and application(s). Remember that some operating systems or applications might not support 4K sectors.

Creating VHDX files with a sector size that doesn't match the physical storage will greatly reduce your storage performance. Therefore, it is recommended that you match the physical sector size of VHDX files with the type of physical disks that you are using. For example, a 4K-capable guest OS should use VHDXs with 4K logical and physical sector sizes if the disk is 4K. However, a non-4K-capable guest OS should use VHDXs with a 512-byte logical sector size and a 4K physical sector size to get RMW and the best compatibility possible when stored on a 4K disk. Table 3-1 shows how to match the physical sector size of the VHDX with the physical disk to get the best performance.

Table 3-1: Physical disk vs. VHDX physical sector size

Physical Disk Type   VHDX Physical Sector Size
512-byte             512, 512e, 4K (no difference in performance)
512e                 4K, 512e, 512 (decreasing order of performance)
4K                   4K, 512e (decreasing order of performance)

The following example creates a VHDX with 4K logical and physical sector sizes:

New-VHD -Path "D:\Virtual Machines\VM02\test.vhdx" -Dynamic -SizeBytes 100GB `
-LogicalSectorSizeBytes 4096 -PhysicalSectorSizeBytes 4096

The guest OS and application(s) must support 4K sectors if you choose 4K logical and physical sector sizes for the VHDX. The next example creates a VHDX file that matches the 4K sector of the physical storage but uses 512-byte logical sectors:

New-VHD -Path "D:\Virtual Machines\VM02\test.vhdx" -Dynamic -SizeBytes 100GB `
-LogicalSectorSizeBytes 512 -PhysicalSectorSizeBytes 4096

ReFS and Hyper-V
The Resilient File System (ReFS) is Microsoft’s “next generation” file system, offering greater levels of availability and scalability than NTFS (see http://msdn.microsoft.com/library/windows/desktop/hh848060(v=vs.85).aspx). ReFS currently has limited support in Windows Server 2012, but it is envisioned to be a successor to NTFS, which has been around since the early 1990s.
ReFS can be used on nonclustered volumes to store virtual machines. However, you must disable the ReFS Integrity Streams (the automated file system checksums) if you want to be able to store VHD or VHDX files on the volume.

Attaching Virtual Hard Disks

In Hyper-V Manager, you can edit the settings of a virtual machine, browse to the controller of choice, select the Location (channel), click Add, and browse to or enter the path to the desired virtual hard disk.

Alternatively, you can use the Add-VMHardDiskDrive PowerShell cmdlet. This example adds an existing (fixed) VHDX file to the boot location of the IDE 0 controller of a virtual machine called VM01:

Add-VMHardDiskDrive VM01 -ControllerType IDE -ControllerNumber 0 `
-ControllerLocation 0 -Path `
"D:\Virtual Machines\VM01\Virtual Hard Disks\VM01.vhdx"

You could instead attach a virtual hard disk to a SCSI controller in the virtual machine:

Add-VMHardDiskDrive VM01 -ControllerType SCSI -ControllerNumber 0 `
-ControllerLocation 0 -Path `
"D:\Virtual Machines\VM01\Virtual Hard Disks\VM01.vhdx"

Modifying Virtual Hard Disks

You will probably want to view the settings of a virtual hard disk before you modify it. You can do this in a few ways, including these:

  • Browse to an attached virtual hard disk in the settings of a virtual machine and click Inspect.
  • Run the Inspect task in the Actions pane of Hyper-V Manager and browse to the virtual hard disk.
  • Run Get-VHD with the path to the virtual hard disk.
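For example (the path is hypothetical):

```powershell
# Inspect an offline or attached virtual hard disk file
Get-VHD -Path "D:\Virtual Machines\VM01\VM01.vhdx"
```

The output shows the format, type, parent path (for a differencing disk), logical and physical sector sizes, current file size, and maximum size of the virtual hard disk.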

The Edit Virtual Hard Disk Wizard will step you through the possible actions for modifying a virtual hard disk. Note that the virtual hard disk must be offline (usually requiring an attached virtual machine to be offline) to modify it. There are two ways to access it in Hyper-V Manager:

  • Click Edit Disk in the Actions pane of Hyper-V Manager and browse to the virtual hard disk.
  • Browse to the desired virtual hard disk in the settings of a virtual machine and click Edit.

The Choose Action screen will present the possible actions, depending on whether the virtual hard disk is of fixed, dynamic, or differencing type:

  • Dynamic and differencing virtual hard disks can be compacted to reduce their storage utilization.
  • You can convert from one type to another, such as dynamic to fixed.
  • Virtual hard disks can be expanded.
  • Differencing disks can be merged with their parent to create a single virtual hard disk with all of the data.

The wizard will vary depending on the action that is chosen:

Compact This action compacts the size of the virtual hard disk on the physical storage and requires no further information from you.
The Optimize-VHD (http://technet.microsoft.com/library/hh848458.aspx) cmdlet also performs this action.
Convert Using this action, you can create a copy of the virtual hard disk of the desired format and type in the location of your choice. The time required depends on the storage speed, storage connection, and the size of the virtual hard disk. Creating a fixed virtual hard disk without ODX support might take some time.
A new virtual hard disk of the desired type and format is created. It is up to you to replace and/or delete the original virtual hard disk.
You can use Convert-VHD (http://technet.microsoft.com/library/hh848454.aspx) to switch between virtual hard disk formats and types by using PowerShell:
Convert-VHD "D:\Virtual Machines\VM01\Disk.vhd" -DestinationPath "D:\Virtual Machines\VM01\Disk.vhdx"
Expand This action allows you to increase the size of a virtual hard disk. This operation would be near instant for a dynamic disk, very quick for a fixed disk with ODX support, and potentially slow for a fixed disk with no ODX support.
Performing an expand action is like adding a disk to a RAID array. You will need to log in to the guest OS and use its disk management tools to modify the file system. For example, you could use Disk Management if the guest OS was Windows, and expand the volume to fill the newly available disk space in the virtual hard disk.
PowerShell lets you expand or shrink a virtual hard disk by using Resize-VHD (http://technet.microsoft.com/library/hh848535.aspx). Don’t make any assumptions; make sure that the guest OS and file system in the virtual hard disk support the shrink action before you use it (tested backups are recommended!).
Merge The merge action updates the parent disk of a selected differencing virtual hard disk with the latest version of the sum of their contents. There are two options. The first allows you to modify the original parent disk. The second option allows you to create a new fixed or dynamic virtual hard disk.
Do not use this action to update the parent virtual hard disk if it is being used by other differencing disks.
You can also use the Merge-VHD PowerShell cmdlet (http://technet.microsoft.com/library/hh848581.aspx) to perform this action.

Set-VHD (http://technet.microsoft.com/library/hh848561.aspx) is also available in PowerShell to change the physical sector size of a virtual hard disk or to change the parent of a differencing disk.
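As a quick sketch, the PowerShell equivalents of the wizard actions look like this; all paths are hypothetical, and each virtual hard disk must be offline:

```powershell
# Compact a dynamic or differencing disk to reduce its file size
Optimize-VHD -Path "D:\Virtual Machines\VM01\data.vhdx" -Mode Full

# Expand a virtual hard disk to 200 GB
Resize-VHD -Path "D:\Virtual Machines\VM01\data.vhdx" -SizeBytes 200GB

# Merge a differencing disk into its parent
Merge-VHD -Path "D:\Virtual Machines\VM01\diff.vhdx" `
  -DestinationPath "D:\Virtual Machines\parent.vhdx"
```

Remember that after an expand, you still need to grow the file system inside the guest OS.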

Removing and Deleting Virtual Hard Disks

You can remove virtual hard disks from a virtual machine. Virtual hard disks that are attached to IDE controllers can be removed only when the virtual machine is not running. SCSI-attached virtual hard disks can be removed from a running virtual machine, but you should make sure that no services or users are accessing the data on them first.

You can remove a virtual hard disk from a virtual machine in Hyper-V Manager by editing the settings of the virtual machine, browsing to the controller, selecting the disk, and clicking Remove. You can use PowerShell to remove a virtual hard disk. This example will remove the first virtual hard disk from the first SCSI controller, even with a running virtual machine:

Remove-VMHardDiskDrive VM01 -ControllerType SCSI -ControllerNumber 0 `
-ControllerLocation 0

Removing a virtual hard disk from a virtual machine will not delete it. You can use Windows Explorer, the Del command, or the Remove-Item PowerShell cmdlet to delete the virtual hard disk file.
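The two steps might look like this in PowerShell (the virtual machine name and path are hypothetical):

```powershell
# Detach the data disk from the second SCSI location of VM01
Remove-VMHardDiskDrive VM01 -ControllerType SCSI -ControllerNumber 0 `
  -ControllerLocation 1

# Then delete the now-unused file from the host's storage
Remove-Item "D:\Virtual Machines\VM01\data.vhdx"
```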


The Virtual Machine Files
A virtual machine is made up of several files:
Virtual Machine XML File This is stored in the Virtual Machines subfolder and describes the virtual machine. It is named after the unique ID (GUID) of the virtual machine.
BIN File This placeholder file reserves space on disk for the memory contents of a virtual machine that has a snapshot or saved state.
VSV File This is a saved state file.
VHD/VHDX Files These are the two types of virtual hard disk.
AVHD/AVHDX Files These are snapshots, and there is one for each virtual hard disk.
Smart Paging File Hyper-V temporarily creates this file when Smart Paging is required.
Hyper-V Replica Log This file exists when you enable replication of a virtual machine (see Chapter 12, “Hyper-V Replica”).
Remember that a virtual machine can also use raw LUNs if you choose to use pass-through disks.

Network Adapters

Virtual machines can have one or more virtual network adapters that are connected to a virtual switch. You can learn much more about this topic, and the advanced configuration of virtual network adapters, in Chapter 4.

Adding Network Adapters

You can add, remove, and configure virtual network adapters in a virtual machine’s settings in Hyper-V Manager. To add a virtual network adapter, you can open Add Hardware, select either a (synthetic) Network Adapter or a Legacy Network Adapter, and click Add. You can use the Add-VMNetworkAdapter cmdlet to add network adapters by using PowerShell. The following example adds a (synthetic) network adapter without any customization:

Add-VMNetworkAdapter VM01

You can also add a legacy network adapter. Add-VMNetworkAdapter does allow you to do some configuration of the new virtual network adapter. This example creates a legacy network adapter and connects it to a virtual switch:

Add-VMNetworkAdapter VM01 -IsLegacy $True -SwitchName ConvergedNetSwitch
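You can verify the result with Get-VMNetworkAdapter, and an existing adapter can be attached to a different virtual switch with Connect-VMNetworkAdapter; the switch name below is hypothetical:

```powershell
# List the virtual network adapters of VM01 and their switch connections
Get-VMNetworkAdapter VM01

# Connect the adapter(s) of VM01 to the named virtual switch
Connect-VMNetworkAdapter -VMName VM01 -SwitchName ConvergedNetSwitch
```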

Configuring Virtual Network Adapters

You can configure the settings of a virtual network adapter in Hyper-V Manager by browsing to it in the virtual machine’s settings, as shown in Figure 3-10. The basic settings of the virtual network adapter are as follows:

Figure 3-10 Network Adapter settings in a virtual machine

Virtual Switch The name of the virtual switch that the virtual machine is connected to. Note that the virtual machine will expect to find an identically named virtual switch if you move it from one host to another via Live Migration.
Enable Virtual LAN Identification Enabling this allows a virtual network adapter to filter VLAN traffic for a specific VLAN ID. This means that the virtual network adapter will be able to communicate on the VLAN. It can communicate only with other VLANs via routing on the physical network.
The VLAN Identifier This is where you enter the VLAN ID or tag for the required VLAN. This setting is grayed out unless you select the Enable Virtual LAN Identification check box.
Bandwidth Management Windows Server 2012 has built-in network Quality of Service (QoS). This allows you to guarantee a minimum or maximum amount of bandwidth for a virtual machine. The PowerShell alternative is much more powerful than this GUI option.
Remove Button This can be used to remove the virtual network adapter.
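The Bandwidth Management setting can also be scripted; Set-VMNetworkAdapter accepts a relative weight (0-100) for minimum bandwidth and an absolute cap for maximum bandwidth. The values below are only illustrative, and the minimum weight assumes the virtual switch was created with weight-based QoS:

```powershell
# Give VM01 a minimum bandwidth weight of 50 and cap it at 100 MB/second
Set-VMNetworkAdapter VM01 -MinimumBandwidthWeight 50 -MaximumBandwidth 100MB
```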

All of these settings are covered in great detail in Chapter 4. The Hardware Acceleration settings are also covered in that chapter.

The Advanced Features settings, shown in Figure 3-11, allow you to further configure a virtual network adapter.

Figure 3-11 Advanced Features settings of a virtual network adapter


The main Advanced Features settings are as follows:

MAC Address By default, a virtual machine uses a dynamic MAC address that is generated from a pool on the host, so its MAC address is unpredictable. You can find each host's pool of possible MAC addresses in Hyper-V Manager by clicking Virtual Switch Manager in Actions and then clicking MAC Address Range. You can enable dynamic MAC addresses as follows:
Set-VMNetworkAdapter VM01 -DynamicMacAddress
Dynamic MAC addresses may cause problems for some virtual machines. Linux can bind IP address configurations to MAC addresses rather than to network adapters; this can cause Linux guest OSs to go offline after Live Migration if their MAC address changes. Some applications or network services may also require static MAC addresses.
You can enter a static MAC address by selecting Static rather than Dynamic and entering a valid MAC address. You will have to generate a valid MAC address. Note that System Center Virtual Machine Manager simplifies this process by supplying a static MAC address from a managed central pool.
Some services, such as Windows Network Load Balancing, might require MAC Address Spoofing to be enabled. This will allow a virtual network adapter to change MAC addresses on the fly.
The next piece of PowerShell configures a static MAC address for the mentioned virtual machine (assuming it has a single virtual network adapter), and enables MAC address spoofing:
Set-VMNetworkAdapter VM01 -StaticMacAddress "00165D01CC01" `
-MacAddressSpoofing On
DHCP Guard A problem for network administrators occurs when authorized server administrators build unauthorized (Windows or Linux) DHCP servers and place them on production networks. This is a real problem in self-service clouds; invalid addresses can be handed out to customer virtual machines, can break their ability to communicate, and can even be a security risk by influencing the routing of traffic. There is nothing you can do to stop guest OS administrators from configuring DHCP servers, but there is something you can do to block them from answering DHCP requests.
Selecting the Enable DHCP Guard check box will prevent the virtual machine from answering DHCP clients via this virtual network adapter. This is not enabled by default. You can script this setting for all virtual machines by using PowerShell:
Set-VMNetworkAdapter * -DhcpGuard On
Router Guard The Router Guard feature is similar to DHCP Guard, but it prevents the virtual machine from sending out router redirection to the network via this virtual network adapter. You can also enable this feature by using PowerShell:
Set-VMNetworkAdapter * -RouterGuard On

For the Security Conscious
Environments that require network security, such as public clouds (hosting companies), will want DHCP Guard and Router Guard to be turned on by default. Strangely, Microsoft left these two security features disabled by default.
If you are using a virtual machine deployment solution such as System Center Virtual Machine Manager, you can enable the settings by default in your virtual machine templates. Alternatively, you could write a script that is scheduled to run on a regular basis. The script would do the following:
1. Enable DHCP Guard and Router Guard on all virtual machines
2. Disable the feature on just those required virtual machines, such as virtualized DHCP servers
Only Hyper-V administrators have access to the settings of a virtual machine. That means that guest OS administrators won’t have access to your virtual machine settings (including network and security configuration) unless you delegate it to them.
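Such a script could be as simple as the following two lines; DHCP01 is a hypothetical virtualized DHCP server that must be exempted:

```powershell
# Turn both guards on for every virtual machine on the host
Set-VMNetworkAdapter * -DhcpGuard On -RouterGuard On

# Allow the authorized DHCP server to keep answering clients
Set-VMNetworkAdapter -VMName DHCP01 -DhcpGuard Off
```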

Port Mirroring You can enable Port Mirroring to analyze the network traffic of one virtual machine from another virtual machine. This can be useful in a cloud or in a change-controlled environment (such as the pharmaceutical industry) where you need to run diagnostics but cannot install analysis tools on production systems in a timely manner.
The concept is that you change the Mirroring Mode from None (the default setting) to Source on the virtual network adapter that you want to monitor. On another virtual machine, the one with the diagnostics tools, you set the Mirroring Mode of the virtual network adapter to Destination. Now the incoming and outgoing packets on the source virtual network adapter are copied to the destination virtual network adapter.
You can enable Port Mirroring by using PowerShell:
Set-VMNetworkAdapter VM01 -PortMirroring Source
Set-VMNetworkAdapter VM02 -PortMirroring Destination

Here’s the quickest way to disable Port Mirroring:

Set-VMNetworkAdapter * -PortMirroring None
Note that the source and destination virtual network adapters must be on the same virtual switch (and therefore the same host).
NIC Teaming Windows Server 2012 is the first version of Windows Server to include and support NIC teaming (the built-in Microsoft implementation only). There is much more on this subject in Chapter 4. You must select the Enable This Network Adapter To Be Part Of A Team In The Guest Operating System check box if you want to enable a virtual machine to create a NIC team from its virtual NICs (limited to two virtual NICs per guest OS team). This setting has nothing to do with NIC teaming in the host's Management OS. You can enable this setting by using PowerShell:
Set-VMNetworkAdapter VM01 -AllowTeaming On

Hardware Acceleration Features
Features such as Virtual Machine Queuing and IPsec Task Offloading are enabled by default in a virtual network card. This does not mean that they are used; it means only that a virtual machine can use these features if they are present and enabled on the host. You would disable features only if you need to explicitly turn them off—for example, if you have been advised to do so by Microsoft Support.

Using Integration Services

In Chapter 1, you learned about the role of integration components or services. A number of services allow the host and virtual machine to have a limited amount of interaction with each other. You can enable (they are on by default) or disable these features by opening a virtual machine’s settings and browsing to Integration Services. There you will find the following:

Operating System Shutdown This is used to cleanly initiate a shutdown of the guest OS and virtual machine from the host or a management tool.
Time Synchronization The virtual machine’s clock will be synchronized with that of the host. Typically, you leave this enabled unless you are told otherwise by Microsoft Support or it causes guest OS clock issues.
Data Exchange This is a feature that uses Key Value Pairs (KVPs) to allow the host to get limited but useful information from the guest OS, and vice versa. Some information can be requested from the host by the guest OS (HostExchangeItems), some can be requested from the guest OS by the host (GuestExchangeItems), and some information is automatically shared by the guest OS with the host (GuestIntrinsicExchangeItems).
Heartbeat This is used to detect whether the integration components/services are running, and therefore whether the guest OS is running.
Backup (Volume Snapshot) Volume Shadow Copy Services (VSS) can be used on the host to get an application-consistent backup of a virtual machine, including the guest OS and VSS-capable applications (SQL Server, Exchange, and so on) if the guest OS is supported by Hyper-V and supports VSS (Linux does not have VSS support).

Typically, these features are left enabled because they are very useful. You should not disable them unless you have a valid reason. Remember that the integration components/services must be installed in the guest OS (and kept up-to-date) for this functionality to work correctly.
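You can also view and toggle the integration services with PowerShell instead of the Settings dialog box. This sketch assumes a virtual machine named VM01; the service names are the display names used by the Windows Server 2012 Hyper-V module:

```powershell
# List the integration services of VM01 and whether each one is enabled
Get-VMIntegrationService -VMName VM01

# Disable and then re-enable Time Synchronization for VM01
Disable-VMIntegrationService -VMName VM01 -Name "Time Synchronization"
Enable-VMIntegrationService -VMName VM01 -Name "Time Synchronization"
```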


More on KVPs
Ben Armstrong (a Microsoft senior program manager lead of Hyper-V) shared a very nice PowerShell script (which still works on Windows Server 2012) on his blog (http://blogs.msdn.com/b/virtual_pc_guy/archive/2008/11/18/hyper-v-script-looking-at-kvp-guestintrinsicexchangeitems.aspx). If you run the PowerShell script, it will ask you for a host to connect to and a virtual machine on that host. The script will then display the GuestIntrinsicExchangeItems. You can also add a line to display the GuestExchangeItems.
Another Microsoft program manager, Taylor Brown, has written and shared updated scripts to add or read KVPs (http://blogs.msdn.com/b/taylorb/archive/2012/12/05/customizing-the-key-value-pair-kvp-integration-component.aspx).
The amount of information that the guest OS shares with the host varies depending on the guest OS. Windows guests share the most information, including the Windows version, edition, service pack, build number, and the IPv4 and IPv6 addresses of the guest OS.
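To give you a flavor of what those scripts do, the following sketch queries the Hyper-V WMI namespace for the GuestIntrinsicExchangeItems of a virtual machine (assumed here to be named VM01) and parses each KVP out of the XML fragments that are returned. Windows Server 2012 exposes this through the root\virtualization\v2 namespace:

```powershell
# Find the virtual machine and its KVP exchange component in WMI
$vm = Get-WmiObject -Namespace root\virtualization\v2 `
    -Query "SELECT * FROM Msvm_ComputerSystem WHERE ElementName='VM01'"
$kvp = $vm.GetRelated("Msvm_KvpExchangeComponent")

# Each item is an XML fragment containing a Name/Data property pair
foreach ($item in $kvp.GuestIntrinsicExchangeItems) {
    $xml  = [xml]$item
    $name = ($xml.INSTANCE.PROPERTY | Where-Object { $_.NAME -eq "Name" }).VALUE
    $data = ($xml.INSTANCE.PROPERTY | Where-Object { $_.NAME -eq "Data" }).VALUE
    "$name = $data"
}
```

Swapping GuestIntrinsicExchangeItems for GuestExchangeItems shows the values that the guest OS has explicitly published for the host to read.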
