Chapter 6

Hyper-V

Dynamic memory

Smart paging

Resource metering

Guest integration services

Generation 2 VMs

Enhanced Session Mode

Discrete Device Assignment

Nested virtualization

PowerShell Direct

HVC for Linux

Virtual hard disks

Managing checkpoints

Virtual Fibre Channel adapters

Storage QoS

Hyper-V storage optimization

Hyper-V virtual switches

Virtual machine network adapters

Optimizing network performance

Virtual machine MAC addresses

Network isolation

Hyper-V replica

Hyper-V failover clusters

Hyper-V guest clusters

Live migration

Storage migration

Exporting, importing, and copying VMs

VM Network Health Detection

VM drain on shutdown

Domain controller cloning

Shielded virtual machines

Managing Hyper-V using PowerShell

Hyper-V is a virtualization platform that is built into Windows Server. Not only can you use Hyper-V to host virtual machines and a special type of container, but Hyper-V is also integrated into the very fabric of Microsoft’s Azure cloud. Hyper-V is also available on some editions of Windows 10. This means that it’s possible to transfer a virtual machine from Windows 10 to Windows Server to Azure and back without needing to alter the virtual machine’s format.

In this chapter, we look at many topics, including memory configuration, generation 2 VMs, nested virtualization, virtual hard disks, replicas and failovers, live migration, storage migration, guarded virtualization fabrics, and shielded virtual machines.

Dynamic memory

You have two options when assigning memory to VMs. You can assign a static amount of memory, or you can configure dynamic memory. When you assign a static amount of memory, the amount of memory assigned to the VM remains the same, whether the VM is starting up, is currently running, or is in the process of shutting down.

When you configure dynamic memory, you can configure the following values in Windows Admin Center:

  • Startup Memory. This is the amount of memory allocated to the VM during startup. This can be the same as the minimum amount of memory, or it can be as large as the maximum amount of allocated memory. Once the VM has started, it will instead use the amount of memory configured as the Minimum Memory.

  • Minimum Memory. This is the minimum amount of memory that the VM will be assigned by the virtualization host when dynamic memory is enabled. When multiple VMs are demanding memory, Hyper-V may reallocate memory away from the VM until this Minimum Memory value is met. You can reduce the Minimum Memory setting while the VM is running, but you cannot increase it while the VM is running.

  • Maximum Memory. This is the maximum amount of memory that the VM will be allocated by the virtualization host when dynamic memory is enabled. You can increase the Maximum Memory setting while the VM is running, but you cannot decrease it while the VM is running.

  • Memory Buffer. This is the percentage of memory that Hyper-V should allocate to the VM as a buffer.

  • Memory Weight. This setting allows you to configure how memory should be allocated to this particular VM as compared to other VMs running on the same virtualization host.

Generally, when you configure dynamic memory, the amount of memory used by a VM will fluctuate between the Minimum Memory and Maximum Memory values. You should monitor VM memory utilization and tune these values so that they accurately represent the VM’s actual requirements. If you set the Minimum Memory value below what the VM actually needs to run, memory pressure on the virtualization host might cause the VM’s allocation to be reduced to that Minimum Memory value, which can cause the VM to stop running.
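
If you prefer to script these settings, you can use the Set-VMMemory cmdlet. The following is a sketch using placeholder values; the VM name and memory figures are examples only, and the Priority parameter corresponds to the Memory Weight setting:

# Enable dynamic memory and set startup, minimum, maximum, buffer, and weight values
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true -StartupBytes 2GB -MinimumBytes 512MB -MaximumBytes 4GB -Buffer 20 -Priority 80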

Smart paging

Smart paging is a special technology in Hyper-V that functions in certain conditions when a VM is restarting. Smart paging uses a file on the disk to simulate memory to meet Startup Memory requirements when the Startup Memory setting exceeds the Minimum Memory setting. Startup Memory is the amount of memory allocated to the VM when it starts, but not when it is in a running state. For example, you could set Startup Memory to 2,048 MB and the Minimum Memory to 512 MB for a specific virtual machine. In a scenario where 1,024 MB of free memory was available on the virtualization host, smart paging would allow the VM to access the required 2,048 MB of memory.

Because it uses disk to simulate memory, smart paging is only active if the following three conditions occur at the same time:

  • The VM is being restarted.

  • There is not enough memory on the virtualization host to meet the Startup Memory setting.

  • Memory cannot be reclaimed from other VMs running on the same host.

Smart paging doesn’t allow a VM to perform a “cold start” if the required amount of Startup Memory is not available but the Minimum Memory amount is available. Smart paging is only used when a VM that was already running restarts and the above three conditions have been met.

You can configure the location of the smart paging file on a per-VM basis. By default, smart paging files are written to the C:\ProgramData\Microsoft\Windows\Hyper-V folder. The smart paging file is created only when needed and is deleted within 10 minutes of the VM restarting.
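
You can move the smart paging file location with the Set-VM cmdlet. A minimal example, assuming a hypothetical VM named VM01 and a D: volume with sufficient free space:

# Store the smart paging file for this VM on a different volume
Set-VM -Name "VM01" -SmartPagingFilePath "D:\SmartPaging"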

Resource metering

Resource metering allows you to track the consumption of processor, disk, memory, and network resources by individual VMs. To enable resource metering, use the Enable-VMResourceMetering Windows PowerShell cmdlet. You can view metering data using the Measure-VM Windows PowerShell cmdlet. Resource metering allows you to record the following information:

  • Average CPU use

  • Average memory use

  • Minimum memory use

  • Maximum memory use

  • Maximum disk allocation

  • Incoming network traffic

  • Outgoing network traffic

Average CPU use is measured in megahertz (MHz). All other metrics are measured in megabytes. Although you can extract data using the Measure-VM cmdlet, you need to use another solution to output this data into a visual form, such as a graph.
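
For example, the following commands enable metering for a hypothetical VM named VM01, display the collected figures, and then reset the counters; the VM name is a placeholder:

Enable-VMResourceMetering -VMName "VM01"
# After a period of use, view the accumulated metering data
Measure-VM -VMName "VM01" | Format-List
# Clear the counters so a new measurement period can begin
Reset-VMResourceMetering -VMName "VM01"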

Guest integration services

Integration services allow the virtualization host to extract information and perform operations on a hosted VM. By default, Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows 10, and Windows 8.1 include Hyper-V integration services. If you are running a Linux guest VM on your Hyper-V Server, you can download the Linux Integration Services for Hyper-V and Azure from Microsoft’s website. Integration services installation files are available for all operating systems that are supported on Hyper-V. You can enable the following integration services:

  • Operating System Shutdown. This integration service allows you to shut down the VM from the virtualization host, rather than from within the VM’s OS.

  • Time Synchronization. Synchronizes the virtualization host’s clock with the VM’s clock. Ensures that the VM clock doesn’t drift when the VM is started, stopped, or reverted to a checkpoint.

  • Data Exchange. Allows the virtualization host to read and modify specific VM registry values.

  • Heartbeat. Allows the virtualization host to verify that the VM OS is still functioning and responding to requests.

  • Backup (Volume Checkpoint). For VMs that support Volume Shadow Copy, this service synchronizes with the virtualization host, allowing backups of the VM while the VM is in operation.

  • Guest Services. Guest services allow you to copy files from the virtualization host to the VM using the Copy-VMFile Windows PowerShell cmdlet.
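
For example, the following commands enable the guest services integration service for a hypothetical VM named VM01 and then copy a file from the host into it; the paths are placeholders:

# "Guest Service Interface" is the integration service behind the Guest Services option
Enable-VMIntegrationService -VMName "VM01" -Name "Guest Service Interface"
# Copy a file from the virtualization host into the VM, creating the destination folder if needed
Copy-VMFile -Name "VM01" -SourcePath "C:\Temp\tools.zip" -DestinationPath "C:\Temp\tools.zip" -CreateFullPath -FileSource Host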

Generation 2 VMs

Generation 2 VMs are a special type of VM that differ in configuration from the VMs that are now termed “generation 1 VMs,” which could be created on Hyper-V virtualization hosts running the Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 operating systems. Generation 2 VMs are supported on Windows Server 2012 R2 and later operating systems. Generation 2 VMs are also now supported in Azure.

Generation 2 VMs provide the following functionality:

  • Can boot from a SCSI virtual hard disk

  • Can boot from a SCSI virtual DVD

  • Supports UEFI firmware on the VM

  • Supports VM Secure Boot

  • PXE boot using standard network adapter

There are no legacy network adapters with generation 2 VMs, and most legacy devices, such as COM ports and the Diskette Drive, are no longer present. Generation 2 VMs are “virtual first” and are not designed to simulate hardware for computers that have undergone physical-to-virtual (P2V) conversion. If you need to deploy a VM that requires an emulated component such as a COM port, you’ll need to deploy a generation 1 VM.

You configure the generation of a VM during the VM creation. Once a VM is created, Hyper-V doesn’t allow you to modify the VM’s generation. Windows Server 2016 and Windows Server 2019 support both generation 1 and generation 2 VMs.
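
For example, the following command creates a generation 2 VM with a new .vhdx virtual hard disk; the VM name, path, and switch name are placeholders:

# Generation is set at creation time and cannot be changed later
New-VM -Name "Gen2VM" -Generation 2 -MemoryStartupBytes 1GB -NewVHDPath "D:\VHDs\Gen2VM.vhdx" -NewVHDSizeBytes 60GB -SwitchName "ExternalSwitch"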

Generation 2 VMs boot more quickly and allow the installation of operating systems more quickly than generation 1 VMs. Generation 2 VMs have the following limitations:

  • You can only use generation 2 VMs if the guest operating system is running an x64 version of Windows 10, Windows 8.1, Windows 8, Windows Server 2016, Windows Server 2012 R2, or Windows Server 2012.

  • Generation 2 VMs only support virtual hard disks in VHDX format.

Enhanced Session Mode

Enhanced Session Mode allows you to perform actions including cutting and pasting, audio redirection, and volume and device mapping when using Virtual Machine Connection windows. You can also sign in to a VM with a smart card through Enhanced Session Mode. You enable Enhanced Session Mode on the Hyper-V server by selecting Allow Enhanced Session Mode in the Enhanced Session Mode Policy section of the Hyper-V server’s Properties dialog box.

You can only use Enhanced Session Mode with guest VMs running Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows 10, or Windows 8.1. To utilize Enhanced Session Mode, you must have permission to connect to the VM using Remote Desktop through the account you use to sign in to the guest VM. You can grant permission to the VM by adding the user to the Remote Desktop Users group. A user who is a member of the local Administrators group also has this permission. The Remote Desktop Services service must be running on the guest VM.

Discrete Device Assignment

Discrete Device Assignment (DDA) is a feature available in Windows Server 2019 and Windows Server 2016 that allows you to directly assign a physical GPU or an NVMe storage device to a specific virtual machine. Each physical GPU or NVMe storage device can only be associated with one VM. DDA involves installing the device’s native driver in the VM associated with that GPU. This process works for both Windows and Linux VMs as long as the drivers are available for the VM’s operating system. DDA is supported for generation 1 or 2 VMs running Windows Server 2012 R2 or later, Windows 10, and some Linux guest operating systems.

Before assigning the physical GPU or storage device to a specific VM, you need to dismount the device from the Hyper-V host. Some device vendors provide partitioning drivers for the Hyper-V host. Partitioning drivers are different from the standard device drivers and improve the security of the DDA configuration. If a partitioning driver is available, you should install this driver prior to dismounting the device from the Hyper-V host. If no driver is available, you’ll have to use the -Force option with the Dismount-VMHostAssignableDevice cmdlet.

Once you’ve dismounted the physical device from the Hyper-V host, you can assign it to a specific guest VM using the Add-VMAssignableDevice cmdlet.
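
The following sketch shows the general shape of this workflow; the device query and VM name are hypothetical, and in practice you should follow the guidance for your specific device (including disabling it in Device Manager and installing any partitioning driver first):

# Find the PCI location path of the device to be assigned (example query)
$device = Get-PnpDevice -FriendlyName "*NVIDIA*" | Select-Object -First 1
$locationPath = (Get-PnpDeviceProperty -InstanceId $device.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
# Dismount the device from the host; -Force is needed when no partitioning driver is available
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
# Assign the dismounted device to the VM
Add-VMAssignableDevice -LocationPath $locationPath -VMName "GPU-VM"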

Enabling DDA requires that you disable the VM’s automatic stop action. You can do this using the following command:

Set-VM -Name VMName -AutomaticStopAction TurnOff

In addition to disabling the automatic stop action, the following functionality is also not available to VMs configured with DDA:

  • VM Save and Restore

  • VM live migration

  • Use of dynamic memory

  • Deployment of VM to a high availability cluster

Nested virtualization

Hyper-V on Windows Server 2019, Windows Server 2016, and Windows 10 client OS support nested virtualization. Nested virtualization allows you to enable Hyper-V and host virtual machines on a virtual machine running under Hyper-V as long as that VM is running Windows Server 2019, Windows Server 2016, or Windows 10. Nested virtualization can be enabled on a per-VM basis by running the following PowerShell command:

Set-VMProcessor -VMName NameOfVM -ExposeVirtualizationExtensions $true

Nested virtualization dynamic memory

Once you’ve run this command, you’ll be able to enable Hyper-V on the VM. You won’t be able to adjust the memory of a virtual machine that is enabled for nested virtualization while that virtual machine is running. While it is possible to enable dynamic memory, the amount of memory allocated to a virtual machine configured for nested virtualization will not fluctuate while the VM is running.

Nested virtualization networking

To route network packets through the multiple virtual switches required during nested virtualization, you can either enable MAC address spoofing or configure network address translation (NAT).

To enable MAC address spoofing on the virtual machine that you have configured for nested virtualization, run the following PowerShell command:

Get-VMNetworkAdapter -VMName NameOfVM | Set-VMNetworkAdapter -MacAddressSpoofing On

To enable NAT, create a virtual NAT switch in the VM that has been enabled for nested virtualization using the following PowerShell commands:

New-VMSwitch -Name VMNAT -SwitchType Internal
New-NetNat -Name LocalNAT -InternalIPInterfaceAddressPrefix "192.168.15.0/24"
Get-NetAdapter "vEthernet (VMNAT)" | New-NetIPAddress -IPAddress 192.168.15.1 -AddressFamily IPv4 -PrefixLength 24

Once you’ve done this, you’ll need to manually assign IP addresses to VMs running under the VM enabled for nested virtualization, using the default gateway of 192.168.15.1. You can use a separate internal addressing scheme other than 192.168.15.0/24 by altering the appropriate PowerShell commands in the example above.

PowerShell Direct

PowerShell Direct allows you to create a remote PowerShell session directly from a Hyper-V host to a virtual machine hosted on that Hyper-V host without requiring the virtual machine to be configured with a network connection. PowerShell Direct requires that both the Hyper-V host and the VM be running Windows Server 2019, Windows Server 2016, or Windows 10; PowerShell Direct is not supported for earlier versions of the Windows Server or Windows client operating systems.

To use PowerShell Direct, you must be signed in locally to the Hyper-V host with Hyper-V Administrator privileges. You also must have access to valid credentials for the virtual machine. If you do not have credentials for the virtual machine, you will not be able to establish a PowerShell Direct connection.

To establish a PowerShell Direct connection, use the command:

Enter-PSSession -VMName NameOfVM

You exit the PowerShell Direct session by using the Exit-PSSession cmdlet.
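
You can also run commands or scripts over PowerShell Direct without entering an interactive session by using Invoke-Command with the -VMName parameter. A brief example, using a placeholder VM name:

# Prompt for guest credentials and run a command inside the VM without a network connection
Invoke-Command -VMName "VM01" -Credential (Get-Credential) -ScriptBlock { Get-Service | Where-Object Status -eq 'Running' }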

HVC for Linux

HVC.exe, which is included with Windows Server 2019 and Windows 10, allows you to make a remote SSH connection from a Hyper-V host to a Linux virtual machine guest without requiring the Linux VM to have a functioning network connection. It provides similar functionality to PowerShell Direct. For HVC.exe to work, you’ll need to ensure that the Linux virtual machine has an updated kernel and that the Linux integration services are installed. You’ll also need the SSH server on the Linux VM to be installed and configured before you’ll be able to use HVC.exe to initiate an SSH connection.

Virtual hard disks

Hyper-V supports two separate virtual hard disk formats. Virtual hard disk files in .vhd format are limited to 2040 GB. Virtual hard disks in this format can be used on all supported versions of Hyper-V. Other than the size limitation, the important thing to remember is that you cannot use virtual hard disk files in .vhd format with generation 2 VMs.

Virtual hard disk files in .vhdx format are an improvement over virtual hard disks in .vhd format. The main limitation of virtual hard disks in .vhdx format is that they cannot be used with Hyper-V on Windows Server 2008 or Windows Server 2008 R2, which shouldn’t be as much of a problem now as these operating systems no longer have mainstream support. Virtual hard disks in .vhdx format have the following benefits:

  • Can be up to 64 TB in size

  • Have larger block size for dynamic and differential disks

  • Provide 4-KB logical sector virtual disks

  • Have an internal log that reduces chance of corruption

  • Support trim to reclaim unused space

You can convert hard disks between .vhd and .vhdx format. You can create virtual hard disks at the time you create the VM by using the New Virtual Hard Disk Wizard or the New-VHD Windows PowerShell cmdlet.
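
For example, the following commands create a dynamically expanding and a fixed-size virtual hard disk in .vhdx format; the paths and sizes are placeholders:

# Dynamically expanding disk; space is consumed only as data is written
New-VHD -Path "D:\VHDs\Data01.vhdx" -SizeBytes 500GB -Dynamic
# Fixed-size disk; all 100 GB is allocated at creation time
New-VHD -Path "D:\VHDs\Data02.vhdx" -SizeBytes 100GB -Fixed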

Fixed-sized disks

Virtual hard disks can be dynamic, differencing, or fixed. When you create a fixed-size disk, all space used by the disk is allocated on the hosting volume at the time of creation. Fixed disks increase performance if the physical storage medium does not support Windows Offloaded Data Transfer. Improvements in Windows Server reduce the performance benefit of fixed-size disks when the storage medium supports Windows Offloaded Data Transfer. The space to be allocated to the disk must be present on the host volume when you create the disk. For example, you can’t create a 3-TB fixed disk on a volume that has only 2 TB of space.

Dynamically expanding disks

Dynamically expanding disks use an initial small file and then grow as the VM allocates data to the virtual hard disk. This means you can create a 3-TB dynamic virtual hard disk on a 2-TB volume because the entire 3 TB will not be allocated at disk creation. However, in this scenario, you would need to ensure that you extend the size of the 2-TB volume before the dynamic virtual disk outgrows the available storage space.

Differencing disks

Differencing disks are a special type of virtual hard disk that has a child relationship with a parent hard disk. Parent disks can be fixed size or dynamic virtual hard disks, but the differencing disk must be the same type as the parent disk. For example, you can create a differencing disk in .vhdx format for a parent disk that uses .vhdx format, but you cannot create a differencing disk in .vhd format for a parent disk in .vhdx format.

Differencing disks record the changes that would otherwise be made to the parent hard disk by the VM. For example, differencing disks are used to record Hyper-V VM checkpoints. One parent virtual hard disk can have multiple differencing disks associated with it.

For example, you can create a specially prepared parent virtual hard disk by installing Windows Server on a VM, running the sysprep utility within the VM, and then shutting down the VM. You can use the virtual hard disk created by this process as a parent virtual hard disk. When you create new Windows Server VMs, you configure each VM to use a new differencing disk that uses the sysprepped virtual hard disk as its parent. When you run the new VM, it writes any changes that it would normally make to the full virtual hard disk to the differencing disk. Deploying new Windows Server VMs then becomes a simple matter of creating new VMs, each with a differencing disk that uses the sysprepped Windows Server virtual hard disk as a parent.

You can create differencing hard disks using the New Virtual Hard Disk Wizard or the New-VHD Windows PowerShell cmdlet. You need to specify the parent disk during the creation process.
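
For example, the following command creates a differencing disk that uses a sysprepped parent; both paths are placeholders, and the child must use the same format as the parent:

# Create a child disk that records changes relative to the read-only parent
New-VHD -Path "D:\VHDs\Server01.vhdx" -ParentPath "D:\VHDs\WS2019-Sysprepped.vhdx" -Differencing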

The key to using differencing disks is to ensure that you don’t make changes to the parent disk; doing so will invalidate the relationship with any child disks. Generally, differencing disks can provide storage efficiencies because only changes are recorded on child disks. For example, rather than storing 10 different instances of Windows Server 2019 in its entirety, you could create one parent disk and have 10 much smaller differencing disks to accomplish the same objective. If you store VM virtual hard disks on a volume that has been deduplicated, these efficiencies are reduced.

Modifying virtual hard disks

You can perform the following tasks to modify existing virtual hard disks:

  • Convert a virtual hard disk in .vhd format to .vhdx format

  • Convert a virtual hard disk in .vhdx format to .vhd format

  • Change the disk from fixed size to dynamically expanding or from dynamically expanding to fixed size

  • Shrink or enlarge the virtual hard disk

You convert virtual hard disk types (.vhd to .vhdx, .vhdx to .vhd, dynamic to fixed, or fixed to dynamic) either using the Edit Virtual Hard Disk Wizard or by using the Convert-VHD Windows PowerShell cmdlet. When converting from .vhdx to .vhd, remember that virtual hard disks in .vhd format cannot exceed 2,040 GB in size. So, while it is possible to convert virtual hard disks in .vhdx format that are smaller than 2,040 GB to .vhd format, you will not be able to convert virtual hard disks that are larger than 2,040 GB.
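
For example, the following command converts a .vhd file to a dynamically expanding .vhdx file; the paths are placeholders, and the VM using the disk must be powered off:

# Convert format and type in a single operation; the source file is left in place
Convert-VHD -Path "D:\VHDs\Legacy.vhd" -DestinationPath "D:\VHDs\Legacy.vhdx" -VHDType Dynamic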

You can only perform conversions from one format to another and from one type to another while the VM is powered off. You can resize a virtual hard disk while the VM is running under the following conditions:

  • The virtualization host is running Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2.

  • The virtual hard disk is in .vhdx format.

  • The virtual hard disk is attached to a virtual SCSI controller.

  • If you are shrinking the virtual hard disk, you must first shrink the volumes it contains by using the disk management tools in the VM’s operating system before shrinking the virtual hard disk using the Edit Virtual Hard Disk Wizard or the Resize-VHD cmdlet.
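
For example, the following commands expand a .vhdx file and then shrink it to the smallest size its contents allow; the path is a placeholder:

# Expand the virtual hard disk to 200 GB
Resize-VHD -Path "D:\VHDs\Data01.vhdx" -SizeBytes 200GB
# Shrink the virtual hard disk as far as the free space at the end of the disk allows
Resize-VHD -Path "D:\VHDs\Data01.vhdx" -ToMinimumSize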

Pass-through disks

Pass-through disks, also known as directly attached disks, allow a VM to directly access the underlying storage rather than accessing a virtual hard disk that resides on that storage. For example, with Hyper-V, you normally connect a VM to a virtual hard disk file hosted on a volume formatted with NTFS or ReFS. With pass-through disks, the VM instead accesses the disk directly, and there is no virtual hard disk file.

Pass-through disks allow VMs to access larger volumes than are possible when using virtual hard disks in .vhd format. In earlier versions of Hyper-V—such as the version available with Windows Server 2008—pass-through disks provide performance advantages over virtual hard disks. The need for pass-through disks has diminished with the availability of virtual hard disks in .vhdx format because .vhdx format allows you to create much larger volumes.

Pass-through disks can be directly attached to the virtualization host, or they can be attached to Fibre Channel or iSCSI disks. When adding a pass-through disk, you will need to ensure that the disk is offline. You can use the Disk Management console or the diskpart.exe utility on the virtualization host to set a disk to be offline.

To add a pass-through disk using Windows PowerShell, use the Get-Disk cmdlet to get the properties of the disk that you want to add as a pass-through disk. Next, pipe the result to the Add-VMHardDiskDrive cmdlet. For example, to add physical disk 3 to the VM named Alpha-Test, execute the following command:

Get-Disk 3 | Add-VMHardDiskDrive -VMName Alpha-Test

A VM that uses pass-through disks will not support VM checkpoints. Pass-through disks also cannot be backed up with backup programs that use the Hyper-V VSS writer.

Managing checkpoints

Checkpoints represent the state of a VM at a particular point in time. You can create checkpoints when the VM is running or when the VM is shut down. When you create a checkpoint of a running VM, the running VM’s memory state is also stored in the checkpoint. Restoring a checkpoint taken of a running VM returns the VM to the running state it was in when the checkpoint was created. Creating a checkpoint creates either an .avhd or .avhdx file (depending on whether the VM is using virtual hard disks in the .vhd or .vhdx format).

Windows Server 2019 and Windows Server 2016 support two types of checkpoints:

  • Standard checkpoints. These function just as checkpoints have functioned in previous versions of Hyper-V. These checkpoints capture the state, date, and hardware configuration of a virtual machine. They are designed for development and test scenarios.

  • Production checkpoints. Available only in Windows Server 2019 and Windows Server 2016, production checkpoints use backup technology inside the guest as opposed to the saved-state technology used in standard checkpoints. Production checkpoints are fully supported by Microsoft and can be used with production workloads, which is something that was not supported with the standard version of checkpoints available in previous versions of Hyper-V.

You can switch between standard and production checkpoints on a per-virtual machine basis by editing the Properties of the virtual machine; in the Management section of the Properties dialog box, choose between Production and Standard checkpoints.

You can create checkpoints from Windows PowerShell with the Checkpoint-VM cmdlet. The other checkpoint-related Windows PowerShell cmdlets in Windows Server actually use the VMSnapshot noun, though on Windows 10, they confusingly have aliases for the VMCheckPoint noun.

The Windows Server checkpoint-related cmdlets are as follows:

  • Restore-VMSnapshot. Restores an existing VM checkpoint.

  • Export-VMSnapshot. Allows you to export the state of a VM as it exists when a particular checkpoint was taken. For example, if you took checkpoints at 2 PM and 3 PM, you could choose to export the checkpoint taken at 2 PM and then import the VM in the state that it was in at 2 PM on another Hyper-V host.

  • Get-VMSnapshot. Lists the current checkpoints.

  • Rename-VMSnapshot. Allows you to rename an existing VM checkpoint.

  • Remove-VMSnapshot. Deletes a VM checkpoint. If the VM checkpoint is part of the chain but not the final link, changes are merged with the successive checkpoint, so that the checkpoint remains a representation of the VM at the point in time when the snapshot was taken. For example, if checkpoints were taken at 1 PM, 2 PM, and 3 PM, and you delete the 2 PM checkpoint, the .avhd/.avhdx files associated with the 2 PM snapshot would be merged with the .avhd/.avhdx files associated with the 3 PM snapshot, so that the 3 PM snapshot retains its integrity.
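
For example, the following commands create, list, restore, and remove a checkpoint for a hypothetical VM named VM01; the checkpoint name is a placeholder:

# Create a checkpoint before making a risky change
Checkpoint-VM -Name "VM01" -SnapshotName "Before update"
# List the checkpoints that exist for the VM
Get-VMSnapshot -VMName "VM01"
# Roll the VM back to the checkpoint
Restore-VMSnapshot -VMName "VM01" -Name "Before update" -Confirm:$false
# Delete the checkpoint and merge its changes
Remove-VMSnapshot -VMName "VM01" -Name "Before update"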

Checkpoints do not replace backups. Checkpoints are almost always stored on the same volume as the original VM hard disks, so a failure of that volume will result in all VM storage files—both original disks and checkpoint disks—being lost. If a disk in a checkpoint chain becomes corrupted, then that checkpoint and all subsequent checkpoints will be lost. Disks earlier in the checkpoint chain will remain unaffected. Hyper-V supports a maximum of 50 checkpoints per VM.

Virtual Fibre Channel adapters

Virtual Fibre Channel allows you to make direct connections from VMs running on Hyper-V to Fibre Channel storage. If the following requirements are met, Virtual Fibre Channel is supported on Windows Server 2019 and Windows Server 2016:

  • The computer functioning as the Hyper-V virtualization host must have a Fibre Channel host bus adapter (HBA) that has a driver that supports virtual Fibre Channel.

  • The SAN must have N_Port ID Virtualization (NPIV) enabled.

  • The VM must be running a supported version of the guest operating system.

  • Virtual Fibre Channel LUNs cannot be used to boot Hyper-V VMs.

VMs running on Hyper-V support up to four virtual Fibre Channel adapters, each of which can be associated with a separate Storage Area Network (SAN).

Before you can use a virtual Fibre Channel adapter, you will need to create at least one virtual SAN on the Hyper-V virtualization host. A virtual SAN is a group of physical Fibre Channel ports that connect to the same SAN.

VM live migration and VM failover clusters are supported; however, virtual Fibre Channel does not support VM checkpoints, host-based backup, or live migration of SAN data.

Storage QoS

Storage Quality of Service (QoS) allows you to limit the maximum number of IOPS (input/output operations per second) for virtual hard disks. IOPS are measured in 8-KB increments. If you specify a maximum IOPS value, the virtual hard disk will be unable to exceed this value. You use Storage QoS to ensure that no single workload on a Hyper-V virtualization host consumes a disproportionate amount of storage resources.

It’s also possible to specify a minimum IOPS value for each virtual hard disk. You would do this if you wanted to be notified that a specific virtual hard disk’s IOPS has fallen below a threshold value. When the number of IOPS falls below the specified minimum, an event is written to the event log. You configure Storage QoS on a per-virtual hard disk basis.
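
You can set these values with the Set-VMHardDiskDrive cmdlet. A sketch, assuming a hypothetical VM named VM01 with a disk attached at SCSI controller 0, location 0, and example IOPS values:

# Reserve a minimum of 100 IOPS and cap the disk at 500 IOPS (measured in 8-KB increments)
Set-VMHardDiskDrive -VMName "VM01" -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MinimumIOPS 100 -MaximumIOPS 500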

Hyper-V storage optimization

Several technologies built into Windows Server 2019 and Windows Server 2016 allow you to optimize the performance and data storage requirements for files associated with VMs.

Deduplication

In Windows Server 2019, both ReFS and NTFS volumes support deduplication. Deduplication is a process by which duplicate instances of data are removed from a volume and replaced with pointers to the original instance. Deduplication is especially effective when used with volumes that host virtual hard disk files because many of these files contain duplicate copies of data, such as the VM’s operating system and program files.

Once the Data Deduplication role service is installed, you can enable deduplication through the Volumes node of the File And Storage Services section of the Server Manager console. When enabling deduplication, you specify whether you want to use a general file server data deduplication scheme or a virtual desktop infrastructure (VDI) scheme. For volumes that host VM files, the VDI scheme is appropriate. You can’t enable deduplication on the operating system volume; deduplication may only be enabled on data volumes. For this reason, remember to store VM configuration files and hard disks on a volume that is separate from the operating system volume.
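
For example, assuming the data volume that stores the VM files is D:, the following commands install the role service and enable deduplication with the Hyper-V (VDI) usage type:

# Install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication
# Enable deduplication on the data volume that stores the VM files
Enable-DedupVolume -Volume "D:" -UsageType HyperV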

Storage tiering

Storage tiering is a technology that allows you to mix fast storage, such as solid-state disk (SSD), with traditional spinning magnetic disks to optimize both storage performance and capacity. Storage tiering works on the premise that a minority of the data stored on a volume is responsible for the majority of read and write operations. Storage tiering can be enabled through the storage spaces functionality, and rather than creating a large volume that consists entirely of SSDs, you create a volume comprised of both solid-state and spinning magnetic disks. In this configuration, frequently accessed data is moved to the parts of the volume hosted on the SSDs, and less frequently accessed data is moved to the parts of the volume hosted on the slower spinning magnetic disks. This configuration allows many of the performance benefits of an SSD-only volume to be realized without the cost of using SSD-only storage.

When used in conjunction with deduplication, frequently accessed deduplicated data is moved to the faster storage, providing reduced storage requirements, while improving performance over what would be possible if the volume hosting VM files were solely comprised of spinning magnetic disks. You also have the option of pinning specific files to the faster storage, which overrides the algorithms that move data according to accumulated utilization statistics. You configure storage tiering using Windows PowerShell.
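
The following is a rough sketch of the classic storage spaces tiering workflow; the pool, tier, and disk names, the sizes, and the resiliency setting are all hypothetical and would need to match your own hardware:

# Define an SSD tier and an HDD tier in an existing storage pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD
# Create a tiered virtual disk that draws 100 GB from the SSD tier and 900 GB from the HDD tier
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredVMStorage" -StorageTiers $ssdTier,$hddTier -StorageTierSizes 100GB,900GB -ResiliencySettingName Mirror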

Hyper-V virtual switches

Hyper-V virtual switches, called Hyper-V virtual networks in previous versions of Hyper-V, represent network connections to which the Hyper-V virtual network adapters can connect. You can configure three types of Hyper-V virtual switches: external switches, internal switches, and private switches.

External switches

An external switch connects to a physical or wireless network adapter. Only one virtual switch can be mapped to a specific physical or wireless network adapter or NIC team. For example, if a virtualization host had four physical network adapters configured as two separate NIC teams, you could configure two external virtual switches. If a virtualization host had three physical network adapters that did not participate in any NIC teams, you could configure three external virtual switches. VMs connected to the same external switch can communicate with each other as well as with external hosts connected to the network to which the network adapter mapped to the external switch is connected. For example, if an external switch is connected to a network adapter that is connected to a network that can route traffic to the Internet, a VM connected to that external virtual switch will also be able to connect to hosts on the Internet. When you create an external switch, a virtual network adapter that maps to this switch is created on the virtualization host unless you clear the option that allows the management operating system to share the network adapter. If you clear this option, the virtualization host will not be able to communicate through the network adapter associated with the external switch.
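
For example, the following command creates an external switch bound to a physical adapter and leaves the management operating system able to share that adapter; the switch and adapter names are placeholders:

# Bind the switch to the physical adapter and keep a host virtual adapter for management traffic
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true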

Internal switches

An internal switch allows communication between the VM and the virtualization host. All VMs connected to the same internal switch can communicate with each other and the virtualization host. For example, you could successfully initiate an RDP connection from the virtualization host to an appropriately configured VM or use the Test-NetConnection Windows PowerShell cmdlet from a Windows PowerShell prompt on the virtualization host to get a response from a VM connected to an internal network connection. VMs connected to an internal switch are unable to use that virtual switch to communicate with hosts on a separate virtualization host that are connected to an internal switch with the same name.

Private switches

VMs connected to the same private switch on a VM host can communicate with one another, but they cannot communicate directly with the virtualization host. Private switches only allow communication between VMs on the same virtualization host. For example, VMs alpha and beta are connected to private switch p_switch_a on virtualization host h_v_one. VM gamma is connected to private switch p_switch_a on virtualization host h_v_two. VMs alpha and beta will be able to communicate with each other, but they won’t be able to communicate with h_v_one or VM gamma.
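
For example, the following commands create an internal and a private switch; the names are placeholders:

# VMs plus the virtualization host can communicate through an internal switch
New-VMSwitch -Name "InternalSwitch" -SwitchType Internal
# Only VMs on this host can communicate through a private switch
New-VMSwitch -Name "PrivateSwitch" -SwitchType Private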

Virtual machine network adapters

Generation 1 VMs support two types of network adapters: synthetic network adapters and legacy network adapters. A synthetic network adapter uses drivers that are provided when you install integration services in the VM operating system. If a VM operating system doesn’t have these drivers or if integration services are not available for this operating system, then the network adapter will not function. Synthetic network adapters are unavailable until a VM operating system that supports them is running. This means that you can’t perform a PXE boot from a synthetic network adapter if you have configured a generation 1 VM.

Legacy network adapters emulate a physical network adapter, similar to a multiport DEC/Intel 21140 10/100TX 100 MB card. Many operating systems, including those that do not support virtual machine integration services, support this network adapter. This means that if you want to run an operating system in a VM that doesn’t have virtual machine integration services support—such as a version of Linux or BSD that isn’t officially supported for Hyper-V—you’ll need to use a legacy network adapter because it is likely to be recognized by the guest VM operating system.

Legacy network adapters on generation 1 VMs also function before the VM guest operating system is loaded. This means that if you want to PXE boot a generation 1 VM—for example, if you wanted to use WDS to deploy an operating system to the VM—you’d need to configure the VM with a legacy network adapter.

Generation 2 VMs don’t separate synthetic and legacy network adapters and have only a single network adapter type. Generation 2 VMs support PXE booting from this single network adapter type. Generation 2 VMs also support “hot add” network adapters, allowing you to add or remove network adapters to or from a virtual machine while it is running. It is important to remember that only recent Windows client and server operating systems and only certain Linux operating systems are supported as generation 2 VMs.
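
For example, the following command hot-adds a network adapter to a running generation 2 VM; the VM, adapter, and switch names are placeholders:

# Add a second network adapter while the generation 2 VM is running
Add-VMNetworkAdapter -VMName "Gen2VM" -SwitchName "ExternalSwitch" -Name "SecondNIC"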

Optimizing network performance

You can optimize network performance for VMs hosted on Hyper-V in a number of ways. For example, you can configure the virtualization host with separate network adapters connected to separate subnets. You do this to separate network traffic related to the management of the Hyper-V virtualization host from network traffic associated with hosted VMs. You can also use NIC teaming on the Hyper-V virtualization host to provide increased and fault-tolerant network connectivity. You’ll learn more about NIC teaming later in this chapter.

Bandwidth management

An additional method of optimizing network performance is to configure bandwidth management at the virtual network adapter level. Bandwidth management allows you to specify a minimum and a maximum traffic throughput for a virtual network adapter. The minimum bandwidth allocation is an amount that Hyper-V reserves for the network adapter when VMs compete for bandwidth. For example, if you set the minimum bandwidth allocation to 10 Mbps for each VM, other VMs that needed more bandwidth could increase their utilization only up to the point where the combined minimum allocations reserved for all VMs hosted on the server could still be honored. Maximum bandwidth allocations specify an upper limit for bandwidth utilization. By default, no minimum or maximum limits are set on virtual network adapters.

You configure bandwidth management by selecting the Enable Bandwidth Management option on a virtual network adapter and specifying a minimum and maximum bandwidth allocation in megabits per second (Mbps).
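
You can also configure these limits with the Set-VMNetworkAdapter cmdlet, which takes values in bits per second. A sketch with placeholder values; note that absolute minimum reservations are only honored if the virtual switch was created with a minimum bandwidth mode that supports them:

# Reserve roughly 100 Mbps and cap the adapter at roughly 500 Mbps
Set-VMNetworkAdapter -VMName "VM01" -MinimumBandwidthAbsolute 100000000 -MaximumBandwidth 500000000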

SR-IOV

SR-IOV (Single Root I/O Virtualization) increases network throughput by bypassing a virtual switch and sending network traffic straight to the VM. When you configure SR-IOV, the physical network adapter is mapped directly to the VM. As such, SR-IOV requires that the VM’s operating system include a driver for the physical network adapter. You can only use SR-IOV if the physical network adapter and the network adapter drivers used with the virtualization host support the functionality. You can only configure SR-IOV for a virtual switch during switch creation. Once you have an SR-IOV–enabled virtual switch, you can then enable SR-IOV on the virtual network adapter that connects to that switch.
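
For example, the following commands create an SR-IOV-enabled external switch and then enable SR-IOV on a VM’s network adapter; the names are placeholders, and the physical adapter must support SR-IOV:

# SR-IOV can only be enabled at the time the switch is created
New-VMSwitch -Name "SRIOVSwitch" -NetAdapterName "Ethernet 2" -EnableIov $true
# A non-zero IovWeight enables SR-IOV on the virtual network adapter
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 50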

Dynamic virtual machine queue

Dynamic virtual machine queue is an additional technology that you can use to optimize network performance. When a VM is connected through a virtual switch to a network adapter that supports virtual machine queue and virtual machine queue is enabled on the virtual network adapter’s properties, the physical network adapter can use Direct Memory Access (DMA) to forward traffic directly to the VM. With virtual machine queue, network traffic is processed by the CPU assigned to the VM rather than by the physical network adapter used by the Hyper-V virtualization host. Dynamic virtual machine queue automatically adjusts the number of CPU cores used to process network traffic. Dynamic virtual machine queue is automatically enabled on a virtual switch when you enable virtual machine queue on the virtual network adapter.

Virtual machine NIC teaming

NIC teaming allows you to aggregate bandwidth across multiple network adapters while also providing a redundant network connection if one of the adapters in the team fails. NIC teaming allows you to consolidate up to 32 network adapters and to use them as a single network interface. You can configure NIC teams using adapters that are from different manufacturers and that run at different speeds (though it’s generally a good idea to use the same adapter make and model in production environments).

You can configure NIC teaming at the virtualization host level if the virtualization host has multiple network adapters. The drawback is that you can’t configure NIC teaming at the host level if the network adapters are configured to use SR-IOV. If you want to use SR-IOV and NIC teaming, create the NIC team instead in the VM. You can configure NIC teaming within VMs by adding adapters to a new team using the Server Manager console or the New-NetLbfoTeam PowerShell cmdlet.

When configuring NIC teaming in a VM, ensure that each virtual network adapter that will participate in the team has MAC address spoofing enabled.
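
A sketch of the two halves of this configuration, with placeholder names; the first command runs on the host and the second inside the guest:

# On the host: allow spoofing and teaming on the VM's virtual network adapters
Get-VMNetworkAdapter -VMName "VM01" | Set-VMNetworkAdapter -MacAddressSpoofing On -AllowTeaming On
# Inside the guest: create the team from the two virtual adapters
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent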

Virtual machine MAC addresses

By default, VMs running on Hyper-V hosts use dynamic MAC addresses. Each time a VM is powered on, it will be assigned a MAC address from a MAC address pool. You can configure the properties of the MAC address pool through the MAC Address Range settings available through Virtual Switch Manager.

When you deploy operating systems on physical hardware, you can use two methods to ensure that the computer is always assigned the same IP address configuration. The first method is to assign a static IP address from within the virtualized operating system. The second is to configure a DHCP reservation that always assigns the same IP address configuration to the MAC address associated with the physical computer’s network adapter.

This won’t work with Hyper-V VMs in their default configuration because the MAC address may change if you power the VM off and then on. Rather than configure a static IP address using the VM’s operating system, you can instead configure a static MAC address on a per-virtual network adapter basis. This will ensure that a VM’s virtual network adapter retains the same MAC address whether the VM is restarted or even if the VM is migrated to another virtualization host.

To configure a static MAC address on a per-network adapter basis, edit the network adapter’s advanced features. When entering a static MAC address, you will need to select a MAC address manually. You shouldn’t use one from the existing MAC address pool because there is no way for the current virtualization host or other virtualization hosts on the same subnet to check whether a MAC address that is to be assigned dynamically has already been assigned statically.
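
For example, the following command assigns a static MAC address to a VM’s network adapter; the VM name and the address are placeholders:

# Assign a static MAC address so the adapter keeps it across restarts and migrations
Set-VMNetworkAdapter -VMName "VM01" -StaticMacAddress "02AABBCCDDEE"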

Network isolation

Hyper-V supports VLAN (Virtual Local Area Network) tagging at both the network adapter and virtual switch level. VLAN tags allow the isolation of traffic for hosts connected to the same network by creating separate broadcast domains. Enterprise hardware switches also support VLANs as a way of partitioning network traffic. To use VLANs with Hyper-V, the virtualization hosts’ network adapter must support VLANs. A VLAN ID has 12 bits, which means you can configure 4,095 VLAN IDs.

You configure VLAN tags at the virtual network adapter level by selecting Enable Virtual LAN Identification in the Virtual Network Adapter Properties dialog box. VLAN tags applied at the virtual switch level override VLAN tags applied at the virtual network adapter level. To configure VLAN tags at the virtual switch level, select the Enable Virtual LAN Identification For Management Operating System option and specify the VLAN identifier.
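
For example, the following command places a VM’s network adapter in access mode on VLAN 42; the VM name and VLAN ID are placeholders:

# Tag all traffic from this virtual network adapter with VLAN ID 42
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 42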

Hyper-V replica

Hyper-V replica provides a replica of a VM running on one Hyper-V host that can be stored and updated on another Hyper-V host. For example, you could configure a VM hosted on a Hyper-V failover cluster in Melbourne to be replicated through Hyper-V replica to a Hyper-V failover cluster in Sydney. Hyper-V replication allows for replication across site boundaries and does not require access to shared storage in the way that failover clustering does.

Hyper-V replication is asynchronous. While the replica copy is consistent, it is a lagged copy with changes sent only as frequently as once every 30 seconds. Hyper-V replication supports multiple recovery points, with a recovery snapshot taken every hour. (This incurs a resource penalty, so it is off by default.) This means that when activating the replica, you can choose to activate the most up-to-date copy or a lagged copy. You would choose to activate a lagged copy if some form of corruption or change made the up-to-date copy problematic.

When you perform a planned failover from the primary host to the replica, you need to switch off the VM on the primary host. This ensures that the replica is in an up-to-date and consistent state. This is a drawback compared to failover or live migration, where the VM remains available during the process. A series of checks is completed before performing planned failover to ensure that the VM is off, that reverse replication is allowed back to the original primary Hyper-V host, and that the state of the VM on the current replica is consistent with the state of the VM on the current primary. Performing a planned failover will start the replicated VM on the original replica, which will now become the new primary server.

Hyper-V replica also supports unplanned failover. You perform an unplanned failover if the original Hyper-V host has failed or the site that hosts the primary replica has become unavailable. When performing unplanned failover, you can choose either the most recent recovery point or a previous recovery point. Performing unplanned failover will start the VM on the original replica, which will now become the new primary server.

Hyper-V extended replication allows you to create a second replica of the existing replica server. For example, you could configure Hyper-V replication between a Hyper-V virtualization host in Melbourne and Sydney, with Sydney hosting the replica. You could then configure an extended replica in Brisbane using the Sydney replica.

Configuring Hyper-V replica servers

To configure Hyper-V replication, you need to configure the Replication Configuration settings. The first step is to select Enable This Computer As A Replica Server. Next, select the authentication method you will use. If the computers are part of the same Active Directory environment, you can use Kerberos. When you use Kerberos, Hyper-V replication data isn’t encrypted when transmitted across the network; if you are concerned about protecting this traffic, you could configure IPsec. Another option for encrypting replication traffic is to use certificate-based authentication, which is useful if you are transmitting data across the public Internet without using an encrypted VPN tunnel. When using certificate-based authentication, you’ll need to import and select a public certificate issued to the partner server.

The final step when configuring Hyper-V replica is to select the servers from which the Hyper-V virtualization host will accept incoming replicated VM data. One option is to have the Hyper-V virtualization host accept replicated VMs from any authenticated Hyper-V virtualization host, using a single default location to store replica data. The other option is to configure VM replica storage on a per-server basis. For example, if you wanted to store VM replicas from one server on one volume and VM replicas from another server on a different volume, you’d configure VM replica storage on a per-server basis.

Once replication is configured on the source and destination servers, you’ll also need to enable the pre-defined firewall rules to allow the incoming replication traffic. There are two rules: one for replication using Kerberos (HTTP) on port 80 and the other for using certificate-based authentication on port 443.
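
For example, you can enable these rules on the replica server with PowerShell; the display names shown are the built-in rule names in recent Windows Server releases:

# Allow Kerberos/HTTP replication traffic on port 80
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"
# Allow certificate-based/HTTPS replication traffic on port 443
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTPS Listener (TCP-In)"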

Configuring VM replicas

Once you have configured the source and destination replica servers, you need to configure replication on a per-VM basis. You do this by running the Enable Replication wizard, which you can trigger by clicking Enable Replication when the VM is selected in Hyper-V Manager. To configure VM replicas, you must perform the following steps:

  • Select Replica Server. Select the replica server name. If you are replicating to a Hyper-V failover cluster, you’ll need to specify the name of the Hyper-V replica broker. You’ll learn more about Hyper-V replica broker later in this chapter.

  • Choose Connection Parameters. Specify the connection parameters. The options will depend on the configuration of the replica servers. On this page, depending on the existing configuration, you can choose the authentication type and whether replication data will be compressed when transmitted over the network.

  • Select Replication VHDs. When configuring replication, you have the option of not replicating some of a VM’s virtual hard disks. In most scenarios, you should replicate all of a VM’s hard disk drives. One reason not to replicate a VM’s virtual hard disk would be if the virtual hard disk only stores frequently changing temporary data that wouldn’t be required when recovering the VM.

  • Replication Frequency. Use this to specify the frequency with which changes are sent to the replica server. You can choose between intervals of 30 seconds, 5 minutes, and 15 minutes.

  • Additional Recovery Points. You can choose to create additional hourly recovery points. Doing this gives you the option of starting the replica from a previous point in time rather than the most recent. The advantage is that this allows you to roll back to a previous version of the VM if data corruption occurs and has replicated to the most recent recovery point. The replica server can store a maximum of 24 recovery points.

  • Initial Replication. The last step in configuring Hyper-V replica is choosing how to seed the initial replica. Replication works by sending changed blocks of data, so the initial replica, which sends the entire VM, will be the largest transfer. You can perform an offline transfer with external media, use an existing VM on the replica server as the initial copy (the VM for which you are configuring a replica must have been exported and then imported on the replica server) or transfer all VM data across the network. You can perform replication immediately or at a specific time in the future, such as 2 AM when network utilization is likely to be lower.
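
The same configuration can be scripted. A sketch using placeholder server and VM names, Kerberos authentication, a 5-minute replication frequency, and 24 recovery points:

# Configure replication for the VM to a replica server over HTTP/Kerberos
Enable-VMReplication -VMName "VM01" -ReplicaServerName "replica01.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300 -RecoveryHistory 24
# Send the initial copy across the network immediately
Start-VMInitialReplication -VMName "VM01"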

Replica failover

You perform a planned replica failover when you want to run the VM on the replica server rather than on the primary host. Planned failover involves shutting down the VM, which ensures that the replica will be up to date. Contrast this with Hyper-V live migration, which you perform while the VM is running. When performing a planned failover, you can configure the VM on the replica server to automatically start once the process completes; you can also configure reverse replication, so that the current replica server becomes the new primary server, and the current primary becomes the new replica server.

If the primary server becomes unavailable, you can trigger an unplanned failover. You would then perform the unplanned failover on the replica server (as the primary is not available). When performing an unplanned failover, you can select any of the up to 24 previously stored recovery points.

Hyper-V replica broker

You need to configure and deploy Hyper-V replica broker if your Hyper-V replica configuration includes a Hyper-V failover cluster as a source or destination. You don’t need to configure and deploy Hyper-V replica broker if both the source and destination servers are not participating in a Hyper-V failover cluster. You install the Hyper-V Replica Broker role using Failover Cluster Manager after you’ve enabled the Hyper-V role on cluster nodes.

Hyper-V failover clusters

One of the most common uses for failover clustering is to host Hyper-V virtual machines. The Hyper-V and failover clustering roles are even supported in Nano Server, which allows organizations to deploy failover clustering–capable virtualization hosts that have a minimal operating system footprint.

Hyper-V host cluster storage

When deployed on Hyper-V host clusters, the configuration and virtual hard disk files for highly available VMs are hosted on shared storage. This shared storage can be one of the following:

  • Serial Attached SCSI (SAS). Suitable for two-node failover clusters where the cluster nodes are near each other.

  • iSCSI storage. Suitable for failover clusters with two or more nodes. Windows Server includes iSCSI Target Software, allowing it to host iSCSI targets that can be used as shared storage by Windows failover clusters.

  • Fibre Channel. Fibre Channel/Fibre Channel over Ethernet storage requires special network hardware. While generally providing better performance than iSCSI, Fibre Channel components tend to be more expensive.

  • SMB 3.0 file shares configured as continuously available storage. This special type of file share is highly available, with multiple cluster nodes able to maintain access to the file share. This configuration requires multiple clusters. One cluster hosts the highly available storage used by the VMs, and the other cluster hosts the highly available VMs.

  • Cluster Shared Volumes (CSVs). CSVs can also be used for VM storage in Hyper-V failover clusters. With CSVs, multiple nodes in the same cluster have simultaneous access to the files stored on the volume, ensuring that failover of a VM from one node to another occurs with minimal disruption.

When considering storage for a Hyper-V failover cluster, remember the following:

  • Ensure volumes used for disk witnesses are formatted as either NTFS or ReFS.

  • Avoid allowing nodes from separate failover clusters to access the same shared storage by using LUN masking or zoning.

  • Where possible, use storage spaces to host volumes presented as shared storage.

Cluster quorum

Hyper-V failover clusters remain functional until they do not have enough active votes to retain quorum. Votes can consist of nodes that participate in the cluster as well as disk or file share witnesses. The calculation of whether the cluster maintains quorum depends on the cluster quorum mode. When you deploy a Windows Server failover cluster, one of the following modes is automatically selected, depending on the current cluster configuration:

  • Node Majority

  • Node and Disk Majority

  • Node and File Share Majority

  • No Majority: Disk Only

You can change the cluster mode manually; or, with Dynamic Quorum in Windows Server 2019, the cluster mode will change automatically when you add or remove nodes, a witness disk, or a witness share.

Node Majority

The Node Majority cluster quorum mode is chosen automatically during setup if a cluster has an odd number of nodes. When this cluster quorum mode is used, a file share or disk witness is not used. A failover cluster retains quorum as long as more than half of the cluster nodes remain able to communicate with each other. For example, if you deploy a nine-node failover cluster, the cluster will retain quorum as long as five cluster nodes are able to communicate with each other.

Node and Disk Majority

The Node and Disk Majority model is chosen automatically during setup if the cluster has an even number of nodes and shared storage is available to function as a disk witness. In this configuration, cluster nodes and the disk witness each have a vote when calculating quorum. As with the Node Majority model, the cluster will retain quorum as long as the number of votes that remain in communication exceeds the number of votes that cannot be contacted. For example, if you deployed a six-node cluster and a witness disk, there would be a total of seven votes. As long as four of those votes remained in communication with each other, the failover cluster would retain quorum.

Node and File Share Majority

The Node and File Share Majority model is used when a file share is configured as a witness. Each node and the file share have a vote when it comes to determining if quorum is retained. As with other models, a majority of the votes must be present for the cluster to retain quorum. Node and File Share Majority is suitable for organizations that are deploying multi-site clusters; for example, placing half the cluster nodes in one site, half the cluster nodes in another site, and the file share witness in a third site. If one site fails, the other site can retain communication with the site that hosts the file share witness, in which case quorum is retained.

No Majority: Disk Only

The No Majority: Disk Only model must be configured manually and must only be used in testing environments because the only vote that counts toward quorum is that of the disk witness on shared storage. The cluster will retain quorum as long as the witness is available, even if every node but one fails. Similarly, the cluster will be in a failed state if all the nodes are available but the shared storage hosting the disk witness goes offline.

Cluster node weight

Rather than every node in the cluster having an equal vote when determining quorum, you can configure which cluster nodes can vote to determine quorum by running the Configure Cluster Quorum Wizard. Configuring node weight is useful if you are deploying a multi-site cluster and you want to control which site retains quorum if communication between the sites is lost. You can determine which nodes in a cluster are currently assigned votes by selecting Nodes in the Failover Cluster Manager.
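For example, the following sketch (assuming a cluster named CLUSTER1, a hypothetical witness share, and hypothetical node names) shows how the same quorum and node weight settings might be applied with the FailoverClusters PowerShell module rather than the wizard:

# Configure a file share witness for the cluster (hypothetical share path)
Set-ClusterQuorum -Cluster CLUSTER1 -FileShareWitness "\\FS1\ClusterWitness"

# Remove the vote from a node in the secondary site so the primary site decides quorum
(Get-ClusterNode -Cluster CLUSTER1 -Name "SYD-HV2").NodeWeight = 0

# Review which nodes currently hold a vote
Get-ClusterNode -Cluster CLUSTER1 | Select-Object Name, State, NodeWeight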

Dynamic quorum

Dynamic quorum allows cluster quorum to be recalculated automatically each time a node is removed from or added to a cluster. By default, dynamic quorum is enabled on Windows Server 2019 clusters. Dynamic quorum works in the following manner:

  • The vote of the witness is automatically adjusted based on the number of voting nodes in the cluster. If the cluster has an even number of nodes, the witness has a vote. If a node is then added or removed so that the cluster has an odd number of nodes, the witness loses its vote.

  • In the event of a 50 percent node split, dynamic quorum can adjust the vote of a node. This is useful in avoiding “split brain” syndrome during site splits with multi-site failover clusters.

An advantage of dynamic quorum is that as long as nodes are evicted in a graceful manner, the cluster will reconfigure quorum appropriately. This means that you could change a nine-node cluster so that it was a five-node cluster by evicting nodes, and the new quorum model would automatically be recalculated assuming that the cluster had only five nodes. With dynamic quorum, it is a good idea to specify a witness even if the initial cluster configuration has an odd number of nodes; doing so means a witness vote will automatically be included in the event that an administrator adds or removes a node from the cluster.
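As a quick check, the following sketch (assuming a cluster named CLUSTER1) shows how you might confirm that dynamic quorum is enabled and see the votes it is currently assigning:

# A value of 1 indicates that dynamic quorum is enabled
(Get-Cluster -Name CLUSTER1).DynamicQuorum

# DynamicWeight shows the vote dynamic quorum has currently assigned to each node
Get-ClusterNode -Cluster CLUSTER1 | Select-Object Name, NodeWeight, DynamicWeight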

Cluster networking

In lab and development environments, it’s reasonable to have failover cluster nodes that are configured with a single network adapter. In production environments with mission-critical workloads, you should configure cluster nodes with multiple network adapters, institute adapter teaming, and leverage separate networks. Separate networks should include:

  • A network dedicated for connecting cluster nodes to shared storage

  • A network dedicated for internal cluster communication

  • The network that clients use to access services deployed on the cluster

When configuring IPv4 or IPv6 addressing for failover cluster nodes, ensure that addresses are assigned either statically or dynamically to cluster node network adapters. Avoid using a mixture of statically and dynamically assigned addresses, as this will cause an error with the Cluster Validation Wizard. Also ensure that cluster network adapters are configured with a default gateway. While the Cluster Validation Wizard will not report an error if a default gateway is not present for the network adapters of each potential cluster node, you will be unable to create a failover cluster unless a default gateway is present.
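For example, the following sketch (using hypothetical cluster network names) shows how each cluster network's role might be restricted so that storage, cluster, and client traffic remain separated:

# Role values: 0 = not used by the cluster, 1 = cluster communication only, 3 = cluster and client traffic
(Get-ClusterNetwork -Name "Storage Network").Role = 0
(Get-ClusterNetwork -Name "Heartbeat Network").Role = 1
(Get-ClusterNetwork -Name "Client Network").Role = 3

# Review the resulting configuration
Get-ClusterNetwork | Select-Object Name, Role, Address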

Force Quorum Resiliency

Imagine you have a five-node, multi-site cluster in Melbourne and Sydney, with three nodes in Sydney. Also, imagine that Internet connectivity to the Sydney site is lost. Within the Sydney site itself, the cluster will remain running because with three nodes, it has retained quorum. But if external connectivity to the Sydney site is not available, you may instead need to forcibly start the cluster in the Melbourne site (which will be in a failed state because only two nodes are present) using the /fq (forced quorum) switch to provide services to clients.

In the past, when connectivity was restored, this would have led to a “split brain” or partitioned cluster, as both sides of the cluster would be configured to be authoritative. To resolve this with failover clusters running Windows Server 2012 or earlier, you would need to manually restart the nodes that were not part of the forced quorum set using the /pq (prevent quorum) switch. Windows Server 2019 provides a feature known as Force Quorum Resiliency that automatically restarts the nodes that were not part of the forced quorum set, so that the cluster does not remain in a partitioned state.
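The sketch below (using hypothetical node names) shows how the forced quorum and prevent quorum options map to PowerShell; on Windows Server 2012 and earlier you would have needed to run the second command manually against the other partition, whereas Force Quorum Resiliency now handles that step for you:

# Forcibly start the cluster on a Melbourne node even though quorum has been lost (equivalent to /fq)
Start-ClusterNode -Name "MEL-HV1" -ForceQuorum

# On older clusters, restart the nodes of the other partition with prevent quorum (equivalent to /pq)
Start-ClusterNode -Name "SYD-HV1" -PreventQuorum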

Cluster Shared Volumes

Cluster Shared Volumes (CSVs) are a high-availability storage technology that allows multiple cluster nodes in a failover cluster to have read-write access to the same LUN. This has the following advantages for Hyper-V failover clusters:

  • VMs stored on the same LUN can be run on different cluster nodes. This reduces the number of LUNs required to host VMs because the VMs stored on a CSV aren’t tied to one specific Hyper-V failover cluster node; instead, the VMs can be spread across multiple Hyper-V failover cluster nodes.

  • Switch-over between nodes is almost instantaneous in the event of failover because the new host node doesn’t have to go through the process of seizing the LUN from the failed node.

CSVs are hosted from servers running the Windows Server 2012 or later operating system and allow multiple nodes in a cluster to access the same NTFS- or ReFS-formatted file system. CSVs support BitLocker, with each node performing decryption of encrypted content using a cluster computer account. CSVs also integrate with SMB (Server Message Block) Multichannel and SMB Direct, allowing traffic to be sent through multiple networks and to leverage network cards that include Remote Direct Memory Access (RDMA) technology. CSVs can also automatically scan and repair volumes without requiring storage to be taken offline. You can convert cluster storage to a CSV using the Disks node of the Failover Cluster Manager.
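For example, assuming available cluster storage named Cluster Disk 2, a minimal sketch of converting it to a CSV with PowerShell instead of the Failover Cluster Manager console:

# Add an available cluster disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# CSVs are presented to every node under C:\ClusterStorage
Get-ClusterSharedVolume | Select-Object Name, State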

Active Directory detached clusters

Active Directory detached clusters, also called “clusters without network names,” are a feature of Windows Server 2012 R2 and later operating systems. Detached clusters have their names stored in DNS but do not require the creation of a computer account within Active Directory. The benefit of detached clusters is that it is possible to create them without requiring that a computer object be created in Active Directory to represent the cluster; also, the account used to create the cluster need not have permissions to create computer objects in Active Directory. Although the account used to create a detached cluster does not need permission to create computer objects in Active Directory, the nodes that will participate in the detached cluster must still be domain joined.
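A minimal sketch of creating a detached cluster, assuming two domain-joined nodes and an unused static address; the -AdministrativeAccessPoint Dns parameter is what prevents a computer object from being created in Active Directory:

New-Cluster -Name "HVCLUSTER2" -Node "HV-NODE1","HV-NODE2" -StaticAddress 192.168.10.50 -AdministrativeAccessPoint Dns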

Preferred owner and failover settings

Cluster role preferences settings allow you to configure a preferred owner for a cluster role. When you do this, the role will be hosted on the node listed as the preferred owner. You can specify multiple preferred owners and configure the order in which a role will attempt to return to a specific cluster node.

Failover settings allow you to configure how many times a service will attempt to restart or fail over in a specific period. By default, a cluster service can fail over twice in a six-hour period before the failover cluster leaves the cluster role in a failed state. The failback setting allows you to configure how long a clustered role that has failed over to a node other than its preferred owner will wait before failing back to the preferred owner.
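For example, the following sketch (assuming a clustered role named SQLVM01 and hypothetical node names) shows how preferred owners and failover behavior might be configured from PowerShell instead of the role's Properties dialog box:

# List preferred owners in order of preference
Set-ClusterOwnerNode -Group "SQLVM01" -Owners "HV-NODE1","HV-NODE2"

# Allow two failovers within a six-hour window, and prefer automatic failback
$role = Get-ClusterGroup -Name "SQLVM01"
$role.FailoverThreshold = 2
$role.FailoverPeriod = 6
$role.AutoFailbackType = 1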

Hyper-V guest clusters

A guest cluster is a failover cluster that consists of two or more VMs as cluster nodes. You can run a guest cluster on a Hyper-V failover cluster, or you can run guest clusters with nodes on separate Hyper-V failover clusters. While deploying a guest cluster on Hyper-V failover clusters may seem as though it is taking redundancy to an extreme, there are good reasons to deploy guest clusters and Hyper-V failover clusters together, including:

  • Failover clusters monitor the health of clustered roles to ensure that they are functioning. This means that a guest failover cluster can detect when the failure of a clustered role occurs and can take steps to recover the role. For example, you deploy a SQL Server failover cluster as a guest cluster on a Hyper-V failover cluster. One of the SQL Server instances that participates in the guest cluster suffers a failure. In this scenario, failover occurs within the guest cluster, and another instance of SQL Server hosted on the other guest cluster node continues to service client requests.

  • Deploying guest and Hyper-V failover clusters together allows you to move applications to other guest cluster nodes while you are performing servicing tasks. For example, you may need to apply software updates that require a restart to the operating system that hosts a SQL Server instance. If this SQL Server instance is participating in a Hyper-V guest cluster, you could move the clustered role to another node, apply software updates to the original node, perform the restart, and then move the clustered SQL Server role back to the original node.

  • Deploying guest and Hyper-V failover clusters together allows you to live migrate guest cluster VMs from one host cluster to another host cluster while ensuring clients retain connectivity to clustered applications. For example, a two-node guest cluster hosting SQL Server is hosted on one Hyper-V failover cluster in your organization’s datacenter. You want to move the guest cluster from its current host Hyper-V failover cluster to a new Hyper-V failover cluster. By migrating one guest cluster node at a time from the original Hyper-V failover cluster to the new Hyper-V failover cluster, you’ll be able to continue to service client requests without interruption, failing over SQL Server to the guest node on the new Hyper-V failover cluster after the first node completes its migration and before migrating the second node across.

Hyper-V guest cluster storage

Just as you can configure a Hyper-V failover cluster where multiple Hyper-V hosts function as failover cluster nodes, you can configure failover clusters within VMs, where each failover cluster node is a VM. Even though failover cluster nodes must be members of the same Active Directory domain, there is no requirement that they be hosted on the same cluster. For example, you could configure a multi-site failover cluster where the cluster nodes are hosted as highly available VMs, each hosted on its own Hyper-V failover cluster in each site.

When considering how to deploy a VM guest cluster, you will need to choose how you will provision the shared storage that is accessible to each cluster node. The options for configuring shared storage for VM guest clusters include:

  • iSCSI

  • Virtual Fibre Channel

  • Cluster Shared Volumes

  • Continuously Available File Shares

  • Shared virtual hard disks

The conditions for using iSCSI, Virtual Fibre Channel, Cluster Shared Volumes, and Continuously Available File Shares with VM guest clusters are essentially the same for VMs as they are when configuring traditional physically hosted failover cluster nodes.

Shared virtual hard disk

Shared virtual hard disks are a special type of shared storage only available to VM guest clusters. With shared virtual hard disks, each guest cluster node can be configured to access the same shared virtual hard disk. Each VM cluster node’s operating system will recognize the shared virtual hard disk as shared storage when building the VM guest failover cluster.

Shared virtual hard disks have the following requirements:

  • Can be used with generation 1 and generation 2 VMs.

  • Can only be used with guest operating systems running Windows Server 2012 or later. If the guest operating systems are running Windows Server 2012, they must be updated to use the Windows Server 2012 R2 integration services components.

  • Can only be used if virtualization hosts are running the Windows Server 2012 R2 or later version of Hyper-V.

  • Must be configured to use the .vhdx virtual hard disk format.

  • Must be connected to a virtual SCSI controller.

  • When deployed on a failover cluster, the shared virtual hard disk itself should be located on shared storage, such as a Continuously Available File Share or Cluster Shared Volume. This is not necessary when configuring a guest failover cluster on a single Hyper-V server that is not part of a Hyper-V failover cluster.

  • VMs can only use shared virtual hard disks to store data. You can’t boot a VM from a shared virtual hard disk.

The configuration of shared virtual hard disks differs from the traditional configuration of VM guest failover clusters because you configure the connection to shared storage by editing the VM properties rather than connecting to the shared storage from within the VM. Windows Server 2019 and Windows Server 2016 support shared virtual hard disks being resized and used with Hyper-V replica.
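A sketch of attaching a shared virtual hard disk to two guest cluster nodes, assuming VMs named GC-NODE1 and GC-NODE2 and a .vhdx file that already exists on a Cluster Shared Volume; the -SupportPersistentReservations parameter is what presents the disk as shared storage:

# Attach the same .vhdx to both guest cluster nodes on their virtual SCSI controllers
Add-VMHardDiskDrive -VMName "GC-NODE1" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared\data.vhdx" -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "GC-NODE2" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\Shared\data.vhdx" -SupportPersistentReservations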

Hyper-V VHD Sets

VHD Sets are a newer version of shared virtual hard disks. Hyper-V VHD Sets use a new virtual hard disk format that uses the .vhds extension. VHD Sets support online resizing of shared virtual disks, Hyper-V Replicas, and application-consistent Hyper-V checkpoints.

You can create a VHD Set file from Hyper-V Manager or by using the New-VHD cmdlet with the file type set to .vhds when specifying the virtual hard disk name. You can use the Convert-VHD cmdlet to convert an existing shared virtual hard disk file to a VHD Set file as long as you have taken the VMs that use the shared virtual hard disk file offline and have removed the shared virtual hard disk from the VMs using the Remove-VMHardDiskDrive cmdlet.
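For example, a sketch using hypothetical paths:

# Create a new 100 GB dynamically expanding VHD Set file
New-VHD -Path "C:\ClusterStorage\Volume1\Shared\data.vhds" -SizeBytes 100GB -Dynamic

# Convert an existing shared virtual hard disk to a VHD Set once the VMs that use it
# are offline and the disk has been removed from them
Convert-VHD -Path "C:\ClusterStorage\Volume1\Shared\old.vhdx" -DestinationPath "C:\ClusterStorage\Volume1\Shared\old.vhds"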

Live migration

Live migration is the process of moving an operational VM from one physical virtualization host to another with no interruption to VM clients or users. Live migration is supported between cluster nodes that share storage, between separate Hyper-V virtualization hosts that are not participating in a failover cluster when an SMB 3.0 file share is used as storage, and even between separate Hyper-V hosts that share no storage at all, using a process called "shared-nothing live migration."

Live migration has the following prerequisites:

  • There must be two or more servers running Hyper-V that use processors from the same manufacturer (for example, all Hyper-V virtualization hosts configured with Intel processors or all Hyper-V virtualization hosts configured with AMD processors).

  • Hyper-V virtualization hosts need to be members of the same domain, or they must be members of domains that have a trust relationship with each other.

  • VMs must be configured to use virtual hard disks or virtual Fibre Channel disks; pass-through disks are not allowed.

It is possible to perform live migration with VMs configured with pass-through disks under the following conditions:

  • VMs are hosted on a Windows Server Hyper-V failover cluster.

  • Live migration will be within nodes that participate in the same Hyper-V failover cluster.

  • VM configuration files are stored on a Cluster Shared Volume.

  • The physical disk that is used as a pass-through disk is configured as a storage disk resource that is controlled by the failover cluster. This disk must be configured as a dependent resource for the highly available VM.

If performing a live migration using shared storage, the following conditions must be met:

  • The SMB 3.0 share needs to be configured so that the source and the destination virtualization host’s computer accounts have read and write permissions.

  • All VM files (virtual hard disks, configuration files, and snapshot files) must be located on the SMB 3.0 share. You can use storage migration to move VM files to an SMB 3.0 share while the VM is running prior to performing a live migration using this method.

You must configure the source and destination Hyper-V virtualization hosts to support live migrations by enabling live migrations in the Hyper-V settings. When you do this, you specify the maximum number of simultaneous live migrations and the networks that you will use for live migration. Microsoft recommends using an isolated network for live migration traffic, though this is not a requirement.

The next step in configuring live migration is choosing which authentication protocol and live migration performance options to use. You select these in the Advanced Features area of the Live Migrations settings. The default authentication protocol is CredSSP (Credential Security Support Provider). CredSSP requires local sign-in to both the source and destination Hyper-V virtualization hosts to perform live migration. Kerberos allows you to trigger live migration remotely. To use Kerberos, you must configure the computer accounts for each Hyper-V virtualization host with constrained delegation for the cifs and Microsoft Virtual System Migration Service services, granting permissions to the virtualization hosts that will participate in the live migration partnership. The performance options allow you to speed up live migration. The Compression option reduces the amount of migration data sent across the network at the cost of increased processor utilization. The SMB option will use SMB Direct if both network adapters used for the live migration process support Remote Direct Memory Access (RDMA) and RDMA capabilities are enabled.
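The following sketch (with hypothetical values) shows the equivalent host-side configuration in PowerShell, to be run on both the source and destination virtualization hosts:

# Enable live migration on the host and limit the number of simultaneous migrations
Enable-VMMigration
Set-VMHost -MaximumVirtualMachineMigrations 2

# Restrict live migration traffic to a dedicated network
Add-VMMigrationNetwork "192.168.20.0/24"

# Use Kerberos authentication (requires constrained delegation) and SMB as the transport
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos -VirtualMachineMigrationPerformanceOption SMB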

Storage migration

With storage migration, you can move a VM’s virtual hard disk files, checkpoint files, smart paging files, and configuration files from one location to another. You can perform storage migration while the VM is running or while the VM is powered off. You can move data to any location that is accessible to the Hyper-V host. This allows you to move data from one volume to another, from one folder to another, or even to an SMB 3.0 file share on another computer. When performing storage migration, choose the Move The VM’s Storage option.

For example, you could use storage migration to move VM files from one Cluster Shared Volume to another on a Hyper-V failover cluster without interrupting the VM's operation. You have the option of moving all data to a single location, moving VM data to separate locations, or moving only the VM's virtual hard disk. To move the VM's data to different locations, select the items you want to move and the destination locations.
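For example, a sketch (using a hypothetical VM name and destination) that moves all of a running VM's files in one operation; Move-VMStorage performs the same task as the Move The VM's Storage option:

# Move all of the VM's files to a new location while the VM keeps running
Move-VMStorage -VMName "APP-VM1" -DestinationStoragePath "C:\ClusterStorage\Volume2\APP-VM1"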

Exporting, importing, and copying VMs

A VM export creates a duplicate of a VM that you can import on the same or different Hyper-V virtualization host. When performing an export, you can choose to export the VM, which includes all its VM checkpoints, or you can choose to export just a single VM checkpoint. Windows Server 2019, Windows Server 2016, and Windows Server 2012 R2 support exporting a running VM. With Hyper-V in Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008, it is necessary to shut down the VM before performing an export.

Exporting a VM with all its checkpoints will create multiple differencing disks. When you import a VM that was exported with checkpoints, these checkpoints will also be imported. If you import a VM that was running at the time of export, the VM is placed in a saved state. You can resume from this saved state, rather than having to restart the VM.

When importing a VM, you can choose from the following options:

  • Register The Virtual Machine In Place (Use The Existing ID). Use this option when you want to import the VM while keeping the VM files in their current locations. Because this method uses the existing VM ID, you can only use it if the original VM on which the export was created is not present on the host to which you wish to import the VM.

  • Restore The Virtual Machine (Use The Existing Unique ID). Use this option when you want to import the VM while moving the files to a new location; for example, you would choose this option if you are importing a VM that was exported to a network share. Because this method also uses the existing VM ID, you can only use it if the original VM on which the export was created is not present on the host to which you wish to import the VM.

  • Copy The Virtual Machine (Create A New Unique ID). Use this method if you want to create a separate clone of the exported VM. The exported files will be copied to a new location, leaving the original exported files unaltered. A new VM ID is created, meaning that the cloned VM can run concurrently on the same virtualization host as the original progenitor VM. When importing a cloned VM onto the same virtualization host as the original progenitor VM, ensure that you rename the newly imported VM; otherwise, you may confuse the VMs.
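A sketch of exporting a running VM and then importing it back as a clone with a new unique ID, using hypothetical names and paths:

# Export the VM, including its checkpoints, to a file share
Export-VM -Name "APP-VM1" -Path "\\FS1\VMExports"

# Locate the exported configuration file and import it as a copy with a new unique ID
$config = Get-ChildItem "\\FS1\VMExports\APP-VM1\Virtual Machines" -Filter *.vmcx | Select-Object -First 1
$clone = Import-VM -Path $config.FullName -Copy -GenerateNewId

# Rename the clone so it is not confused with the original VM
Rename-VM -VM $clone -NewName "APP-VM1-CLONE"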

VM Network Health Detection

VM Network Health Detection is a feature for VMs that are deployed on Hyper-V host clusters. With VM Network Health Detection, you configure a VM’s network adapter settings and mark certain networks as being protected. You do this in the Advanced Features section of the Network Adapter Properties dialog box.

If a VM is running on a cluster node where a network marked as protected becomes unavailable, the cluster will automatically live migrate the VM to a node where the protected network is available. For example, you have a four-node Hyper-V failover cluster. Each node has multiple network adapters, and a virtual switch named Alpha is configured on each node as an external virtual switch mapped to a physical network adapter. A VM, configured as highly available and hosted on the first cluster node, is connected to virtual switch Alpha. The network adapter on this VM is configured with the protected network option. After the VM has been switched on and has been running for some time, a fault occurs, causing the physical network adapter mapped to virtual switch Alpha on the first cluster node to fail. When this happens, the VM will automatically be live migrated to another cluster node where virtual switch Alpha is working.
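A sketch of enabling this setting from PowerShell, assuming a VM named APP-VM1; the -ClusterMonitored parameter is assumed here to correspond to the Protected Network check box:

# Mark the adapter's network as protected so the cluster live migrates the VM if the network is lost
Set-VMNetworkAdapter -VMName "APP-VM1" -ClusterMonitored $true

# Verify the setting
Get-VMNetworkAdapter -VMName "APP-VM1" | Select-Object VMName, SwitchName, ClusterMonitored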

VM drain on shutdown

VM drain on shutdown is a feature that will automatically live migrate all running VMs off a node if you shut down that node without putting it into maintenance mode. If you are following best practice, you'll put nodes into maintenance mode and live migrate running workloads away from any node that you intend to restart or shut down. The main benefit of VM drain on shutdown is that, in the event that you are having a bad day and forget to put a cluster node into maintenance mode before shutting it down or restarting it, any running VMs will be live migrated without requiring your direct intervention.
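A short sketch of both approaches, assuming a node named HV-NODE1; a DrainOnShutdown value of 1 indicates that the feature is enabled:

# Preferred approach: put the node into maintenance mode and drain its roles first
Suspend-ClusterNode -Name "HV-NODE1" -Drain

# Safety net: confirm that VM drain on shutdown is enabled on the cluster (1 = enabled)
(Get-Cluster).DrainOnShutdown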

Domain controller cloning

Windows Server 2019 and Windows Server 2016 support creating copies of domain controllers that are running as VMs as long as certain conditions are met. Cloned domain controllers have the following prerequisites:

  • The virtualization host supports VM-GenerationID, a 128-bit identifier exposed to the guest that changes whenever the VM is rolled back in time (for example, when a checkpoint is applied). VM-GenerationID is supported by the version of Hyper-V available with Windows Server 2012 and later, as well as some third-party hypervisors.

  • The domain controller must be running Windows Server 2012 or later as its operating system.

  • The server that hosts the PDC emulator Flexible Single Master Operations (FSMO) role must be contactable. This server must be running Windows Server 2012 or later as its operating system. Remember that you should be running the latest version of Windows Server for domain controllers, so of course, you will be running Windows Server 2019 on the servers hosting your forest and domain FSMO roles!

  • The computer account of the domain controller that will serve as the template for cloning must be added to the Cloneable Domain Controllers security group.

Once you have met these conditions, you'll need to create an XML configuration file named DCCloneConfig.xml using the New-ADDCCloneConfigFile Windows PowerShell cmdlet. Once created, you'll need to edit this file and specify settings such as the computer name, network settings, and Active Directory site information. You should also check the template DC using the Get-ADDCCloningExcludedApplicationList cmdlet to determine whether any services that will cause problems with cloning, such as the DHCP Server service, are present on the template DC.
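A sketch of the preparation steps, using hypothetical computer names, addresses, and site name:

# Allow the template domain controller to be cloned
Add-ADGroupMember -Identity "Cloneable Domain Controllers" -Members (Get-ADComputer -Identity "DC2")

# Check for services or applications that are not safe to clone
Get-ADDCCloningExcludedApplicationList

# Create DCCloneConfig.xml with the clone's identity and network settings
New-ADDCCloneConfigFile -CloneComputerName "DC3" -SiteName "Melbourne" -Static -IPv4Address 192.168.10.10 -IPv4SubnetMask 255.255.255.0 -IPv4DefaultGateway 192.168.10.1 -IPv4DNSResolver 192.168.10.5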

Shielded virtual machines

Shielded virtual machines are a special type of virtual machine that has a virtual Trusted Platform Module (TPM) chip that is encrypted using BitLocker and can only run on specific approved hosts that support what is known as a guarded fabric. Shielded VMs allow sensitive data and workloads to be run on virtualization hosts without the concern that an attacker or the administrator of the virtualization host might export the virtual machine to gain access to the sensitive data stored there. Only certain operating systems can be used for shielded guest virtual machines. You’ll learn more about shielded VMs and guarded virtualization fabrics in Chapter 19, “Hardening Windows Server and Active Directory.”

Managing Hyper-V using PowerShell

The Hyper-V PowerShell module contains a large number of cmdlets that you can use to manage all aspects of Hyper-V and virtual machines on Windows Server 2019. These cmdlets are described in Table 6-1.

Table 6-1 Hyper-V PowerShell cmdlets

Noun | Verbs | Function
VFD | New | Create a new virtual floppy disk.
VHD | Convert, Merge, Resize, Optimize, Dismount, New, Mount, Set, Get, Test | Manage virtual hard disks.
VHDSet | Optimize, Get | Manage virtual hard disk set files, an update to the VHDX files from previous versions of Hyper-V.
VHDSnapshot | Get, Remove | Used to manage snapshots with VHD Sets.
VM | Wait, Move, Get, Export, Debug, Measure, Import, Save, Resume, Restart, Set, Suspend, Stop, Start, Repair, Checkpoint, Remove, New, Compare, Rename | Use to manage virtual machines.
VMAssignableDevice | Add, Get, Remove | Manage VM assignable devices.
VMBios | Set, Get | Manage VM BIOS settings.
VMComPort | Set, Get | Manage VM COM port settings.
VMConnectAccess | Grant, Revoke, Get | Manage which users can connect to specific VMs.
VMConsoleSupport | Enable, Disable | Manage HID device support for virtual machines.
VMDvdDrive | Add, Set, Get, Remove | Manage VM DVD drive settings.
VMEventing | Disable, Enable | Enable or disable VM eventing.
VMFailover | Complete, Stop, Start | Manage VM failover.
VMFibreChannelHba | Set, Get, Remove, Add | Manage VM Fibre Channel HBA settings.
VMFile | Copy | Copy a file from the virtualization host to a location within the virtual machine.
VMFirmware | Set, Get | Manage VM virtual firmware settings.
VMGpuPartitionAdapter | Set, Add, Remove, Get | Manage VM GPU settings.
VMGroup | New, Get, Remove, Rename | Allows you to manage VM groups, which you use to manage groups of machines that might make up a multi-tier application.
VMGroupMember | Remove, Add | Add and remove VMs from a specific group.
VMHardDiskDrive | Set, Add, Remove, Get | Manage VM hard disk drive settings.
VMHost | Set, Get | Configure VM host settings.
VMHostAssignableDevice | Add, Mount, Get, Remove, Dismount | Manage VM host assignable devices.
VMHostCluster | Get, Set | Configure VM host cluster settings.
VMHostNumaNode | Get | View VM host NUMA node information.
VMHostNumaNodeStatus | Get | View VM host NUMA node status.
VMHostSupportedVersion | Get | View VM host supported version information.
VMIdeController | Get | Manage VM IDE controller settings.
VMInitialReplication | Import, Start, Stop | Configure initial VM replication settings.
VMIntegrationService | Enable, Disable, Get | Manage VM integration services.
VMKeyProtector | Get, Set | Manage VM key protector settings.
VMKeyStorageDrive | Set, Remove, Get, Add | Manage VM key storage drive settings.
VMMemory | Set, Get | Manage VM memory.
VMMigration | Enable, Disable | Manage VM migration.
VMMigrationNetwork | Add, Remove, Get, Set | Configure VM migration network settings.
VMNetworkAdapter | Disconnect, Connect, Remove, Test, Add, Set, Rename, Get | Configure VM network adapters.
VMNetworkAdapterAcl | Remove, Add, Get | Manage VM network adapter ACLs.
VMNetworkAdapterExtendedAcl | Remove, Add, Get | Manage VM network adapter extended ACL settings.
VMNetworkAdapterFailoverConfiguration | Get, Set | Manage VM network adapter failover settings.
VMNetworkAdapterIsolation | Set, Get | Manage VM network adapter isolation settings.
VMNetworkAdapterRdma | Set, Get | Manage virtual network adapter RDMA settings.
VMNetworkAdapterRoutingDomainMapping | Set, Get, Remove, Add | Configure virtual network adapter routing domains.
VMNetworkAdapterTeamMapping | Remove, Get, Set | Configure virtual network adapter team mapping.
VMNetworkAdapterVlan | Get, Set | Manage VM network adapter VLAN settings.
VMPartitionableGpu | Get, Set | Manage VM GPU settings.
VMProcessor | Get, Set | Configure VM processor settings.
VMRemoteFx3dVideoAdapter | Remove, Set, Get, Add | Manage RemoteFX 3D video adapters.
VMRemoteFXPhysicalVideoAdapter | Disable, Get, Enable | Manage RemoteFX physical adapters.
VMReplication | Get, Stop, Suspend, Remove, Measure, Enable, Resume, Set | Manage VM replication.
VMReplicationAuthorizationEntry | Set, Remove, New, Get | Manage VM replication authorization entries.
VMReplicationConnection | Test | Check VM replication.
VMReplicationServer | Get, Set | Configure VM replication server settings.
VMReplicationStatistics | Reset | Reset VM replication statistics.
VMResourceMetering | Reset, Disable, Enable | Configure VM resource metering.
VMResourcePool | Set, Rename, Get, Measure, New, Remove | Manage VM resource pool settings.
VMSan | Remove, Disconnect, Connect, New, Set, Rename, Get | Manage VM SAN settings.
VMSavedState | Remove | Remove saved state data.
VMScsiController | Remove, Add, Get | Manage VM SCSI controller settings.
VMSecurity | Get, Set | Manage VM security information.
VMSecurityPolicy | Set | Configure VM security policies.
VMSnapshot | Rename, Restore, Export, Get, Remove | Manage VM checkpoints.
VMStorage | Move | Move VM storage.
VMStoragePath | Get, Remove, Add | Configure VM storage paths.
VMSwitch | Set, Get, Rename, Remove, Add, New | Manage virtual switches.
VMSwitchExtension | Enable, Disable, Get | Manage extensions on a virtual switch.
VMSwitchExtensionPortData | Get | View port extensions on a virtual switch.
VMSwitchExtensionPortFeature | Set, Add, Remove, Get | Manage switch port features.
VMSwitchExtensionSwitchData | Get | View switch port data.
VMSwitchExtensionSwitchFeature | Add, Get, Set, Remove | Manage switch features.
VMSwitchTeam | Set, Get | Manage switch teams.
VMSwitchTeamMember | Add, Remove | Manage switch team membership.
VMSystemSwitchExtension | Get | View switch extensions installed on the VM host.
VMSystemSwitchExtensionPortFeature | Get | View port-level features supported by switch extensions on the VM host.
VMSystemSwitchExtensionSwitchFeature | Get | View switch-level features supported by virtual switch extensions on the VM host.
VMTPM | Disable, Enable | Manage virtual TPM settings.
VMTrace | Stop, Start | Manage trace settings for VM debugging.
VMVersion | Update | Update the version of the VM.
VMVideo | Get, Set | Configure video settings for virtual machines.
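As a starting point for exploring the cmdlets in Table 6-1, the following sketch lists them on a host where the Hyper-V module is installed and then pulls up usage examples for one of them:

# List every command in the Hyper-V module, grouped by noun
Get-Command -Module Hyper-V | Sort-Object Noun | Format-Table Name, Verb, Noun

# View built-in usage examples for a specific cmdlet
Get-Help Set-VMMemory -Examples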
