Chapter 3

Manage virtual machines and containers

Windows Server functions as a host platform for virtual machines (VMs) and containers. Many hybrid administrators who deploy and manage Windows Server in Azure also run Windows Server as infrastructure as a service (IaaS) virtual machines. In this chapter you learn about managing Windows Server and guest virtual machines, configuring and managing Windows Server as a container host, and managing and maintaining Windows Server VMs in Azure.

Skills covered in this chapter:

Skill 3.1: Manage Hyper-V and guest virtual machines

Hyper-V is a virtualization platform that is built into Windows Server. Not only can you use Hyper-V to host virtual machines and a special type of container, but Hyper-V is integrated into the very fabric of the Microsoft Azure cloud. Hyper-V is also available on some editions of Windows 10 and Windows 11, meaning that it’s possible to transfer a virtual machine from a Windows client to a Windows server to Azure and back without needing to alter the virtual machine’s format.

Virtual machine types

Hyper-V on Windows Server supports two different VM virtual hardware configurations, known as generations. Generation 2 VMs differ in configuration from what are now termed “Generation 1 VMs,” the only type that could be created on Hyper-V virtualization hosts running the Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012 operating systems. Generation 2 VMs are supported on Windows Server 2012 R2 and later operating systems and are supported in Azure.

Generation 2 VMs provide the following functionality:

  • Can boot from a SCSI virtual hard disk

  • Can boot from a SCSI virtual DVD

  • Supports UEFI firmware on the VM

  • Supports VM Secure Boot

  • Can PXE boot using a standard network adapter

  • Supports Virtual TPM

Generation 2 VMs don’t attempt to replicate the hardware configuration of existing physical systems in the way that Generation 1 VMs did. There are no legacy network adapters with Generation 2 VMs, and the majority of legacy devices, such as COM ports and the diskette drive, are no longer present. Generation 2 VMs are “virtual first” and are not designed to simulate hardware for computers that have undergone physical-to-virtual (P2V) conversion. To deploy a VM that requires an emulated component such as a COM port, you have to deploy a Generation 1 VM. You configure the generation of a VM during the VM creation. After a VM is created, Hyper-V doesn’t allow you to modify the VM’s generation.

Generation 2 VMs boot faster and allow operating systems to be installed more quickly than Generation 1 VMs. Generation 2 VMs have the following limitations:

  • You can only use Generation 2 VMs if the guest operating system is an x64 version of Windows Server 2012 or later or a Windows 8 or later client operating system.

  • Generation 2 VMs only support virtual hard disks in VHDX format.

Manage VMs using PowerShell remoting, PowerShell Direct, and HVC.exe

You can use a variety of methods to connect remotely to a virtual machine from a command prompt. The method you choose depends on whether you are trying to access the VM from the Hyper-V host, what operating system the VM is running, and whether the VM has network connectivity.

PowerShell remoting

PowerShell remoting is the primary method you use to run PowerShell sessions on Windows Server VMs that have network connectivity. PowerShell remoting allows you to have an interactive PowerShell session where you are signed on locally to a remote Windows Server VM. The account used to make a connection to this remote VM needs to have local Administrator privileges on the target VM. PowerShell remoting is enabled by default on Windows Server, but, by default, it only accepts connections that originate from a private network.

You can enter a remote session to a Windows Server VM that is a member of the same Active Directory forest by using the following commands, which will prompt you for the credentials that you’ll use to connect to the remote computer:

$cred = Get-Credential
Enter-PSSession -ComputerName <computername> -Credential $cred

You can enable PowerShell remoting using the Enable-PSRemoting cmdlet if it has been disabled. PowerShell remoting relies on WSMan (WS-Management, the Web Services for Management protocol). WSMan uses HTTP on port 5985 and can be configured to use HTTPS (TLS) on port 5986.
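
For example, if remoting has been disabled on a server, you could re-enable it and then verify the WSMan configuration with commands similar to the following minimal sketch (the computer name MEL-SRV01 is a placeholder):

# Re-enable PowerShell remoting and recreate the default WSMan listener
Enable-PSRemoting -Force
# Confirm that a WSMan listener is present on the server
Get-ChildItem WSMan:\localhost\Listener
# From the management computer, verify that WSMan on the target is reachable
Test-WSMan -ComputerName MEL-SRV01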

To enable PowerShell remoting to VMs that are not domain-joined, you must configure the trusted hosts list on the client computer from which you want to establish the remote session. You do this on the client computer using the set-item cmdlet. For example, to trust the computer at IP address 192.168.3.200, run this command:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value 192.168.3.200 -Concatenate

After you’ve run the command to configure the client, you’ll be able to establish a PowerShell remote session by using the Enter-PSSession cmdlet. If you want more information about remoting, you can run the following command to bring up help text on the subject:

Help about_Remote_FAQ -ShowWindow

PowerShell Direct

PowerShell Direct allows you to create a remote PowerShell session directly from a Hyper-V host to a virtual machine hosted on that Hyper-V host without requiring the VM to be configured with a network connection. PowerShell Direct requires that both the Hyper-V host and the VM be running Windows Server 2016 or later server or Windows 10 or later client operating systems.

To use PowerShell Direct, you must be signed in locally to the Hyper-V host with Hyper-V Administrator privileges. You also must have access to valid credentials for the virtual machine. If you don’t, you won’t be able to establish a PowerShell Direct connection.

To establish a PowerShell Direct connection, use this command:

Enter-PSSession -vmname NameOfVM

You exit the PowerShell Direct session by using the Exit-PSSession cmdlet.
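
PowerShell Direct also works with the Invoke-Command and New-PSSession cmdlets through the -VMName parameter, which is useful for running commands in, or copying files into, a VM that has no network connectivity. The following sketch assumes a VM named MEL-DC01 (a placeholder) and valid guest credentials:

$cred = Get-Credential
# Run a command inside the VM without a network connection
Invoke-Command -VMName MEL-DC01 -Credential $cred -ScriptBlock { Get-Service WinRM }
# Copy a file into the VM over a PowerShell Direct session
$session = New-PSSession -VMName MEL-DC01 -Credential $cred
Copy-Item -Path C:\Temp\config.xml -Destination C:\Temp\ -ToSession $session
Remove-PSSession $session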

HVC for Linux

HVC.exe, which is included with Windows Server 2019 and later and Windows 10 and later, allows you to make a remote SSH connection from a Hyper-V host to a Linux virtual machine guest without requiring the Linux VM to have a functioning network connection. It provides similar functionality to PowerShell Direct. For HVC.exe to work, you must ensure that the Linux VM has an updated kernel and that the Linux integration services are installed. You also need the SSH server on the Linux VM to be installed and configured before you’ll be able to use HVC.exe to initiate an SSH connection.
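
For example, assuming a Linux VM named MEL-LNX01 with the OpenSSH server running and a local account named admin (both placeholders), you could connect from the Hyper-V host with a command similar to the following, since HVC.exe accepts ssh-style arguments:

hvc ssh admin@MEL-LNX01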

Enable VM Enhanced Session Mode

Enhanced Session Mode allows you to perform actions such as cutting and pasting, audio redirection, and volume and device mapping when using Virtual Machine Connection (VMConnect) windows. For example, you can use Enhanced Session Mode to sign in to a VM with a smart card and view your local storage as a volume accessible to the VM. You enable Enhanced Session Mode on the Hyper-V server by selecting Allow Enhanced Session Mode in the Enhanced Session Mode Policy section of the Hyper-V server’s Properties dialog box.

You can only use Enhanced Session Mode with guest VMs running Windows Server 2012 R2 or later server operating systems or Windows 8.1 and later client operating systems. To use Enhanced Session Mode, you must have permission to connect to the VM using Remote Desktop through the account you use to sign in to the guest VM. You can grant permission to the VM by adding the user to the Remote Desktop Users group. A user who is a member of the local Administrators group also has this permission. The Remote Desktop Services service must be running on the guest VM.
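
You can also enable Enhanced Session Mode on the Hyper-V host with PowerShell; the following is a minimal sketch run on the Hyper-V host itself:

# Allow Enhanced Session Mode connections on this Hyper-V host
Set-VMHost -EnableEnhancedSessionMode $true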

Configure nested virtualization

Hyper-V on Windows Server 2016 and later servers and Windows 10 and later client OSs support nested virtualization. Nested virtualization allows you to enable Hyper-V and host virtual machines on a VM running under Hyper-V as long as that VM is running Windows Server 2016 and later servers or Windows 10 and later client operating systems. Nested virtualization can be enabled on a per-VM basis through Windows Admin Center or by running the following PowerShell command:

Set-VMProcessor -VMName NameOfVM -ExposeVirtualizationExtensions $true

Nested virtualization dynamic memory

You won’t be able to adjust the memory of a virtual machine that is enabled for nested virtualization while that VM is running. Although it is possible to enable dynamic memory, the amount of memory allocated to a VM configured for nested virtualization will not fluctuate while the VM is running.

Nested virtualization networking

To route network packets through the multiple virtual switches required during nested virtualization, you can either enable MAC address spoofing or configure network address translation (NAT).

To enable MAC address spoofing on the virtual machine that you have configured for nested virtualization, run the following PowerShell command:

Get-VMNetworkAdapter -VMName NameOfVM | Set-VMNetworkAdapter -MacAddressSpoofing On

To enable NAT, create a virtual NAT switch in the VM that has been enabled for nested virtualization by using the following PowerShell commands:

New-VMSwitch -Name VMNAT -SwitchType Internal
New-NetNat -Name LocalNAT -InternalIPInterfaceAddressPrefix "192.168.15.0/24"
Get-NetAdapter "vEthernet (VMNAT)" | New-NetIPAddress -IPAddress 192.168.15.1 -AddressFamily IPv4 -PrefixLength 24

After you’ve done this, you need to manually assign IP addresses to VMs running under the VM enabled for nested virtualization, using the default gateway of 192.168.15.1. You can use a separate internal addressing scheme other than 192.168.15.0/24 by altering the appropriate PowerShell commands in the previous code.

Need More Review? Enable Nested Virtualization

You can learn more about enabling nested virtualization at https://docs.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization.

Configure VM memory

You have two options when assigning memory to VMs. You can assign a static amount of memory, or you can configure dynamic memory. When you assign a static amount of memory, the amount of memory assigned to the VM remains the same, whether the VM is starting up, currently running, or in the process of shutting down.

When you configure dynamic memory, you can configure the following values in Windows Admin Center:

  • Startup Memory This is the amount of memory allocated to the VM during startup. The value can be the same as the minimum amount of memory, or it can be as large as the maximum amount of allocated memory. Once the VM has started, its memory allocation is managed dynamically and can be reduced to as little as the Minimum Memory value.

  • Minimum Memory This is the minimum amount of memory that the VM will be assigned by the virtualization host when dynamic memory is enabled. When multiple VMs are demanding memory, Hyper-V may reallocate memory away from the VM until the Minimum Memory value is met. You can reduce the Minimum Memory setting while the VM is running, but you cannot increase it while the VM is running.

  • Maximum Memory This is the maximum amount of memory that the VM will be allocated by the virtualization host when dynamic memory is enabled. You can increase the Maximum Memory setting while the VM is running, but you cannot decrease it while the VM is running.

  • Memory Buffer This is the percentage of memory that Hyper-V should allocate to the VM as a buffer.

  • Memory Weight This setting allows you to configure how memory should be allocated to this particular VM as compared to other VMs running on the same virtualization host.

Generally, when you configure dynamic memory, the amount of memory used by a VM fluctuates between the Minimum Memory and Maximum Memory values. You should monitor VM memory utilization and tune these values so that they accurately represent the VM’s actual requirements. If you set the Minimum Memory value below what the VM actually needs to run, memory pressure on the virtualization host might cause the memory allocated to the VM to be reduced to that Minimum Memory value, which can cause the VM to stop running.
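
You can also configure these values with the Set-VMMemory cmdlet rather than Windows Admin Center. The following sketch assumes a VM named MEL-APP01 (a placeholder) and example values that you would tune to the actual workload:

# Enable dynamic memory and set startup, minimum, maximum, buffer, and weight values
Set-VMMemory -VMName MEL-APP01 -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 512MB -MaximumBytes 8GB `
    -Buffer 20 -Priority 80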

Smart paging

Smart paging is a special technology in Hyper-V that functions in certain conditions when a VM is restarting. Smart paging uses a file on the disk to simulate memory to meet Startup Memory requirements when the Startup Memory setting exceeds the Minimum Memory setting. Startup Memory is the amount of memory allocated to the VM when it starts, not the amount it uses once it is in a running state. For example, you could set Startup Memory to 2,048 MB and the Minimum Memory to 512 MB for a specific virtual machine. In a scenario where only 1,024 MB of free memory was available on the virtualization host, smart paging would allow the VM to access the required 2,048 MB of memory during the restart.

Because smart paging uses disk to simulate memory, it’s only active if the following three conditions occur at the same time:

  • The VM is being restarted.

  • There is not enough memory on the virtualization host to meet the Startup Memory setting.

  • Memory cannot be reclaimed from other VMs running on the same host.

Smart paging doesn’t allow a VM to perform a “cold start” if the required amount of Startup Memory is not available but the Minimum Memory amount is. Smart paging is used only when a VM that was already running restarts and those three conditions have been met.

You can configure the location of the smart paging file on a per-VM basis. By default, smart paging files are written to the C:\ProgramData\Microsoft\Windows\Hyper-V folder. The smart paging file is created only when needed and is deleted within 10 minutes of the VM restarting.
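
For example, you could move a VM’s smart paging file to another volume with a command similar to the following (the VM name and path are placeholders):

# Store the smart paging file for this VM on a dedicated volume
Set-VM -Name MEL-APP01 -SmartPagingFilePath 'D:\SmartPaging'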

Configure integration services

Integration services allow the virtualization host to extract information and perform operations on a hosted VM. By default, Windows Server 2012 R2 and later include Hyper-V integration services. If you are running a supported Linux guest VM on your Hyper-V server, you can download the Linux Integration Services for Hyper-V and Azure from the Microsoft website. Integration service installation files are available for all operating systems that are supported on Hyper-V. You can enable the following integration services:

  • Operating System Shutdown This integration service allows you to shut down the VM from the virtualization host, rather than from within the VM’s OS.

  • Time Synchronization This service synchronizes the virtualization host’s clock with the VM’s clock; it ensures that the VM clock doesn’t drift when the VM is started, stopped, or reverted to a checkpoint.

  • Data Exchange This service allows the virtualization host to read and modify specific VM registry values.

  • Heartbeat This service allows the virtualization host to verify that the VM OS is still functioning and responding to requests.

  • Backup (Volume Shadow Copy) For VMs that support Volume Shadow Copy, this service synchronizes with the virtualization host, allowing backups of the VM while the VM is in operation.

  • Guest Services Guest services allow you to copy files from the virtualization host to the VM using the Copy-VMFile Windows PowerShell cmdlet.
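
You can view and toggle integration services from PowerShell with the Get-VMIntegrationService, Enable-VMIntegrationService, and Disable-VMIntegrationService cmdlets. The following sketch assumes a VM named MEL-APP01 (a placeholder); note that the Guest Services integration service is listed under the name “Guest Service Interface”:

# List the integration services and their current state for a VM
Get-VMIntegrationService -VMName MEL-APP01
# Enable Guest Services so that Copy-VMFile can be used
Enable-VMIntegrationService -VMName MEL-APP01 -Name 'Guest Service Interface'
# Copy a file from the virtualization host into the VM
Copy-VMFile -Name MEL-APP01 -SourcePath C:\Temp\app.msi -DestinationPath C:\Temp\app.msi -CreateFullPath -FileSource Host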

Configure Discrete Device Assignment

Discrete Device Assignment (DDA) allows you to directly assign a physical GPU or an NVMe storage device to a specific virtual machine. Each physical GPU or NVMe storage device can be associated with only one VM. DDA involves installing the device’s native driver in the VM associated with that GPU. This process works for both Windows and Linux VMs as long as the drivers are available for the VM’s operating system. DDA is supported for Generation 1 or 2 VMs running Windows Server 2012 R2 or later, Windows 10 or later, and some Linux guest operating systems.

Before assigning the physical GPU or storage device to a specific VM, you must dismount the device from the Hyper-V host. Some device vendors provide partitioning drivers for the Hyper-V host. Partitioning drivers are different from the standard device drivers and improve the security of the DDA configuration. If a partitioning driver is available, you should install this driver before dismounting the device from the Hyper-V host. If no driver is available, you’ll have to use the -Force option with the Dismount-VMHostAssignableDevice cmdlet.

After you’ve dismounted the physical device from the Hyper-V host, you can assign it to a specific guest VM using the Add-VMAssignableDevice cmdlet. If you want to remove a physical device from its assignment to a VM, you’ll need to stop the VM it is assigned to, then dismount it using the Remove-VMAssignableDevice cmdlet. Then, you’ll be able to assign it to another VM or make it available to the host computer by enabling it in Device Manager.
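
The overall flow looks similar to the following sketch, which assumes you have already identified the device’s location path (for example, through Device Manager) and that the VM name and location path shown are placeholders:

$locationPath = 'PCIROOT(0)#PCI(0300)#PCI(0000)'  # example location path only
# Dismount the device from the Hyper-V host (use -Force if no partitioning driver exists)
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
# Assign the device to the VM
Add-VMAssignableDevice -LocationPath $locationPath -VMName MEL-GPU01
# Later, to return the device: stop the VM, remove the assignment, and remount it on the host
Remove-VMAssignableDevice -LocationPath $locationPath -VMName MEL-GPU01
Mount-VMHostAssignableDevice -LocationPath $locationPath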

Enabling DDA requires that you disable the VM’s automatic stop action. You can do so using this command:

Set-VM -Name VMName -AutomaticStopAction TurnOff

The following functionality is also not available to VMs configured with DDA:

  • VM Save and Restore

  • VM live migration

  • Use of dynamic memory

  • Deployment of VM to a high availability cluster

RemoteFX is a technology available in Windows Server 2016 that performs a similar function to DDA in Windows Server 2019 and Windows Server 2022. RemoteFX provides a 3D virtual adapter and USB redirection support for VMs. You can only use RemoteFX if the virtualization host has a compatible GPU. RemoteFX allows one or more compatible graphics adapters to perform graphics processing tasks for multiple VMs. As with DDA, you can use RemoteFX to provide support for graphics-intensive applications, such as CAD, in virtual desktop infrastructure (VDI) scenarios. RemoteFX is deprecated in Windows Server version 1803 and later.

Configure VM resource groups

Instead of instituting resource constraints on a per-virtual machine basis, virtual machine resource controls allow you to create groups of virtual machines where each group is allocated a different proportion of the Hyper-V host’s total CPU resources. For example, you can have a group of six virtual machines used by a specific department that are limited to a specific proportion of the Hyper-V host’s CPU capacity that they cannot exceed. VM resource groups also allow you to limit what resources the Hyper-V host can use; for example, ensuring that the Hyper-V host has a limit on processor and memory use that can’t be exceeded, which would limit the processor and memory available to VMs.

Need More Review? Virtual Machine Resource Controls

You can learn more about virtual machine resource controls at https://docs.microsoft.com/windows-server/virtualization/hyper-v/manage/manage-hyper-v-cpugroups.

Configure VM CPU groups

VM CPU groups allow you to isolate VM groups to specific host processors; for example, on a multi-processor Hyper-V host, you might choose to allow a group of VMs exclusive access to specific processor cores. This can be useful in scenarios where you must ensure that different VMs are partitioned from one another, with Hyper-V network virtualization providing completely separate tenant networks and VM CPU groups ensuring that separate VM groups never share the same physical CPUs.

CPU groups are managed through the Hyper-V Host Compute Service (HCS). You cannot directly manage CPU groups through PowerShell or the Hyper-V console and instead need to download the cpugroups.exe command-line utility from the Microsoft Download Center.

Need More Review? Hyper-V CPU Groups

You can learn more about Hyper-V CPU groups at https://docs.microsoft.com/windows-server/virtualization/hyper-v/manage/manage-hyper-v-cpugroups.

Configure hypervisor scheduling types

Hyper-V supports a “classic” scheduler and a new hypervisor core scheduler. The differences between these are as follows:

  • Classic scheduler Uses a fair-share, round-robin method of scheduling processor tasks across the Hyper-V host, including processors used by the host and those used by guest VMs. The classic scheduler is the default type used on all versions of Hyper-V prior to Windows Server 2019. When used on a host with Symmetric Multi-Threading (SMT) enabled, the classic scheduler will schedule guest virtual processors from any VM running on the host, so that one VM can run on one SMT thread of a processor core while another VM runs on the other SMT thread of the same processor core.

  • Core scheduler The core scheduler uses SMT to ensure isolation of guest workloads. This means that a CPU core is never shared between VMs, which is not the case if SMT is enabled and the classic scheduler is used. The core scheduler ensures a strong security boundary for guest workload isolation. It also allows the use of SMT within guest VMs, allowing programming interfaces to control and distribute tasks across SMT threads. Windows Server 2019 and Windows Server 2022 use the core scheduler by default. New virtual machines created using VM version 9.0 or later will inherit the SMT properties of the physical host. VMs that may have been migrated to Windows Server 2019 or Windows Server 2022 and have been updated to VM version 9.0 or later will need to have their setting updated to enable SMT using the Set-VMProcessor cmdlet with the HWThreadCountPerCore parameter.
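
You can change the hypervisor scheduler type with bcdedit on the Hyper-V host (a host restart is required) and configure SMT for an individual VM with Set-VMProcessor. The following is a sketch; the VM name is a placeholder:

# Set the hypervisor scheduler type (restart the host to apply)
bcdedit /set hypervisorschedulertype Core
# Configure an updated VM to inherit the host's SMT configuration (0 = inherit from host)
Set-VMProcessor -VMName MEL-APP01 -HwThreadCountPerCore 0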

Need More Review? Hyper-V Scheduling Types

You can learn more about Hyper-V scheduling types at https://docs.microsoft.com/windows-server/virtualization/hyper-v/manage/about-hyper-v-scheduler-type-selection.

Manage VM checkpoints

Checkpoints represent the state of a VM at a particular point in time. You can create checkpoints when the VM is running or when the VM is shut down. When you create a checkpoint of a running VM, the running VM’s memory state is also stored in the checkpoint. Restoring a checkpoint taken of a running VM returns the VM to the running state it was in when the checkpoint was created. Creating a checkpoint creates either an AVHD or an AVHDX file (depending on whether the VM is using virtual hard disks in the VHD or VHDX format).

Windows Server 2016 and later support two types of checkpoints:

  • Standard checkpoints These function just as checkpoints have functioned in previous versions of Hyper-V. They capture the state, date, and hardware configuration of a virtual machine. They are designed for development and test scenarios.

  • Production checkpoints Available only in Windows Server 2016 and later, production checkpoints use backup technology inside the guest as opposed to the saved-state technology used in standard checkpoints. Production checkpoints are fully supported by Microsoft and can be used with production workloads, which is something that was not supported with the standard version of checkpoints available in previous versions of Hyper-V.

You can switch between standard and production checkpoints on a per-virtual machine basis by editing the properties of the virtual machine; in the Management section of the Properties dialog box, choose between Production and Standard checkpoints.

You can create checkpoints from Windows PowerShell with the Checkpoint-VM cmdlet. The other checkpoint-related Windows PowerShell cmdlets in Windows Server actually use the VMSnapshot noun, though on Windows 10, they confusingly have aliases that use the VMCheckpoint noun.

The Windows Server checkpoint-related cmdlets are as follows:

  • Restore-VMSnapshot Restores an existing VM checkpoint.

  • Export-VMSnapshot Allows you to export the state of a VM as it exists when a particular checkpoint was taken. For example, if you took checkpoints at 2 p.m. and 3 p.m., you could choose to export the checkpoint taken at 2 p.m. and then import the VM in the state that it was in at 2 p.m. on another Hyper-V host.

  • Get-VMSnapshot Lists the current checkpoints.

  • Rename-VMSnapshot Allows you to rename an existing VM checkpoint.

  • Remove-VMSnapshot Deletes a VM checkpoint. If the VM checkpoint is part of a chain but is not the final link, its changes are merged with the subsequent checkpoint so that later checkpoints remain valid representations of the VM at the points in time when they were taken. For example, if checkpoints were taken at 1 p.m., 2 p.m., and 3 p.m., and you delete the 2 p.m. checkpoint, the AVHD/AVHDX files associated with the 2 p.m. snapshot would be merged with the AVHD/AVHDX files associated with the 3 p.m. snapshot so that the 3 p.m. snapshot retains its integrity.

Checkpoints do not replace backups. Checkpoints are almost always stored on the same volume as the original VM hard disks, so a failure of that volume will result in all VM storage files—both original disks and checkpoint disks—being lost. If a disk in a checkpoint chain becomes corrupted, then that checkpoint and all subsequent checkpoints will be lost. Disks earlier in the checkpoint chain will remain unaffected. Hyper-V supports a maximum of 50 checkpoints per VM.
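
A typical checkpoint workflow from PowerShell looks similar to the following sketch (the VM and checkpoint names are placeholders):

# Configure the VM to use production checkpoints
Set-VM -Name MEL-APP01 -CheckpointType Production
# Create a checkpoint, list checkpoints, and then restore and remove one
Checkpoint-VM -Name MEL-APP01 -SnapshotName 'Before update'
Get-VMSnapshot -VMName MEL-APP01
Restore-VMSnapshot -VMName MEL-APP01 -Name 'Before update' -Confirm:$false
Remove-VMSnapshot -VMName MEL-APP01 -Name 'Before update'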

Implement high availability for virtual machines

There are several methods that you can use to make virtual machines fault tolerant and resilient. These include configuring VM replication, running VMs on more traditional failover clusters, and combining these technologies.

Hyper-V Replica

Hyper-V Replica provides a replica of a VM running on one Hyper-V host that can be stored and updated on another Hyper-V host. For example, you could configure a VM hosted on a Hyper-V failover cluster in Melbourne to be replicated through Hyper-V Replica to a Hyper-V failover cluster in Sydney. Hyper-V Replica allows for replication across site boundaries and does not require access to shared storage in the way that failover clustering does.

Hyper-V Replica is asynchronous. While the replica copy is consistent, it is a lagged copy with changes sent only as frequently as once every 30 seconds. Hyper-V Replica supports multiple recovery points, with a recovery snapshot taken every hour. (This incurs a resource penalty, so the setting is off by default.) This means that when activating the replica, you can choose to activate the most up-to-date copy or a lagged copy. You would choose to activate a lagged copy in the event that some form of corruption or change made the up-to-date copy problematic.

When you perform a planned failover from the primary host to the replica, you need to switch off the VM on the primary host. This ensures that the replica is in an up-to-date and consistent state. This is a drawback compared to failover or live migration, where the VM remains available during the process. A series of checks is completed before a planned failover is performed to ensure that the VM is off, that reverse replication is allowed back to the original primary Hyper-V host, and that the state of the VM on the current replica is consistent with the state of the VM on the current primary. Performing a planned failover starts the replicated VM on the original replica, which then becomes the new primary server.

Hyper-V Replica also supports unplanned failover. You perform an unplanned failover in the event that the original Hyper-V host has failed or the site that hosts the primary replica has become unavailable. When performing an unplanned failover, you can choose either the most recent recovery point or a previous recovery point. Performing unplanned failover will start the VM on the original replica, which will now become the new primary server.

Hyper-V extended replication allows you to create a second replica of the existing replica server. For example, you could configure Hyper-V replication between a Hyper-V virtualization host in Melbourne and Sydney, with Sydney hosting the replica. You could then configure an extended replica in Brisbane using the Sydney replica.

Configuring Hyper-V replica servers

To configure Hyper-V Replica, you must configure the Replication Configuration settings. The first step is to select Enable This Computer As A Replica Server. Next, select the authentication method you are going to use. If the computers are part of the same Active Directory environment, you can use Kerberos. When you use Kerberos, Hyper-V replication data isn’t encrypted when transmitted across the network, although you could configure IPsec to encrypt it at the network layer. If you want the replication traffic itself to be encrypted, another option is to use certificate-based authentication. This is useful if you are transmitting data across the public internet without using an encrypted VPN tunnel. When using certificate-based authentication, you’ll need to import and select a public certificate issued to the partner server.

The final step when configuring Hyper-V Replica is to select the servers from which the Hyper-V virtualization host will accept incoming replicated VM data. One option is to have the Hyper-V virtualization host accept replicated VMs from any authenticated Hyper-V virtualization host, using a single default location to store replica data. The other option is to configure VM replica storage on a per-server basis. For example, if you wanted to store VM replicas from one server on one volume and VM replicas from another server on a different volume, you’d configure VM replica storage on a per-server basis.

Once replication is configured on the source and destination servers, you’ll also need to enable the predefined firewall rules to allow the incoming replication traffic. There are two rules: one for replication using Kerberos (HTTP) on port 80 and the other for using certificate-based authentication on port 443.
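
For example, on the replica server you could enable the predefined rules with commands similar to the following:

# Allow inbound replication over HTTP (Kerberos) or HTTPS (certificate-based authentication)
Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTP Listener (TCP-In)'
Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTPS Listener (TCP-In)'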

Configuring VM replicas

After you have configured the source and destination replica servers, you have to configure replication on a per-VM basis. You do so by running the Enable Replication Wizard, which you can trigger by selecting Enable Replication when the VM is selected in Hyper-V Manager. To configure VM replicas, you must perform the following steps:

  • Select Replica Server Select the replica server name. If you are replicating to a Hyper-V failover cluster, you’ll need to specify the name of the Hyper-V Replica Broker. You’ll learn more about Hyper-V Replica Broker later in this chapter.

  • Choose Connection Parameters Specify the connection parameters. The options will depend on the configuration of the replica servers. On this page, depending on the existing configuration, you can choose the authentication type and whether replication data will be compressed when transmitted over the network.

  • Select Replication VHDs When configuring replication, you have the option of not replicating some of a VM’s virtual hard disks. In most scenarios, you should replicate all of a VM’s hard disk drives. One reason not to replicate a VM’s virtual hard disk would be if the virtual hard disk only stores frequently changing temporary data that wouldn’t be required when recovering the VM.

  • Replication Frequency Use this page to specify the frequency with which changes are sent to the replica server. You can choose between intervals of 30 seconds, 5 minutes, and 15 minutes.

  • Additional Recovery Points You can choose to create additional hourly recovery points. Doing so gives you the option of starting the replica from a previous point in time rather than the most recent. The advantage is that this allows you to roll back to a previous version of the VM in the event that data corruption occurs and the VM has replicated to the most recent recovery point. The replica server can store a maximum of 24 recovery points.

  • Initial Replication The last step in configuring Hyper-V Replica is choosing how to seed the initial replica. Replication works by sending changed blocks of data, so the initial replica, which sends the entire VM, will be the largest transfer. You can perform an offline transfer with external media, use an existing VM on the replica server as the initial copy (the VM for which you are configuring a replica must have been exported and then imported on the replica server), or transfer all VM data across the network. You can perform replication immediately or at a specific time in the future, such as 2 a.m. when network utilization is likely to be lower.
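
You can also configure replication for a VM from PowerShell rather than the wizard. The following is a minimal sketch that assumes Kerberos authentication over port 80; the VM and server names are placeholders:

# Enable replication of the VM to the replica server, then send the initial copy over the network
Enable-VMReplication -VMName MEL-APP01 -ReplicaServerName SYD-HV01 -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName MEL-APP01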

Replica failover

You perform a planned replica failover when you want to run the VM on the replica server rather than on the primary host. Planned failover involves shutting down the VM, which ensures that the replica will be up to date. Contrast this with Hyper-V live migration, which you perform while the VM is running. When performing a planned failover, you can configure the VM on the replica server to automatically start once the process completes; you can also configure reverse replication so that the current replica server becomes the new primary server and the current primary becomes the new replica server.

In the event that the primary server becomes unavailable, you can trigger an unplanned failover. You would then perform the unplanned failover on the replica server (as the primary is not available). When performing an unplanned failover, you can select any of the up to 24 previously stored recovery points.
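
The equivalent PowerShell cmdlets for these operations are Start-VMFailover, Complete-VMFailover, and Set-VMReplication with the -Reverse parameter. A planned failover looks similar to the following sketch, which uses placeholder names and assumes the primary VM has already been shut down:

# On the primary server: prepare the planned failover after the VM has been shut down
Start-VMFailover -VMName MEL-APP01 -Prepare
# On the replica server: fail over, reverse replication, and start the VM
Start-VMFailover -VMName MEL-APP01
Set-VMReplication -VMName MEL-APP01 -Reverse
Start-VM -Name MEL-APP01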

Hyper-V Replica Broker

You need to configure and deploy Hyper-V Replica Broker if your Hyper-V replica configuration includes a Hyper-V failover cluster as a source or destination. You don’t need to configure and deploy Hyper-V Replica Broker if both the source and destination servers are not participating in a Hyper-V failover cluster. You install the Hyper-V Replica Broker role using Failover Cluster Manager after you’ve enabled the Hyper-V role on cluster nodes.

Hyper-V failover clusters

One of the most common uses for failover clustering is to host Hyper-V virtual machines. Hyper-V failover clusters allow VMs to move to another virtualization host in the event that the original experiences a disruption or when you need to rebalance the way that VMs across the cluster use Hyper-V host resources.

Hyper-V host cluster storage

When deployed on Hyper-V host clusters, the configuration and virtual hard disk files for highly available VMs are hosted on shared storage. This shared storage can be one of the following:

  • Serial Attached SCSI (SAS) Suitable for two-node failover clusters where the cluster nodes are in close proximity to each other.

  • iSCSI storage Suitable for failover clusters with two or more nodes. Windows Server includes iSCSI Target Software, allowing it to host iSCSI targets that can be used as shared storage by Windows failover clusters.

  • Fibre Channel Fibre Channel/Fibre Channel over Ethernet storage requires special network hardware. While generally providing better performance than iSCSI, Fibre Channel components tend to be more expensive.

  • SMB 3.0 file shares configured as continuously available storage This special type of file share is highly available, with multiple cluster nodes able to maintain access to the file share. This configuration requires multiple clusters. One cluster hosts the highly available storage used by the VMs, and the other cluster hosts the highly available VMs.

  • Cluster Shared Volumes (CSVs) CSVs can also be used for VM storage in Hyper-V failover clusters. As with continuously available file shares, multiple nodes in the cluster have access to the files stored on CSVs, ensuring that failover occurs with minimal disruption. Unlike the SMB 3.0 file share configuration, CSVs are typically hosted by the same failover cluster that runs the VMs, with every node having direct access to the shared LUN.

When considering storage for a Hyper-V failover cluster, remember the following:

  • Ensure volumes used for disk witnesses are formatted as either NTFS or ReFS.

  • Avoid allowing nodes from separate failover clusters to access the same shared storage by using LUN masking or zoning.

  • Where possible, use storage spaces to host volumes presented as shared storage.

Cluster quorum

Hyper-V failover clusters remain functional until they do not have enough active votes to retain quorum. Votes can consist of nodes that participate in the cluster as well as disk or file share witnesses. The calculation on whether the cluster maintains quorum is dependent on the cluster quorum mode. When you deploy a Windows Server failover cluster, one of the following modes will automatically be selected, depending on the current cluster configuration:

  • Node Majority

  • Node and Disk Majority

  • Node and File Share Majority

  • No Majority: Disk Only

You can change the cluster mode manually, or with Dynamic Quorum in Windows Server, the cluster mode will change automatically when you add or remove nodes, a witness disk, or a witness share. The following quorum modes are available:

  • Node Majority This cluster quorum mode is chosen automatically during setup if a cluster has an odd number of nodes. When this cluster quorum mode is used, a file share or disk witness is not used. A failover cluster will retain quorum as long as the number of available nodes is more than the number of failed nodes that retain cluster membership. For example, if you deploy a nine-node failover cluster, the cluster will retain quorum as long as five cluster nodes are able to communicate with each other.

  • Node and Disk Majority This model is chosen automatically during setup if the cluster has an even number of nodes and shared storage is available to function as a disk witness. In this configuration, cluster nodes and the disk witness each have a vote when calculating quorum. As with the Node Majority model, the cluster will retain quorum as long as the number of votes that remain in communication exceeds the number of votes that cannot be contacted. For example, if you deployed a six-node cluster and a witness disk, there would be a total of seven votes. As long as four of those votes remained in communication with each other, the failover cluster would retain quorum.

  • Node and File Share Majority This model is used when a file share is configured as a witness. Each node and the file share have a vote when it comes to determining if quorum is retained. As with other models, a majority of the votes must be present for the cluster to retain quorum. Node and File Share Majority is suitable for organizations that are deploying multisite clusters; for example, placing half the cluster nodes in one site, half the cluster nodes in another site, and the file share witness in a third site. If one site fails, the other site is able to retain communication with the site that hosts the file share witness, in which case quorum is retained.

  • No Majority: Disk Only This model must be configured manually and must only be used in testing environments because the only vote that counts toward quorum is that of the disk witness on shared storage. The cluster will retain quorum as long as the witness is available, even if every node but one fails. Similarly, the cluster will be in a failed state if all the nodes are available but the shared storage hosting the disk witness goes offline.
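
You can view or change the quorum configuration with the Get-ClusterQuorum and Set-ClusterQuorum cmdlets; for example (the disk resource name and file share path are placeholders):

# View the current quorum configuration
Get-ClusterQuorum
# Configure a disk witness, a file share witness, or node majority with no witness
Set-ClusterQuorum -NodeAndDiskMajority 'Cluster Disk 1'
Set-ClusterQuorum -NodeAndFileShareMajority '\\MEL-FS01\ClusterWitness'
Set-ClusterQuorum -NodeMajority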

Cluster node weight

Rather than every node in the cluster having an equal vote when determining quorum, you can configure which cluster nodes can vote to determine quorum by running the Configure Cluster Quorum Wizard. Configuring node weight is useful if you are deploying a multisite cluster and you want to control which site retains quorum in the event that communication between the sites is lost. You can determine which nodes in a cluster are currently assigned votes by selecting Nodes in the Failover Cluster Manager.
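
You can also view and adjust node votes from PowerShell. In the following sketch, a node in a secondary site has its quorum vote removed (the node name is a placeholder):

# Display the current vote assignment for each node
Get-ClusterNode | Select-Object Name, NodeWeight, DynamicWeight, State
# Remove the quorum vote from a node in the secondary site
(Get-ClusterNode -Name 'SYD-HV02').NodeWeight = 0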

Dynamic quorum

Dynamic quorum allows cluster quorum to be recalculated automatically each time a node is removed from or added to a cluster. By default, dynamic quorum is enabled on Windows Server clusters. Dynamic quorum works in the following manner:

  • The vote of the witness is automatically adjusted based on the number of voting nodes in the cluster. If the cluster has an even number of voting nodes, the witness has a vote. If the number of voting nodes becomes odd, for example because a node is added or removed, the witness loses its vote.

  • In the event of a 50 percent node split, dynamic quorum can adjust the vote of a node. This is useful in avoiding “split brain” syndrome during site splits with multisite failover clusters.

An advantage of dynamic quorum is that as long as nodes are evicted in a graceful manner, the cluster will reconfigure quorum appropriately. This means that you could change a nine-node cluster to a five-node cluster by evicting nodes, and the new quorum model would automatically be recalculated assuming that the cluster only had five nodes. With dynamic quorum, it is a good idea to specify a witness even if the initial cluster configuration has an odd number of nodes; doing so means a witness vote will automatically be included in the event that an administrator adds or removes a node from the cluster.

Cluster networking

In lab and development environments, it’s reasonable to have failover cluster nodes that are configured with a single network adapter. In production environments with mission-critical workloads, you should configure cluster nodes with multiple network adapters, institute adapter teaming, and leverage separate networks. Separate networks should include:

  • A network dedicated for connecting cluster nodes to shared storage

  • A network dedicated for internal cluster communication

  • The network that clients use to access services deployed on the cluster

When configuring IPv4 or IPv6 addressing for failover cluster nodes, ensure that addresses are assigned either statically or dynamically to cluster node network adapters. Avoid using a mixture of statically and dynamically assigned addresses, as this will cause an error with the Cluster Validation Wizard. Also ensure that cluster network adapters are configured with a default gateway. While the Cluster Validation Wizard will not provide an error if a default gateway is not present for the network adapters of each potential cluster node, you will be unable to create a failover cluster unless a default gateway is present.

You can use the Validate-DCB tool to validate cluster networking configuration. This tool is useful in situations where you need to check networking configuration to support RDMA and Switch Embedded Teaming (SET).

Force Quorum Resiliency

Imagine you have a five-node, multisite cluster in Melbourne and Sydney, with three nodes in Sydney. Also, imagine that internet connectivity to the Sydney site is lost. Within the Sydney site itself, the cluster will remain running because with three nodes, it has retained quorum. But if external connectivity to the Sydney site is not available, you may instead need to forcibly start the cluster in the Melbourne site (which will be in a failed state because only two nodes are present) using the /fq (forced quorum) switch to provide services to clients.

In the past, when connectivity was restored, this would have led to a “split brain” or partitioned cluster, as both sides of the cluster would be configured to be authoritative. To resolve this with failover clusters running Windows Server 2012 or earlier, you would need to manually restart the nodes that were not part of the forced quorum set using the /pq (prevent quorum) switch. Windows Server 2019 and later provide a feature known as Force Quorum Resiliency that automatically restarts the nodes that were not part of the forced quorum set so that the cluster does not remain in a partitioned state.

Cluster Shared Volumes

Cluster Shared Volumes (CSVs) are a high-availability storage technology that allows multiple cluster nodes in a failover cluster to have read-write access to the same LUN. This has the following advantages for Hyper-V failover clusters:

  • VMs stored on the same LUN can be run on different cluster nodes. This reduces the number of LUNs required to host VMs because the VMs stored on a CSV aren’t tied to one specific Hyper-V failover cluster node; instead, the VMs can be spread across multiple Hyper-V failover cluster nodes.

  • Switch-over between nodes is almost instantaneous in the event of failover because the new host node doesn’t have to go through the process of seizing the LUN from the failed node.

CSVs are hosted from servers running the Windows Server 2012 or later operating systems and allow multiple nodes in a cluster to access the same NTFS- or ReFS-formatted file system. CSVs support BitLocker, with each node performing decryption of encrypted content using a cluster computer account. CSVs also integrate with SMB (Server Message Block) Multichannel and SMB Direct, allowing traffic to be sent through multiple networks and to leverage network cards that include Remote Direct Memory Access (RDMA) technology. CSVs can also automatically scan and repair volumes without requiring storage to be taken offline. You can convert cluster storage to a CSV using the Disks node of the Failover Cluster Manager.
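
You can also add eligible cluster disks to Cluster Shared Volumes with PowerShell; for example (the disk resource name is a placeholder):

# List cluster disk resources and convert one to a Cluster Shared Volume
Get-ClusterResource | Where-Object {$_.ResourceType -eq 'Physical Disk'}
Add-ClusterSharedVolume -Name 'Cluster Disk 2'
# CSVs appear to every node under C:\ClusterStorage
Get-ClusterSharedVolume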

Active Directory detached clusters

Active Directory detached clusters, also called “clusters without network names,” are a feature of Windows Server 2012 R2 and later operating systems. Detached clusters have their names stored in DNS but do not require the creation of a computer account within Active Directory. The benefit of detached clusters is that it is possible to create them without requiring that a computer object be created in Active Directory to represent the cluster; also, the account used to create the cluster need not have permissions to create computer objects in Active Directory. Although the account used to create a detached cluster does not need permission to create computer objects in Active Directory, the nodes that will participate in the detached cluster must still be domain-joined.
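
You create a detached cluster with the New-Cluster cmdlet by setting the administrative access point to DNS. The following is a minimal sketch with placeholder names and addresses:

# Create an Active Directory detached cluster (name registered in DNS only)
New-Cluster -Name MEL-CLUSTER1 -Node MEL-HV01, MEL-HV02 -StaticAddress 192.168.10.50 -AdministrativeAccessPoint Dns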

Preferred owner and failover settings

Cluster role preferences settings allow you to configure a preferred owner for a cluster role. When you do this, the role will be hosted on the node listed as the preferred owner. You can specify multiple preferred owners and configure the order in which a role will attempt to return to a specific cluster node.

Failover settings allow you to configure how many times a service will attempt to restart or fail over in a specific period. By default, a cluster service can fail over twice in a six-hour period before the failover cluster will leave the cluster role in a failed state. The failback setting allows you to configure the amount of time a clustered role that has failed over to a node that is not its preferred owner will wait before falling back to the preferred owner.
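
You can set preferred owners and failover and failback behavior from PowerShell as well as from Failover Cluster Manager. The following sketch uses a placeholder clustered role name and node names:

# Set the preferred owners (in order) for a clustered role
Set-ClusterOwnerNode -Group 'SQL Server (MSSQLSERVER)' -Owners MEL-HV01, MEL-HV02
# Allow three failovers in a six-hour period and fail back between 1 A.M. and 3 A.M.
$group = Get-ClusterGroup -Name 'SQL Server (MSSQLSERVER)'
$group.FailoverThreshold = 3
$group.FailoverPeriod = 6
$group.AutoFailbackType = 1
$group.FailbackWindowStart = 1
$group.FailbackWindowEnd = 3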

VM Network Health Detection

VM Network Health Detection is a feature for VMs that are deployed on Hyper-V host clusters. With VM Network Health Detection, you configure a VM’s network adapter settings and mark certain networks as being protected. You do this in the Advanced Features section of the Network Adapter properties dialog box.

In the event that a VM is running on a cluster node where the network marked as protected becomes unavailable, the cluster will automatically live migrate the VM to a node where the protected network is available. For example, say you have a four-node Hyper-V failover cluster. Each node has multiple network adapters, and a virtual switch named Alpha maps as an external virtual switch to a physical network adapter on each node. A VM, configured as highly available and hosted on the first cluster node, is connected to a virtual switch, Alpha. The network adapter on this VM is configured with the protected network option. After the VM has been switched on and has been running for some time, a fault occurs, causing the physical network adapter mapped to the virtual switch Alpha on the first cluster node to fail. When this happens, the VM will automatically be live migrated to another cluster node where the virtual switch Alpha is working.

VM drain on shutdown

VM drain on shutdown is a feature that will automatically live migrate all running VMs off a node if you shut down that node without putting it into maintenance mode. If you are following best practice, you’ll be putting nodes into maintenance mode and live migrating running workloads away from nodes that you will restart or intend to shut down anyway. The main benefit of VM drain on shutdown is that, in the event that you are having a bad day and forget to put a cluster node into maintenance mode before shutting it down or restarting it, any running VMs will be live migrated without requiring your direct intervention.
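
You can confirm that the feature is enabled by checking the cluster’s DrainOnShutdown property, where a value of 1 indicates it is enabled, as in this sketch:

# A value of 1 means running VMs are drained automatically when a node shuts down
(Get-Cluster).DrainOnShutdown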

Hyper-V guest clusters

A guest cluster is a failover cluster that consists of two or more VMs as cluster nodes. You can run a guest cluster on a Hyper-V failover cluster, or you can run guest clusters with nodes on separate Hyper-V failover clusters. While deploying a guest cluster on Hyper-V failover clusters may seem as though it is taking redundancy to an extreme, there are good reasons to deploy guest clusters and Hyper-V failover clusters together:

  • Failover clusters monitor the health of clustered roles to ensure that they are functioning. This means that a guest failover cluster can detect when the failure of a clustered role occurs and can take steps to recover the role. For example, say you deploy an SQL Server failover cluster as a guest cluster on a Hyper-V failover cluster. One of the SQL Server instances that participates in the guest cluster suffers a failure. In this scenario, failover occurs within the guest cluster, and another instance of SQL Server hosted on the other guest cluster node continues to service client requests.

  • Deploying guest and Hyper-V failover clusters together allows you to move applications to other guest cluster nodes while you are performing servicing tasks. For example, you may need to apply software updates that require a restart to the operating system that hosts an SQL Server instance. If this SQL Server instance is participating in a Hyper-V guest cluster, you could move the clustered role to another node, apply software updates to the original node, perform the restart, and then move the clustered SQL Server role back to the original node.

  • Deploying guest and Hyper-V failover clusters together allows you to live migrate guest cluster VMs from one host cluster to another host cluster while ensuring clients retain connectivity to clustered applications. For example, suppose a two-node guest cluster hosting SQL Server is hosted on one Hyper-V failover cluster in your organization’s datacenter. You want to move the guest cluster from its current host Hyper-V failover cluster to a new Hyper-V failover cluster. By migrating one guest cluster node at a time from the original Hyper-V failover cluster to the new Hyper-V failover cluster, you’ll be able to continue to service client requests without interruption, failing over SQL Server to the guest node on the new Hyper-V failover cluster after the first node completes its migration and before migrating the second node across.

Hyper-V guest cluster storage

Just as you can configure a Hyper-V failover cluster where multiple Hyper-V hosts function as failover cluster nodes, you can configure failover clusters within VMs, where each failover cluster node is a VM. Even though failover cluster nodes must be members of the same Active Directory domain, there is no requirement that they be hosted on the same cluster. For example, you could configure a multisite failover cluster where the cluster nodes are hosted as highly available VMs, each hosted on its own Hyper-V failover clusters in each site.

When considering how to deploy a VM guest cluster, you’ll need to choose how you will provision the shared storage that is accessible to each cluster node. The options for configuring shared storage for VM guest clusters include:

  • iSCSI

  • Virtual Fibre Channel

  • Cluster Shared Volumes

  • Continuously Available File Shares

  • Shared virtual hard disks

The conditions for using iSCSI, Virtual Fibre Channel, Cluster Shared Volumes, and Continuously Available File Shares with VM guest clusters are essentially the same for VMs as they are when configuring traditional physically hosted failover cluster nodes.

Shared virtual hard disk

Shared virtual hard disks are a special type of shared storage only available to VM guest clusters. With shared virtual hard disks, each guest cluster node can be configured to access the same shared virtual hard disk. Each VM cluster node’s operating system will recognize the shared virtual hard disk as shared storage when building the VM guest failover cluster.

Shared virtual hard disks have the following requirements:

  • Can be used with Generation 1 and Generation 2 VMs.

  • Can only be used with guest operating systems running Windows Server 2012 or later. If the guest operating systems are running Windows Server 2012, they must be updated to use the Windows Server 2012 R2 integration services components.

  • Can only be used if virtualization hosts are running the Windows Server 2012 R2 or later version of Hyper-V.

  • Must be configured to use the VHDX virtual hard disk format.

  • Must be connected to a virtual SCSI controller.

  • When deployed on a failover cluster, the shared virtual hard disk itself should be located on shared storage, such as a Continuously Available File Share or Cluster Shared Volume. This is not necessary when configuring a guest failover cluster on a single Hyper-V server that is not part of a Hyper-V failover cluster.

  • VMs can only use shared virtual hard disks to store data. You can’t boot a VM from a shared virtual hard disk.

The configuration of shared virtual hard disks differs from the traditional configuration of VM guest failover clusters because you configure the connection to shared storage by editing the VM properties rather than connecting to the shared storage from within the VM. Windows Server 2016 and later support shared virtual hard disks being resized and used with Hyper-V Replica.
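
For example, on a Windows Server 2012 R2 or later host you could attach an existing VHDX file to a VM’s virtual SCSI controller as a shared virtual hard disk with a command similar to the following sketch (the VM name and path are placeholders):

# Attach a VHDX stored on a Cluster Shared Volume to the VM as a shared virtual hard disk
Add-VMHardDiskDrive -VMName MEL-SQL01 -ControllerType SCSI -Path 'C:\ClusterStorage\Volume1\SQLData.vhdx' -SupportPersistentReservations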

Hyper-V VHD Sets

VHD Sets are a newer version of shared virtual hard disks. Hyper-V VHD Sets use a new virtual hard disk format that uses the .vhds extension. VHD Sets support online resizing of shared virtual disks, Hyper-V Replica, and application-consistent Hyper-V checkpoints.

You can create a VHD Set file from Hyper-V Manager or by using the New-VHD cmdlet with the file type set to VHDS when specifying the virtual hard disk name. You can use the Convert-VHD cmdlet to convert an existing shared virtual hard disk file to a VHD Set file as long as you have taken the VMs that use the shared virtual hard disk file offline and removed the shared virtual hard disk from the VM using the Remove-VMHardDiskDrive cmdlet.
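
The following sketch shows creating a new VHD Set file and converting an existing shared VHDX file to the VHD Set format (the paths are placeholders, and the VMs using the disk must be offline before converting):

# Create a new 100 GB dynamically expanding VHD Set file
New-VHD -Path 'C:\ClusterStorage\Volume1\GuestClusterData.vhds' -SizeBytes 100GB -Dynamic
# Convert an existing shared VHDX file to a VHD Set
Convert-VHD -Path 'C:\ClusterStorage\Volume1\SQLData.vhdx' -DestinationPath 'C:\ClusterStorage\Volume1\SQLData.vhds'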

Hyper-V live migration

Live migration is the process of moving an operational VM from one physical virtualization host to another with no interruption to VM clients or users. Live migration is supported between cluster nodes that share storage, between separate Hyper-V virtualization hosts that are not participating in a failover cluster when an SMB 3.0 file share is used as storage, and even between separate Hyper-V hosts that are not participating in a failover cluster and do not share storage, using a process called “shared nothing live migration.”

Live migration has the following prerequisites:

  • There must be two or more servers running Hyper-V that use processors from the same manufacturer (for example, all Hyper-V virtualization hosts configured with Intel processors or all Hyper-V virtualization hosts configured with AMD processors).

  • Hyper-V virtualization hosts need to be members of the same domain, or they must be members of domains that have a trust relationship with each other.

  • VMs must be configured to use virtual hard disks or virtual Fibre Channel disks; pass-through disks are not allowed.

It is possible to perform live migration with VMs configured with pass-through disks under the following conditions:

  • VMs are hosted on a Windows Server Hyper-V failover cluster.

  • Live migration will be within nodes that participate in the same Hyper-V failover cluster.

  • VM configuration files are stored on a Cluster Shared Volume.

  • The physical disk that is used as a pass-through disk is configured as a storage disk resource that is controlled by the failover cluster. This disk must be configured as a dependent resource for the highly available VM.

If performing a live migration using shared storage, the following conditions must be met:

  • The SMB 3.0 share needs to be configured so that the source and the destination virtualization host’s computer accounts have read and write permissions.

  • All VM files (virtual hard disks, configuration files, and snapshot files) must be located on the SMB 3.0 share. You can use storage migration to move VM files to an SMB 3.0 share while the VM is running before performing a live migration using this method.

You must configure the source and destination Hyper-V virtualization hosts to support live migrations by enabling live migrations in the Hyper-V settings. When you do this, you specify the maximum number of simultaneous live migrations and the networks that you will use for live migration. Microsoft recommends using an isolated network for live migration traffic, though this is not a requirement.

The next step in configuring live migration is choosing which authentication protocol and live migration performance options to use. You select these in the Advanced Features area of the Live Migrations settings. The default authentication protocol is CredSSP (Credential Security Support Provider). CredSSP requires local sign-in to both the source and the destination Hyper-V virtualization host to perform live migration. Kerberos allows you to trigger live migration remotely. To use Kerberos, you must configure the computer accounts for each Hyper-V virtualization host with constrained delegation for the CIFS (Common Internet File System) and Microsoft Virtual System Migration Service services, granting permissions to the virtualization hosts that will participate in the live migration partnership. The performance options allow you to speed up live migration: compression speeds up the transfer at the cost of increased processor utilization, and the SMB option uses SMB Direct if the network adapters used for the live migration process support remote direct memory access (RDMA) and RDMA capabilities are enabled.
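
As a rough sketch, the following PowerShell commands configure a host for live migration with Kerberos authentication and the SMB performance option; the migration limit and subnet shown here are example values:

# Enable incoming and outgoing live migrations on this host
Enable-VMMigration

# Use Kerberos authentication and the SMB performance option, allowing two simultaneous migrations
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos -VirtualMachineMigrationPerformanceOption SMB -MaximumVirtualMachineMigrations 2

# Restrict live migration traffic to an isolated subnet
Add-VMMigrationNetwork 192.168.10.0/24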

Manage VHD and VHDX files

Hyper-V supports two separate virtual hard disk formats. Virtual hard disk files in VHD format are limited to 2,040 GB. Virtual hard disks in this format can be used on all supported versions of Hyper-V. Other than the size limitation, the important thing to remember is that you cannot use virtual hard disk files in VHD format with Generation 2 VMs.

Virtual hard disk files in VHDX format are an improvement over virtual hard disks in VHD format. The main limitation of virtual hard disks in VHDX format is that they cannot be used with Hyper-V on Windows Server 2008 or Windows Server 2008 R2, which shouldn’t be as much of a problem now since these operating systems no longer have mainstream support. Virtual hard disks in VHDX format have the following benefits:

  • Can be up to 64 TB in size

  • Have larger block size for dynamic and differential disks

  • Provide 4 KB logical sector virtual disks

  • Have an internal log that reduces chance of corruption

  • Support TRIM to reclaim unused space

You can convert hard disks between VHD and VHDX format. You can create virtual hard disks at the time you create the VM, by using the New Virtual Hard Disk Wizard, or by using the New-VHD PowerShell cmdlet. You can convert virtual hard disks using the Convert-VHD PowerShell cmdlet.
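
For example, the following commands show one way to create a dynamically expanding disk and later convert it; the file paths are hypothetical:

# Create a 500 GB dynamically expanding virtual hard disk in VHDX format
New-VHD -Path D:\VMs\Data01.vhdx -SizeBytes 500GB -Dynamic

# Convert it to VHD format (possible only because it is smaller than 2,040 GB)
Convert-VHD -Path D:\VMs\Data01.vhdx -DestinationPath D:\VMs\Data01.vhd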

Fixed-sized disks

Virtual hard disks can be dynamic, differencing, or fixed. When you create a fixed-size disk, all space used by the disk is allocated on the hosting volume at the time of creation. Fixed disks increase performance if the physical storage medium does not support Windows Offloaded Data Transfer (ODX). Improvements in Windows Server reduce the performance benefit of fixed-size disks when the storage medium supports ODX. The space to be allocated to the disk must be present on the host volume when you create the disk. For example, you can’t create a 3 TB fixed disk on a volume that has only 2 TB of space.

Dynamically expanding disks

A dynamically expanding disk uses a small file initially and then grows as the VM allocates data to the virtual hard disk. This means you can create a 3 TB dynamic virtual hard disk on a 2 TB volume because the entire 3 TB will not be allocated at disk creation. However, in this scenario, you would need to ensure that you extend the size of the 2 TB volume before the dynamic virtual disk outgrows the available storage space.

Differencing disks

Differencing disks are a special type of virtual hard disk that has a child relationship with a parent hard disk. Parent disks can be fixed size or dynamic virtual hard disks, but the differencing disk must be the same type as the parent disk. For example, you can create a differencing disk in VHDX format for a parent disk that uses VHDX format, but you cannot create a differencing disk in VHD format for a parent disk in VHDX format.

Differencing disks record the changes that would otherwise be made to the parent hard disk by the VM. For example, differencing disks are used to record Hyper-V VM checkpoints. One parent virtual hard disk can have multiple differencing disks associated with it.

For example, you can create a specially prepared parent virtual hard disk by installing Windows Server on a VM by running the sysprep utility within the VM and then shutting the VM down. You can use the virtual hard disk created by this process as a parent virtual hard disk. In this scenario, when you create new Windows Server VMs, you would configure the VMs to use a new differencing disk that uses the sysprepped virtual hard disk as a parent. When you run the new VM, it will write any changes that it would normally make to the full virtual hard disk to the differencing disk. In this scenario, deploying new Windows Server VMs becomes a simple matter of creating new VMs that use a differencing disk that uses the sysprepped Windows Server virtual hard disk as a parent.

You can create differencing hard disks using the New Virtual Hard Disk Wizard or the New-VHD Windows PowerShell cmdlet. You must specify the parent disk during the creation process.
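
For example, the following command sketches creating a differencing disk from a hypothetical sysprepped parent disk:

# Create a differencing disk whose parent is a sysprepped Windows Server image
New-VHD -Path D:\VMs\Node1.vhdx -ParentPath D:\Parents\WS2022-Sysprep.vhdx -Differencing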

The key to using differencing disks is to ensure that you don't make changes to the parent disk; doing so will invalidate the relationship with any child disks. Generally, differencing disks can provide storage efficiencies because only the changes are recorded on the child disks. For example, rather than storing 10 separate full instances of Windows Server 2022, you could create one parent disk and 10 much smaller differencing disks to accomplish the same objective. If you store VM virtual hard disks on a volume that has been deduplicated, these efficiencies are reduced.

Modifying virtual hard disks

You can perform the following tasks to modify existing virtual hard disks:

  • Convert a virtual hard disk in VHD format to VHDX format.

  • Convert a virtual hard disk in VHDX format to VHD format.

  • Change the disk from fixed size to dynamically expanding or from dynamically expanding to fixed size.

  • Shrink or enlarge the virtual hard disk.

You convert virtual hard disk type (VHD to VHDX, VHDX to VHD, dynamic to fixed, or fixed to dynamic) by using the Edit Virtual Hard Disk Wizard or by using the Convert-VHD PowerShell cmdlet. When converting from VHDX to VHD, remember that virtual hard disks in VHD format cannot exceed 2,040 GB in size. So, while it is possible to convert virtual hard disks in VHDX format that are smaller than 2,040 GB to VHD format, you will not be able to convert virtual hard disks that are larger than 2,040 GB.

You can only perform conversions from one format to another and from one type to another while the VM is powered off. To shrink a virtual hard disk, you must first shrink the volume it contains by using the disk manager in the VM operating system, and then shrink the virtual hard disk itself using the Edit Virtual Hard Disk Wizard or the Resize-VHD cmdlet. You can resize a virtual hard disk while the VM is running under the following conditions:

  • The virtualization host is running Windows Server 2012 R2 or later.

  • The virtual hard disk is in VHDX format.

  • The virtual hard disk is attached to a virtual SCSI controller.

  • If you are shrinking the virtual hard disk, the volume within it must already have been shrunk by using the disk manager in the VM operating system, as described earlier.
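
For example, the following command grows a virtual hard disk to 200 GB; the path and size are placeholders:

# Expand a VHDX file (can be done online if the conditions above are met)
Resize-VHD -Path D:\VMs\Data01.vhdx -SizeBytes 200GB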

Pass-through disks

Pass-through disks, also known as directly attached disks, allow a VM to directly access the underlying storage rather than accessing a virtual hard disk that resides on that storage. For example, with Hyper-V you normally connect a VM to a virtual hard disk file hosted on a volume formatted with NTFS or ReFS. With pass-through disks, the VM instead accesses the disk directly, and there is no virtual hard disk file.

Pass-through disks allow VMs to access larger volumes than are possible when using virtual hard disks in VHD format. In earlier versions of Hyper-V, such as the version available with Windows Server 2008, pass-through disks provided performance advantages over virtual hard disks. The need for pass-through disks has diminished with the availability of virtual hard disks in VHDX format because that format allows you to create much larger volumes.

Pass-through disks can be directly attached to the virtualization host, or they can be attached to Fibre Channel or iSCSI disks. When adding a pass-through disk, you will need to ensure that the disk is offline. You can use the Disk Management console or the diskpart.exe utility on the virtualization host to set a disk to be offline.
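
As a minimal sketch, the following command takes disk number 3 offline by using the Storage module instead of Disk Management or diskpart.exe:

# Take the physical disk offline so it can be used as a pass-through disk
Set-Disk -Number 3 -IsOffline $true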

To add a pass-through disk using PowerShell, use the Get-Disk cmdlet to get the properties of the disk that you want to add as a pass-through disk. Next, pipe the result to the Add-VMHardDiskDrive cmdlet. For example, to add physical disk 3 to the VM named Alpha-Test, execute the following command:

Get-Disk 3 | Add-VMHardDiskDrive -VMName Alpha-Test

A VM that uses pass-through disks will not support VM checkpoints. Pass-through disks also cannot be backed up with backup programs that use the Hyper-V VSS writer.

Virtual Fibre Channel adapters

Virtual Fibre Channel allows you to make direct connections from VMs running on Hyper-V to Fibre Channel storage. Virtual Fibre Channel is supported on Windows Server 2016 and later when the following requirements are met:

  • The computer functioning as the Hyper-V virtualization host must have a Fibre Channel host bus adapter (HBA) that has a driver that supports Virtual Fibre Channel.

  • The SAN must be NPIV (N_Port ID Virtualization) enabled.

  • The VM must be running a supported version of the guest operating system.

  • Virtual Fibre Channel LUNs cannot be used to boot Hyper-V VMs.

VMs running on Hyper-V support up to four virtual Fibre Channel adapters, each of which can be associated with a separate storage area network (SAN). Before you can use a Virtual Fibre Channel adapter, you will need to create at least one virtual SAN on the Hyper-V virtualization host. A virtual SAN is a group of physical Fibre Channel ports that connect to the same SAN. VM live migration and VM failover clusters are supported; however, virtual Fibre Channel does not support VM checkpoints, host-based backup, or live migration of SAN data.

Storage QoS

Storage Quality of Service (QoS) allows you to limit the maximum number of IOPS (input/output operations per second) for virtual hard disks. IOPS are measured in 8 KB increments. If you specify a maximum IOPS value, the virtual hard disk will be unable to exceed this value. You use Storage QoS to ensure that no single workload on a Hyper-V virtualization host consumes a disproportionate amount of storage resources.

It’s also possible to specify a minimum IOPS value for each virtual hard disk. You would do this if you wanted to be notified that a specific virtual hard disk’s IOPS has fallen below a threshold value. When the number of IOPS falls below the specified minimum, an event is written to the event log. You configure Storage QoS on a per-virtual hard disk basis.
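
For example, the following command sketches applying both limits to a hypothetical virtual hard disk attached to the first SCSI controller of a VM named Alpha-Test:

# Cap the disk at 500 IOPS and log an event if it falls below 100 IOPS
Set-VMHardDiskDrive -VMName Alpha-Test -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 500 -MinimumIOPS 100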

Hyper-V storage optimization

Several technologies built into Windows Server allow you to optimize the performance and data storage requirements for files associated with VMs.

Deduplication

In Windows Server 2019 and later, both ReFS and NTFS volumes support deduplication. Deduplication is a process by which duplicate instances of data are removed from a volume and replaced with pointers to the original instance. Deduplication is especially effective when used with volumes that host virtual hard disk files because many of these files contain duplicate copies of data, such as the VM’s operating system and program files.

Once deduplication is installed, you can enable it through the Volumes node of the File and Storage Services section of the Server Manager console. When enabling deduplication, you specify whether you want to use a general file server data deduplication scheme or a virtual desktop infrastructure (VDI) scheme. For volumes that host VM files, the VDI scheme is appropriate. You can’t enable deduplication on the operating system volume; deduplication may only be enabled on data volumes. For this reason, remember to store VM configuration files and hard disks on a volume that is separate from the operating system volume.
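
As a hedged sketch, the following commands install deduplication and enable it on a hypothetical data volume D: using the VDI-oriented usage type:

# Install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on the data volume that stores VM files
Enable-DedupVolume -Volume D: -UsageType HyperV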

Storage tiering

Storage tiering is a technology that allows you to mix fast storage, such as solid-state disk (SSD), with traditional spinning magnetic disks to optimize both storage performance and capacity. Storage tiering works on the premise that a minority of the data stored on a volume is responsible for the majority of read and write operations. Storage tiering can be enabled through the storage spaces functionality, and rather than creating a large volume that consists entirely of SSDs, you create a volume consisting of both solid-state and spinning magnetic disks. In this configuration, frequently accessed data is moved to the parts of the volume hosted on the SSDs, and less frequently accessed data is moved to the parts of the volume hosted on the slower spinning magnetic disks. This configuration allows many of the performance benefits of an SSD-only volume to be realized without the cost of using SSD-only storage.

When used in conjunction with deduplication, frequently accessed deduplicated data is moved to the faster storage, providing reduced storage requirements, while improving performance over what would be possible if the volume hosting VM files were solely composed of spinning magnetic disks. You also have the option of pinning specific files to the faster storage, which overrides the algorithms that move data according to accumulated utilization statistics. You configure storage tiering using PowerShell.
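
The following is a minimal sketch of creating a tiered virtual disk from an existing storage pool; the pool name, tier sizes, and resiliency setting are assumptions:

# Define an SSD tier and an HDD tier in an existing storage pool
$ssd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD

# Create a mirrored virtual disk that spans both tiers
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName VMData -StorageTiers $ssd,$hdd -StorageTierSizes 200GB,2TB -ResiliencySettingName Mirror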

Storage migration

With storage migration, you can move a VM’s virtual hard disk files, checkpoint files, smart paging files, and configuration files from one location to another. You can perform storage migration while the VM is either running or powered off. You can move data to any location that is accessible to the Hyper-V host. This allows you to move data from one volume to another, from one folder to another, or even to an SMB 3.0 file share on another computer. When performing storage migration, choose the Move the VM’s Storage option.

For example, you could use storage migration to move VM files from one Cluster Share Volume to another on a Hyper-V failover cluster without interrupting the VM’s operation. You have the option of moving all data to a single location, moving VM data to separate locations, or moving only the VM’s virtual hard disk. To move the VM’s data to different locations, select the items you want to move and the destination locations.
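
For example, the following command sketches moving all of a VM's files to another Cluster Shared Volume; the VM name and destination path are hypothetical:

# Move all storage for the VM to a new location while it keeps running
Move-VMStorage -VMName Alpha-Test -DestinationStoragePath C:\ClusterStorage\Volume2\Alpha-Test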

Exporting, importing, and copying VMs

A VM export creates a duplicate of a VM that you can import on the same or a different Hyper-V virtualization host. When performing an export, you can choose to export the VM, which includes all its VM checkpoints, or you can choose to export just a single VM checkpoint. Windows Server 2012 R2 and later support exporting a running VM. With Hyper-V in Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008, it is necessary to shut down the VM before performing an export.

Exporting a VM with all of its checkpoints will create multiple differencing disks. When you import a VM that was exported with checkpoints, these checkpoints will also be imported. If you import a VM that was running at the time of export, the VM is placed in a saved state. You can resume from this saved state, rather than having to restart the VM.

When importing a VM, you can choose from the following options:

  • Register The Virtual Machine In Place (Use The Existing ID) Use this option when you want to import the VM while keeping the VM files in their current locations. Because this method uses the existing VM ID, you can only use it if the original VM on which the export was created is not present on the host to which you wish to import the VM.

  • Restore The Virtual Machine (Use The Existing Unique ID) Use this option when you want to import the VM while moving the files to a new location; for example, you would choose this option if you are importing a VM that was exported to a network share. Because this method also uses the existing VM ID, you can only use it if the original VM on which the export was created is not present on the host to which you wish to import the VM.

  • Copy The Virtual Machine (Create A New Unique ID) Use this method if you want to create a separate clone of the exported VM. The exported files will be copied to a new location, leaving the original exported files unaltered. A new VM ID is created, meaning that the cloned VM can run concurrently on the same virtualization host as the original progenitor VM. When importing a cloned VM onto the same virtualization host as the original progenitor VM, ensure that you rename the newly imported VM; otherwise you may confuse the VMs.
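
For example, the following commands sketch exporting a VM and then importing it as a clone with a new unique ID; the paths are placeholders, and the .vmcx file name is the GUID that Hyper-V generates during export:

# Export the VM, including its checkpoints, to a local folder
Export-VM -Name Alpha-Test -Path D:\Exports

# Import a copy of the exported VM with a new unique ID
Import-VM -Path 'D:\Exports\Alpha-Test\Virtual Machines\<GUID>.vmcx' -Copy -GenerateNewId -VirtualMachinePath D:\VMs -VhdDestinationPath D:\VMs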

Configure Hyper-V network adapter

Generation 1 VMs support two types of network adapters: synthetic network adapters and legacy network adapters. A synthetic network adapter uses drivers that are provided when you install integration services in the VM operating system. If a VM operating system doesn’t have these drivers or if integration services are not available for this operating system, then the network adapter will not function. Synthetic network adapters are unavailable until a VM operating system that supports them is running. This means that you can’t perform a PXE boot from a synthetic network adapter if you have configured a Generation 1 VM.

Legacy network adapters emulate a physical network adapter, similar to a multiport DEC/Intel 21140 10/100TX 100 MB card. Many operating systems, including those that do not support virtual machine integration services, support this network adapter. This means that if you want to run an operating system in a VM that doesn’t have virtual machine integration services support—such as a version of Linux or BSD that isn’t officially supported for Hyper-V—you’ll need to use a legacy network adapter because it is likely to be recognized by the guest VM operating system.

Legacy network adapters on Generation 1 VMs also function before the VM guest operating system is loaded. You could use a legacy network adapter to PXE boot a Generation 1 VM to deploy an operating system through Windows Deployment Services (WDS).

Generation 2 VMs don’t separate synthetic and legacy network adapters and only have a single network adapter type. Generation 2 VMs support PXE booting from this single network adapter type. Generation 2 VMs also support “hot add” network adapters, allowing you to add or remove network adapters to or from a virtual machine while it is running. It is important to remember that only recent Windows client and server operating systems and only certain Linux operating systems are supported as Generation 2 VMs.

Virtual machine MAC addresses

By default, VMs running on Hyper-V hosts use dynamic MAC addresses. Each time a VM is powered on, it will be assigned a MAC address from a MAC address pool. You can configure the properties of the MAC address pool through the MAC Address Range settings available through Virtual Switch Manager.

When you deploy operating systems on physical hardware, you can use two methods to ensure that the computer is always assigned the same IP address configuration. The first method is to assign a static IP address from within the virtualized operating system. The second is to configure a DHCP reservation that always assigns the same IP address configuration to the MAC address associated with the physical computer’s network adapter.

This won’t work with Hyper-V VMs in their default configuration because the MAC address may change if you power the VM off and then on. Rather than configure a static IP address using the VM’s operating system, you can instead configure a static MAC address on a per-virtual network adapter basis. This will ensure that a VM’s virtual network adapter retains the same MAC address whether the VM is restarted or even if the VM is migrated to another virtualization host.

To configure a static MAC address on a per-network adapter basis, edit the network adapter’s advanced features. When entering a static MAC address, you will need to select a MAC address manually. You shouldn’t use one from the existing MAC address pool because there is no way for the current virtualization hosts or other virtualization hosts on the same subnet to check whether a MAC address that is to be assigned dynamically has already been assigned statically.
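
For example, the following command sketches assigning a locally administered MAC address (chosen arbitrarily, outside the host's dynamic pool) to the network adapter of a hypothetical VM:

# Assign a static MAC address to the VM's network adapter
Set-VMNetworkAdapter -VMName Alpha-Test -StaticMacAddress 020000AABB01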

Network isolation

Hyper-V supports VLAN (virtual local area network) tagging at both the network adapter and the virtual switch levels. VLAN tags allow the isolation of traffic for hosts connected to the same network by creating separate broadcast domains. Enterprise hardware switches also support VLANs as a way of partitioning network traffic. To use VLANs with Hyper-V, the virtualization hosts’ network adapter must support VLANs. A VLAN ID has 12 bits, which means you can configure 4,095 VLAN IDs.

You configure VLAN tags at the virtual network adapter level by selecting Enable Virtual LAN Identification in the Virtual Network Adapter Properties dialog box. VLAN tags applied at the virtual switch level override VLAN tags applied at the virtual network adapter level. To configure VLAN tags at the virtual switch level, select the Enable Virtual LAN Identification For Management Operating System option and specify the VLAN identifier.
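
For example, the following command sketches tagging a hypothetical VM's network adapter with VLAN ID 42 in access mode:

# Place the VM's traffic on VLAN 42
Set-VMNetworkAdapterVlan -VMName Alpha-Test -Access -VlanId 42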

Optimizing network performance

You can optimize network performance for VMs hosted on Hyper-V in a number of ways. For example, you can configure the virtualization host with separate network adapters connected to separate subnets. You do this to separate network traffic related to the management of the Hyper-V virtualization host from network traffic associated with hosted VMs. You can also use NIC teaming on the Hyper-V virtualization host to provide increased and fault-tolerant network connectivity. You’ll learn more about NIC teaming later in this chapter in the section “Configure NIC teaming.”

Bandwidth Management

An additional method of optimizing network performance is to configure bandwidth management at the virtual network adapter level. Bandwidth management allows you to specify a minimum and a maximum traffic throughput for a virtual network adapter. The minimum bandwidth allocation is an amount that Hyper-V reserves for the network adapter. For example, if you set the minimum bandwidth allocation to 10 Mbps for each VM, Hyper-V ensures that each VM's adapter always has at least that much bandwidth available; a VM that needs more can increase its utilization as long as the combined minimum allocations of all VMs hosted on the server can still be satisfied. Maximum bandwidth allocations specify an upper limit for bandwidth utilization. By default, no minimum or maximum limits are set on virtual network adapters.

You configure bandwidth management by selecting the Enable Bandwidth Management option on a virtual network adapter and specifying a minimum and maximum bandwidth allocation in megabits per second (Mbps).
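
As a hedged sketch, the following command applies example limits to a hypothetical VM's network adapter; the Set-VMNetworkAdapter parameters take values in bits per second:

# Reserve roughly 10 Mbps and cap the adapter at roughly 100 Mbps
Set-VMNetworkAdapter -VMName Alpha-Test -MinimumBandwidthAbsolute 10000000 -MaximumBandwidth 100000000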

SR-IOV

SR-IOV (Single Root I/O Virtualization) increases network throughput by bypassing a virtual switch and sending network traffic straight to the VM. When you configure SR-IOV, the physical network adapter is mapped directly to the VM. As such, SR-IOV requires that the VM’s operating system include a driver for the physical network adapter. You can only use SR-IOV if the physical network adapter and the network adapter drivers used with the virtualization host support the functionality. You can only configure SR-IOV for a virtual switch during switch creation. Once you have an SR-IOV enabled virtual switch, you can then enable SR-IOV on the virtual network adapter that connects to that switch.
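
For example, the following commands sketch creating an SR-IOV capable external switch and then enabling SR-IOV on a VM's network adapter; the switch name, physical adapter name, VM name, and weight value are assumptions:

# SR-IOV can only be enabled when the switch is created
New-VMSwitch -Name SRIOV-Switch -NetAdapterName "Ethernet 2" -EnableIov $true

# A non-zero IovWeight enables SR-IOV on the VM's network adapter
Set-VMNetworkAdapter -VMName Alpha-Test -IovWeight 50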

Dynamic virtual machine queue

Dynamic Virtual Machine Queue is an additional technology that you can use to optimize network performance. When a VM is connected through a virtual switch to a network adapter that supports Virtual Machine Queue and Virtual Machine Queue is enabled on the virtual network adapter’s properties, the physical network adapter can use Direct Memory Access (DMA) to forward traffic directly to the VM. With Virtual Machine Queue, network traffic is processed by the CPU assigned to the VM rather than by the physical network adapter used by the Hyper-V virtualization host. Dynamic Virtual Machine Queue automatically adjusts the number of CPU cores used to process network traffic. Dynamic Virtual Machine Queue is automatically enabled on a virtual switch when you enable Virtual Machine Queue on the virtual network adapter.
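
As a minimal sketch, a non-zero VmqWeight value enables Virtual Machine Queue on a hypothetical VM's network adapter (a value of 0 disables it):

# Enable VMQ for the VM's network adapter
Set-VMNetworkAdapter -VMName Alpha-Test -VmqWeight 100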

Configure NIC teaming

NIC teaming allows you to aggregate bandwidth across multiple network adapters while also providing a redundant network connection in the event that one of the adapters in the team fails. NIC teaming allows you to consolidate up to 32 network adapters and to use them as a single network interface. You can configure NIC teams using adapters that are from different manufacturers and that run at different speeds (though it’s generally a good idea to use the same adapter make and model in production environments).

You can configure NIC teaming at the virtualization host level if the virtualization host has multiple network adapters. The drawback is that you can’t configure NIC teaming at the host level if the network adapters are configured to use SR-IOV. If you want to use SR-IOV and NIC teaming, create the NIC team instead in the VM. You can configure NIC teaming within VMs by adding adapters to a new team using the Server Manager console or the New-NetLbfoTeam PowerShell cmdlet.

When configuring NIC teaming in a VM, ensure that each virtual network adapter that will participate in the team has MAC address spoofing enabled.
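
For example, the following commands sketch this configuration; the team name, adapter names, and VM name are assumptions. The first command runs on the virtualization host, and the second runs inside the VM:

# On the host: allow the VM's adapters to spoof MAC addresses
Set-VMNetworkAdapter -VMName Alpha-Test -MacAddressSpoofing On

# Inside the VM: create a switch-independent team from the two virtual adapters
New-NetLbfoTeam -Name GuestTeam -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent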

Configure Hyper-V switch

Hyper-V virtual switches, called Hyper-V virtual networks in previous versions of Hyper-V, represent network connections to which the Hyper-V virtual network adapters can connect. You can configure three types of Hyper-V virtual switches: external switches, internal switches, and private switches.

External switches

An external switch connects to a physical or wireless network adapter. Only one virtual switch can be mapped to a specific physical or wireless network adapter or NIC team. For example, if a virtualization host had four physical network adapters configured as two separate NIC teams, you could configure two external virtual switches. If a virtualization host had three physical network adapters that did not participate in any NIC teams, you could configure three external virtual switches. VMs connected to the same external switch can communicate with each other as well as external hosts connected to the network to which the network adapter mapped to the external switch is connected. For example, if an external switch is connected to a network adapter that is connected to a network that can route traffic to the internet, a VM connected to that external virtual switch will also be able to connect to hosts on the internet. When you create an external switch, a virtual network adapter that maps to this switch is created on the virtualization host unless you clear the option that allows the management operating system to share the network adapter. If you clear this option, the virtualization host will not be able to communicate through the network adapter associated with the external switch.

Internal switches

An internal switch allows communication between the VM and the virtualization host. All VMs connected to the same internal switch are able to communicate with each other and the virtualization host. For example, you could successfully initiate an RDP connection from the virtualization host to an appropriately configured VM or use the Test-NetConnection PowerShell cmdlet from a PowerShell prompt on the virtualization host to get a response from a VM connected to an internal network connection. VMs connected to an internal switch are unable to use that virtual switch to communicate with hosts on a separate virtualization host that are connected to an internal switch with the same name.

Private switches

VMs connected to the same private switch on a VM host can communicate with one another, but they cannot communicate directly with the virtualization host. Private switches only allow communication between VMs on the same virtualization host. For example, say VMs alpha and beta are connected to private switch p_switch_a on virtualization host h_v_one, and VM gamma is connected to private switch p_switch_a on virtualization host h_v_two. VMs alpha and beta will be able to communicate with each other, but they won't be able to communicate with h_v_one or VM gamma.
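
As a hedged sketch, the following commands create one switch of each type; the switch names and the physical adapter name are assumptions:

# External switch bound to a physical adapter, shared with the management OS
New-VMSwitch -Name ExternalSwitch -NetAdapterName "Ethernet" -AllowManagementOS $true

# Internal switch: VMs and the virtualization host can communicate
New-VMSwitch -Name InternalSwitch -SwitchType Internal

# Private switch: only VMs on this host can communicate with each other
New-VMSwitch -Name PrivateSwitch -SwitchType Private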

Exam Tip

The Validate-DCB cmdlet allows you to verify that Switch Embedded Teaming (SET) and remote direct memory access (RDMA) are configured properly.

Skill 3.2: Create and manage containers

A container is a portable isolated application execution environment. Containers allow developers to bundle an application and its dependencies in a single image. This image can easily be exported, imported, and deployed on different container hosts, from a developer’s laptop computer, to Server Core on bare-metal hardware, and eventually to being hosted and run on Azure. Because an application’s dependencies are bundled with the application within a container, IT operations can deploy a container as soon as it is handed off without worrying if an appropriate prerequisite software package has been installed or if a necessary setting has been configured. Because a container provides an isolated environment, a failure that occurs within one container will only impact that container and will not impact other containerized applications running on the container host.

Understand container concepts

Containers can be conceptually challenging. To understand how containers work, you first need to understand some of the terminology involving containers. While some of these concepts will be fleshed out more completely later in this chapter, you should understand the following terms at a high level:

  • Container images A container image is a template from which a container is generated. There are two general types of container images: a container base OS image, and container images that are usually created for a specific workload. The difference between these image types on Windows Server is as follows:

    • Container base OS image This is an image of the operating system on which other container images are built.

    • Container image A container image stores changes made to a running container base OS image or another container image. For example, you can start a new container from a container base OS image, make modifications such as installing Java or a Windows feature, and then save those modifications as a new container image. The new container image only stores the changes you make, and therefore it is much smaller than the parent container base OS image. You can then create an additional container image that stores modifications made to the image that already has Java and the Windows feature installed. Each container image only stores the changes made to the image from which it was run.

  • Container instance When you run a container, a copy of the container image you are starting is created and runs on the container host. You can make changes to the container instance but they won’t be written back to the original container image. Container instances can be thought of as a disposable temporary version of the container. If problems occur with a container instance, you just end that instance and create a new instance to replace it. You can create many instances of a container to run in parallel with each other. The orchestration of large fleets of containers can be accomplished using technologies such as Kubernetes, which is beyond the scope of the AZ-800 exam.

  • Sandbox The sandbox is the environment in which you can make modifications to an existing container image before you commit the changes to create a new container image. If you don’t commit those changes to a new image, those changes will be lost when the container is removed.

  • Image dependency A new container image has the container image from which it was created as a dependency. For example, you can create a container image named WebServer-1.0 that has the IIS role installed from the Server Core base OS image; the Server Core base OS image is then a dependency for the WebServer-1.0 image. This dependency is very specific, and you can't use an updated version of the Server Core base OS image as a replacement for the version that you used when creating the WebServer-1.0 image. You can then export the WebServer-1.0 image, which records only the changes made, to another container host. You can start the image there as long as the container base OS image it depends on is present on that container host. You can have multiple images in a dependency chain.

  • Container host A container host is a computer that runs containers. A container host can be virtual, physical, or even a cloud-based platform as a service (PaaS) solution.

  • Container registries Container registries are central storehouses of container images. Though it’s possible to transfer containers or copy them to file shares using tools like FTP, common practice is to use a container registry as a repository for container images. Public container registries, such as the one that hosts the base OS images for Microsoft, also store previous versions of the container base OS images. This allows you to retrieve earlier versions of the container base OS image that other images may depend on. Container registries can be public or private. When working with your own organization’s container images, you have the option of creating and maintaining a private container registry.

Windows Server supports two container isolation modes: process isolation mode and Hyper-V isolation mode. In previous versions of Microsoft documentation, these isolation modes were occasionally called “Windows Server containers” and “Hyper-V containers.” Windows 10 and Windows 11 support only Hyper-V isolation mode. Windows Server 2016 and later support process isolation and Hyper-V isolation modes.

Process isolation

Process isolation mode provides a container with process and namespace isolation. Containers running in this isolation mode share a kernel with all other containers running on the container host. This is similar to the manner in which containers run on Linux container hosts. Process isolation is the default mode used when running a container. If you want to ensure that process isolation is being used, start the container with the --isolation=process option.

Hyper-V isolation

A container running in Hyper-V isolation mode runs in a highly optimized virtual machine that also provides an isolated application execution environment. Hyper-V isolation mode containers don’t share the kernel with the container host, nor do they share the kernel with other containers on the same container host. You can only use Hyper-V isolation mode containers if you have enabled the Hyper-V role on the container host. If the container host is a Hyper-V virtual machine, you must enable nested virtualization. By default, a container uses the process isolation mode. You can start a container in Hyper-V isolation mode by using the --isolation=hyperv option.

For example, to create a Hyper-V isolation mode container from the Windows Server Core base OS image, issue this command:

docker run -it --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2022 cmd

Installing and configuring Docker

Containers on Windows Server are managed using the Docker engine. The advantage of this approach is that the command syntax of Docker on Windows is almost identical to that of Docker on Linux. Although there is a community-maintained PowerShell module for managing containers on Windows Server available on GitHub, PowerShell is not the primary tool for Windows Server container management, and relatively few administrators use it for this purpose.

For Windows Server administrators unfamiliar with Docker syntax, the commands include extensive help support. Typing Docker at the command prompt will provide an overview of the high-level Docker functionality. You can learn more about specific commands by using the --help command parameter. For example, the following command will provide information about the Docker Image command:

docker image --help
Installing Docker

Docker is not included with the Windows Server installation media, and you don’t install it as a role or feature. Instead, you have to install Docker from the PowerShell Gallery. Although this is unusual for an important role on a Windows Server operating system, it does have the advantage of ensuring that you have the latest version of Docker.

The simplest way to install Docker is to first ensure that your Windows Server computer has internet connectivity. You then run the following command from an elevated PowerShell prompt:

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

Next, install the most recent version of Docker by executing the following command, ensuring that you answer Yes when prompted to install software from a source not marked as trusted:

Install-Package -Name docker -ProviderName DockerMsftProvider

You’ll then need to restart the computer that will function as the container host to complete the installation of Docker. You can update the version of Docker when a new one becomes available by rerunning the following command:

Install-Package -Name docker -ProviderName DockerMsftProvider
Daemon.json

If you want to change the default Docker Engine settings, such as whether to create a default NAT network, you need to create and configure the Docker Engine configuration file. This file doesn’t exist by default. When it is present, the settings in the file override the Docker Engine’s default settings.

You should create this file in the C:\ProgramData\Docker\config folder. Before editing the daemon.json file, stop the Docker service using the following command:

Stop-Service docker

You only need to add settings that you want to change to the configuration file. For example, if you only want to configure the Docker Engine to accept incoming connections on port 2701, you add the following lines to daemon.json:

{
    "hosts": ["tcp://0.0.0.0:2701"]
}

The Windows Docker Engine doesn’t support all possible Docker configuration file options. The ones that you can configure are shown here:

{
    "authorization-plugins": [],
    "dns": [],
    "dns-opts": [],
    "dns-search": [],
    "exec-opts": [],
    "storage-driver": "",
    "storage-opts": [],
    "labels": [],
    "log-driver": "",
    "mtu": 0,
    "pidfile": "",

    "cluster-store": "",
    "cluster-advertise": "",
    "debug": true,
    "hosts": [],
    "log-level": "",
    "tlsverify": true,
    "tlscacert": "",
    "tlscert": "",
    "tlskey": "",
    "group": "",
    "default-ulimits": {},
    "bridge": "",
    "fixed-cidr": "",
    "raw-logs": false,
    "registry-mirrors": [],
    "insecure-registries": [],
    "disable-legacy-registry": false
}

These options allow you to do the following when starting the Docker Engine:

  • authorization-plugins Specify the authorization plug-ins the Docker Engine should load

  • dns Specify the DNS server the containers should use for name resolution

  • dns-opts Specify the DNS options you want to use

  • dns-search Specify the DNS search domains you want to use

  • exec-opts Specify runtime execution options

  • storage-driver Specify the storage driver

  • storage-opts Specify the storage driver options

  • labels Specify Docker Engine labels

  • log-driver Specify the default driver for the container logs

  • mtu Specify the container network MTU

  • pidfile Specify the path to use for the daemon PID file

  • group Specify the local security group that has permissions to run Docker commands

  • cluster-store Specify cluster store options

  • cluster-advertise Specify the cluster address you want to advertise

  • debug Enable debug mode

  • hosts Specify daemon sockets you want to connect to

  • log-level Specify level of logging detail

  • tlsverify Use TLS and perform verification

  • tlscacert Specify which certificate authorities to trust

  • tlscert Specify the location of the TLS certificate file

  • tlskey Specify the location of the TLS key file

  • group Specify the UNIX socket group

  • default-ulimits Specify default ulimits for containers

  • bridge Attach containers to network bridge

  • fixed-cidr IPv4 subnet for static IP address

  • raw-logs Specify the log format you want to use

  • registry-mirrors Specify the preferred registry mirror

  • insecure-registries Allow insecure registry communication

  • disable-legacy-registry Block contacting legacy registries

Once you have made the necessary modifications to the daemon.json file, you should start the Docker service by running this PowerShell command:

Start-Service docker

Manage Windows Server container images

Container registries are repositories for the distribution of container images. From the Microsoft Container registry, you can retrieve the following Microsoft published container images:

  • Windows Server Core This is the base image for the Windows Server Core container operating system. This image supports traditional .NET Framework applications. You can retrieve this image using the following Docker command:

    docker pull mcr.microsoft.com/windows/servercore:ltsc2022
  • Nanoserver This is the base image for the Nano Server container operating system. This is a Windows Server image with all unnecessary elements stripped out. It is suitable for hosting .NET Core applications. The Nano Server container image is what became of the Nano Server installation option that was available with the RTM version of Windows Server 2016. Because the image is stripped down to the essentials, it is far smaller than the Windows Server Core image and can be deployed and run more quickly. You can retrieve this image using the following Docker command:

    docker pull mcr.microsoft.com/windows/nanoserver:ltsc2022
  • Windows This is an image that provides the full Windows API set but doesn’t include all the server roles and features that are available in the Server Core image. You should only use this option if the application you are trying to host has a dependency that is not included in the Windows Server Core container image. Use this image with Windows Server 2016 and Windows Server 2019 container hosts, but not with Windows Server 2022 container hosts if you are using process isolation. You can run this image on Windows Server 2022 hosts if you’re using Hyper-V isolation. You can retrieve this image using the following Docker command:

    docker pull mcr.microsoft.com/windows:20H2
  • Windows Server This is an image that provides the full Windows API set but doesn’t include all the server roles and features that are available in the Server Core image. You should only use this option if the application you are trying to host has a dependency that is not included in the Windows Server Core container image. This version of the container image only works on Windows Server 2022 container hosts. You can retrieve this image using the following Docker command:

    docker pull mcr.microsoft.com/windows/server:ltsc2022

In cases where multiple images exist, such as Windows Server Core and Nano Server, you can use the -a option with the Docker pull command to retrieve all images. This can be helpful if you don’t know the image ID of the specific image that you wish to retrieve.

When you pull an image from a registry, the action will also download any dependency images that are required. For example, if you pull an image that was built on a specific version of the Windows Server Core base image and you don’t have that image on the container host, that image will also be downloaded from a container registry.

You can view a list of images that are installed on a Windows Server container host by using the following command:

docker image list

You can use Windows Admin Center to view a list of container images that are installed on a Windows Server container host. The Windows Admin Center also allows you to delete images that are installed on a Windows Server container host.

Service accounts for Windows containers

Although containers based on the Server Core and Nano Server operating systems have most of the same characteristics as a virtual machine or a bare-metal deployment of the Server Core or Nano Server versions of Windows Server, one thing that you can’t do with containers is to join them to a domain. This is because containers are supposed to be temporary, rather than permanent, and domain-joining them would clog up Active Directory with unnecessary computer accounts.

While containers can’t be domain-joined, it is possible to use a group-managed service account (gMSA) to provide one or more containers with a domain identity similar to that used by a domain-joined device. Performing this task requires downloading the Windows Server Container Tools and ensuring that the container host is a member of the domain that hosts the gMSA. When you perform this procedure, the container’s LocalSystem and Network Service accounts use the gMSA. This gives the container the identity represented by the gMSA.

To configure gMSA association with a container, perform the following steps:

  1. Ensure that the Windows Server container host is domain-joined.

  2. Add the container host to a specially created domain security group. This domain security group can have any name.

  3. Create a gMSA and grant gMSA permission to the specially created domain security group of which the container host is a member.

  4. Install the gMSA on the container host.

  5. Use the New-CredentialSpec cmdlet on the container host to generate the gMSA credentials in a file in JSON format. This cmdlet is located in the CredentialSpec PowerShell module, which is available as a part of the Windows Server Container Tools. For example, if you created a gMSA named MelbourneAlpha, you would run the following command:

    New-CredentialSpec -Name MelbourneAlpha -AccountName MelbourneAlpha
  6. You can verify that the credentials have been saved in JSON format by running the Get-CredentialSpec cmdlet. By default, credentials are stored in the C:\ProgramData\Docker\CredentialSpecs folder.

  7. Start the container using the option --security-opt "credentialspec=" and specify the JSON file containing the credentials associated with the gMSA. For example, run the following command if the credentials are stored in the file twt_webapp01.json:

    docker run --security-opt "credentialspec=file://twt_webapp01.json" --hostname webapp01 -it mcr.microsoft.com/windows/servercore:ltsc2019 powershell

Once you’ve configured the container to indirectly use the gMSA for its Local System and Network Service accounts, you can give the container permission to access domain resources by providing access to the gMSA. For example, if you want to provide the container with access to the contents of a file share hosted on a domain member, you can configure permissions so that the gMSA has access to the file share.

Updating images

One of the concepts that many IT operations personnel find challenging is that you don’t update a container that is deployed in production. Instead, you create a fresh container from the original container image, update that container, and then save that updated container as a new container image. You then remove the container in production and deploy a new container to production that is based on the newly updated image.

For example, suppose you have a container that hosts a web app deployed from a container image named WebApp1 that is deployed in production. The developers in your organization release an update to WebApp1 that involves changing some existing settings. Rather than modifying the container in production, you start another container from the WebApp1 image, modify the settings, and then create a new container image named WebApp2. You then deploy a new container into production from the WebApp2 container image and remove the original container (that hasn’t been updated).

Although you can manually update your container base OS images by applying software updates, Microsoft releases updated versions of the container base images each time a new software update is released. After a new container OS base image is released, you or your organization’s developers should update existing images that are dependent on the container OS base image. Regularly updated container base OS images provide an excellent reason for eventually moving toward using Dockerfiles to automate the process of building containers. If you have a Dockerfile configured for each container image used in your organization, updating your container base OS images when a new container base OS image is released is a quick, painless, and automated process.

Managing container images

To save a Docker image for transfer to another computer, use the docker save command. When you save a Docker image, you save it in TAR format. For example, to export the container image tailwind_app to the C:\archive folder, issue this command:

docker save tailwind_app -o C:\archive\tailwind_app.tar

You can load a Docker image from a saved image using the docker load command. For example, to load the container image from the C:\archive\tailwind_app.tar file created earlier, use this command:

docker load -i C:\archive\tailwind_app.tar

When you have multiple container images that have the same name, you can remove an image by using the image ID. You can determine the image ID by running the command docker images. You can also use Windows Admin Center to view this information. You can’t remove a container image until the last container instance created from that image either directly or indirectly has been deleted.

You then remove the image by using the command docker rmi with the image ID. For example, to remove the container image with the ID a896e5590871, use this command:

docker rmi a896e5590871

You can also remove a container image by selecting the image in the list of containers in Windows Admin Center and selecting Delete.

While you can transfer container images between hosts by exporting and importing them as TAR files, the usual way to distribute container images is to use a container registry. A container registry is a service that allows you to store and distribute container images. Public registries are available for anyone to interact with, and private registries allow you to limit access to authorized individuals.

Before uploading (also termed pushing) an image to a registry, it’s necessary to tag the container image with the fully qualified name of the registry login server. For example, to tag the image tailwind-app when you are using the Azure Container Registry instance twt-registry.azurecr.io, run the command

docker tag tailwind-app twt-registry.azurecr.io/tailwind-app

You upload a container image to a container registry using the docker push command. For example, to upload a container image named tailwind-app to registry twt-registry.azurecr.io, use the command

docker push twt-registry.azurecr.io/tailwind-app

Manage container instances

Container instances are separate copies of the container image that have their own existence. Although the analogy is far from perfect, each container instance is like a separate virtual machine, with its own IP address and ability to interact with other hosts on the network.

Starting a container

You create a new container instance by using the docker run command and specifying the container image that will form the basis of the container. You can start a container instance by selecting a container image that is present locally or by specifying a container image that is stored in a registry. For example, to start a container from the Windows Server Core image stored in the Microsoft Container Registry, run this command:

docker run mcr.microsoft.com/windows/servercore:ltsc2022

You can start a container and run an interactive session either by specifying cmd.exe or PowerShell.exe by using the -it option with docker run. Interactive sessions allow you to directly interact with the container through the command line from the moment the container starts. Detached mode starts a container, but it doesn’t start an interactive session with that container.

For example, to start a container from the Windows Server Core image and to enter an interactive PowerShell session within that container once it’s started, use this command:

docker run -it mcr.microsoft.com/windows/servercore:ltsc2022 powershell.exe

If you want to start an interactive session on a container that you started in detached mode, use the command docker exec -it <containername> powershell.exe.

By default, containers use network address translation. This means that if you are running an application or service on the container that you want to expose to the network, you’ll need to configure port mapping between the container host’s network interface and the container. For example, if you wanted to start a container in detached mode, mapping port 8080 on the container host to port 80 on the container, you would run the following command:

docker run -d -p 8080:80 mcr.microsoft.com/windows/servercore:ltsc2022

You can verify which containers are running by using the docker ps command or by using Windows Admin Center. The problem with the simple docker ps command option is that this will only show you the running containers and won’t show you any that are in a stopped state. You can see which containers are on a container host, including containers that aren’t currently running, by using the docker ps -a command. If you have loaded the Containers extension into Windows Admin Center, you can also view the containers that are on a Windows Server container host using Windows Admin Center. In some cases, it will be necessary to start a stopped container. You can start a stopped container using the docker start <containername> command.

One thing that you’ll notice about containers is that they appear to be assigned random names, such as sarcastic_hedgehog, fraggle_advocate, dyspeptic_hamster, and sententious_muppet. Docker assigns random names rather than asking you for one because containers are a more ephemeral type of application host than a VM; because they are likely to only have a short life span, it isn’t worth assigning any name to them that you’d need to remember. The reason for the structure of the random names is that they are easy to remember in the short term, which makes containers that you must interact with on a short-term basis easier to address than when using hexadecimal container IDs.

Modifying a running container

After you have a container running, you can enter the container and make the modifications that you want to make to ensure that the application the container hosts will run correctly. This might involve creating directories, using the dism.exe command to add roles, or using the wget PowerShell alias to download and install binaries such as Java. For example, the following code, when run from inside a container, downloads and installs an older version of Java into a container based on the Server Core base OS container:

wget -Uri "http://javadl.sun.com/webapps/download/AutoDL?BundleId=107944" -OutFile javaInstall.exe -UseBasicParsing
REG ADD HKLM\Software\Policies\Microsoft\Windows\Installer /v DisableRollback /t REG_DWORD /d 1 | Out-Null
./javaInstall.exe /s INSTALLDIR=C:\Java REBOOT=Disable | Out-Null

Once you are finished modifying the container, you can type exit to exit the container. A container must be in a shutdown state before you can capture it as a container image. You use the docker stop <containername> command to shut down a container.
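
As a hedged sketch, stopping a container and then capturing it as a new image is typically done with docker stop followed by docker commit; the container and image names here are placeholders:

# Shut the container down, then capture it as a new image
docker stop webapp_build
docker commit webapp_build webserver-1.0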

Configure container networking

Each container has a virtual network adapter. This virtual network adapter connects to a virtual switch, through which inbound and outbound traffic is sent. Networking modes determine how network adapters function in terms of IP addressing, meaning whether they use NAT or are connected to the same network as the container host.

Windows containers support the following networking modes:

  • NAT Each container is assigned an IP address from the private 172.16.0.0/12 address range. When using NAT, you can configure port forwarding from the container host to the container endpoint. If you create a container without specifying a network, the container will use the default NAT network. The Docker service creates its own default NAT network. When the container host reboots, the NAT network will not be created until the Docker service has restarted. Any container that was attached to the existing NAT network and that is configured to persist after reboot (for example, because it uses the --restart always option) will reattach to the NAT network that is newly created when the Docker service restarts.

  • Transparent Each container endpoint connects to the physical network. The containers can have IP addresses assigned statically or through DHCP.

  • Overlay Use this mode when you have configured the Docker engine to run in swarm mode. Overlay mode allows container endpoints to be connected across multiple container hosts.

  • L2 Bridge Container endpoints are on the same IP subnet used by the container host. IP addresses must be assigned statically. All containers on the container host have the same MAC address.

  • L2 Tunnel This mode is only used when you deploy containers in Azure.

You can list available networks using the docker network ls command. You can also use the Networks blade of the Windows Admin Center to view available container networks. You can create multiple container networks on a container host, but you need to keep in mind the following limitations:

  • If you are creating multiple networks of the transparent or L2 bridge type, you need to ensure that each network has a separate network adapter.

  • If you create multiple NAT networks on a container host, the additional NAT network prefixes will be partitioned from the container host’s NAT network address space. (By default, this is 172.16.0.0/12.) For example, these will be 172.16.1.0/24, 172.16.2.0/24, and so on.

NAT

Network address translation (NAT) allows each container to be assigned an address in a private address space, while connections to the container host’s network use the container host’s IP address. The default NAT address range for containers is 172.16.0.0/16. In the event that the container host’s IP address is in the 172.16.0.0/12 range, you will need to alter the NAT IP prefix.
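As a minimal sketch, you can create an additional NAT network by specifying the nat driver with the docker network create command, assuming the Docker service has already been configured to allow user-defined NAT networks (see the daemon.json note that follows); the network name CustomNat and the address prefix shown here are illustrative:

docker network create -d nat --subnet=172.16.100.0/24 --gateway=172.16.100.1 CustomNat
docker network ls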

You can connect a container to a custom NAT network when the container is run by using the --network parameter and specifying the custom NAT network name. To do so, you need to have the "bridge" : "none" option specified in the daemon.json file. The command to run a container and join it to the CustomNat network created above is:

docker run -it --network=CustomNat <ContainerImage> <cmd>

Port mappings allow ports on the container host to be mapped to ports on the container. For example, to map port 8080 on the container host to port 80 on a new container created from the windowsservercore image and to run powershell.exe interactively on the container, create the container using the following command:

docker run -it -p 8080:80 mcr.microsoft.com/windows/servercore:ltsc2022 powershell.exe

Port mappings must be specified when the container is created or when it is in a stopped state. You can specify them using the -p parameter, or by including the EXPOSE instruction in a Dockerfile and then running the container with the -P parameter. If you do not specify a port on the container host but you do specify one on the container itself, a random port will be assigned. For example, run this command:

docker run -itd -p 80 mcr.microsoft.com/windows/servercore:ltsc2022 powershell.exe

A random port on the container host will be mapped to port 80 on the container. You can determine which port is randomly assigned by using the docker ps command. When you configure port mapping, firewall rules that allow the traffic through are created automatically on the container host.

Transparent

Transparent networks allow each container endpoint to connect to the same network as the container host. You can use the transparent networking mode by creating a container network that has the driver name transparent. The driver name is specified with the -d option, as shown in this command:

docker network create -d transparent TransparentNetworkName

If the container host is a virtual machine running on a Hyper-V host and you want to use DHCP for IP address assignment, it’s necessary to enable MACAddressSpoofing on the VM network adapter. The transparent networking mode supports IPv6. If you are using transparent network mode, you can use a DHCPv6 server to assign IPv6 addresses to containers.
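For example, if the container host is a VM named ContainerHost01 (an illustrative name), the following PowerShell command, run on the Hyper-V host, enables MAC address spoofing on that VM's network adapter:

Set-VMNetworkAdapter -VMName ContainerHost01 -MacAddressSpoofing On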

If you want to manually assign IP addresses to containers, when you create the transparent network you must specify the subnet and gateway parameters. These network properties must match the network settings of the network to which the container host is connected.

For example, if your container host is connected to a network that uses the 192.168.30.0/24 network and uses 192.168.30.1 as the default gateway, you would create a transparent network that will allow static address assignment for containers on the network called TransNet by running this command:

docker network create -d transparent --subnet=192.168.30.0/24 --gateway=192.168.30.1 TransNet

Once the transparent network is created with the appropriate settings, you can specify an IP address for a container by using the --ip option. For example, to start a new container from the Windows Server Core image with an interactive command prompt and assign it the IP address 192.168.30.101 on the TransNet network, run this command:

docker run -it --network=TransNet --ip 192.168.30.101 mcr.microsoft.com/windows/servercore:ltsc2022 cmd.exe

Because containers attached to a transparent network are connected directly to the container host’s network, you don’t need to configure port mapping into the container.

Layer 2 Bridge

Layer 2 Bridge (L2 Bridge) networks are similar to transparent networks in that they allow containers to have IP addresses on the same subnets as the container host. They differ in that IP addresses must be assigned statically. This is because all containers on the container host that use an L2 Bridge network have the same MAC address.

When creating an L2 Bridge network, you must specify the network type as l2bridge. You must also specify subnet and default gateway settings that match the subnet and default gateway settings of the container host. For example, to create an L2 Bridge network named L2BridgeNet for the IP address range 192.168.88.0/24 and with the default gateway address 192.168.88.1, use the following command:

docker network create -d l2bridge --subnet=192.168.88.0/24 --gateway=192.168.88.1 L2BridgeNet

Create Windows Server container images

The key to creating new container images is configuring a running container instance in a desired way and then capturing that instance as a new image. There are a variety of methods you can use to do this, from performing manual configuration tasks interactively within the container instance to automating the build of container instances that are then captured as images.

Creating a new image from a container

Once the container is in the desired state and shut down, you can capture the container to a new container image. You do this using the docker commit <container_name> <new_image_name> command. For example, to commit the container named elegant_wombat to the image name tailwind_app, run this command:

docker commit elegant_wombat tailwind_app

After you have committed a container to a container image, you can remove the container using the docker rm command. For example, to remove the elegant_wombat container, issue this command:

docker rm elegant_wombat
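After committing, you can confirm that the new image exists and start a container from it. The following commands assume the tailwind_app image created above:

docker images tailwind_app
docker run -it tailwind_app powershell.exe
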
Using Dockerfiles

Dockerfiles are text files that allow you to automate the process of creating new container images. You use a Dockerfile with the docker build command to automate container creation, which is very useful when you need to create new container images from regularly updated base OS container images.

Dockerfiles have the elements shown in Table 3-1.

TABLE 3-1 Dockerfile elements

  • FROM Specifies the container image on which the new image is based. For example:

FROM mcr.microsoft.com/windows/servercore:ltsc2022

  • RUN Specifies commands to be run and captures the results into the new container image. For example:

RUN wget -Uri "http://javadl.sun.com/webapps/download/AutoDL?BundleId=107944" -outfile javaInstall.exe -UseBasicParsing
RUN REG ADD HKLM\Software\Policies\Microsoft\Windows\Installer /v DisableRollback /t REG_DWORD /d 1 | Out-Null
RUN ./javaInstall.exe /s INSTALLDIR=C:\Java REBOOT=Disable | Out-Null

  • COPY Copies files and directories from the container host filesystem to the filesystem of the container. For Windows containers, the destination path must use forward slashes. For example:

COPY example1.txt c:/temp/

  • ADD Can be used to add files from a remote source, such as a URL. For example:

ADD https://www.python.org/ftp/python/3.9.9/python-3.9.9.exe /temp/python-3.9.9.exe

  • WORKDIR Specifies a working directory for the RUN and CMD instructions.

  • CMD Specifies a command to be run when deploying an instance of the container image.

For example, the following Dockerfile will create a new container from the mcr.microsoft.com/windows/servercore:ltsc2022 image, create a directory named ExampleDirectory, and then install the iis-webserver feature:

FROM mcr.microsoft.com/windows/servercore:ltsc2022
RUN mkdir ExampleDirectory
RUN dism.exe /online /enable-feature /all /featurename:iis-webserver /NoRestart

To create a container image named example_image, change into the directory that hosts the Dockerfile (no extension) file, and run the following command:

docker build -t example_image .

Exam Tip

If you need to create a container image for an application that requires .NET Core, you should choose the Nano Server base image.

Skill 3.3: Manage Azure Virtual Machines that run Windows Server

This objective deals with managing and configuring Windows Server VMs that run in Azure. In hybrid environments, many organizations treat Azure IaaS VMs in the same manner they would Windows Server instances that are hosted in a traditional remote third-party datacenter. In this section you learn what you need to know about managing Windows Server IaaS VMs in Azure.

Administer IaaS VMs

In your on-premises environment, you would not give most users of a VM access to that VM in Hyper-V Manager; you would only allow them to connect directly to the VM using RDP or PowerShell. You should do the same thing in Azure since most people who use IaaS VMs in Azure don’t need to interact with them through the Azure console.

Azure IaaS VMs are only visible to users in the Azure portal if they have a role that grants them that right. The default Azure IaaS VM role-based access control (RBAC) roles are as follows:

  • Virtual Machine Contributor Users who hold this role can manage virtual machines through the Azure console and perform operations such as restarting and deleting the virtual machine. Membership in this role does not provide the user with access to the VM itself. It also does not provide access to the virtual network or storage account to which the VM is connected.

  • Virtual Machine Administrator Login If the VM is configured to allow login using Azure AD accounts, assigning this role grants the user local administrator privileges in the virtual machine. Users who hold this role can view the details of a VM in the portal but cannot change the VM’s properties.

  • Virtual Machine User Login Users who hold this role are able to view the details of a virtual machine in the Azure portal and can log in using their Azure AD account with user permissions. Users who hold this role cannot change the VM’s properties.

Users who aren’t members of the Virtual Machine Administrator Login or Virtual Machine User Login roles can still access IaaS VMs if they have a local account on the virtual machine. Put another way, just because you can’t see it in the portal doesn’t mean you can’t RDP to it if you have the IaaS VM’s network address and the correct ports are open.
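For example, the following Azure CLI command sketches assigning the Virtual Machine User Login role to a user for a single IaaS VM; the account name, resource group, VM name, and subscription placeholder are all illustrative and should be replaced with values from your environment:

az role assignment create --assignee "user@tailwindtraders.com" --role "Virtual Machine User Login" --scope /subscriptions/<subscription-id>/resourceGroups/ExampleRG/providers/Microsoft.Compute/virtualMachines/ExampleVM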

Manage data disks

Most Azure IaaS VM sizes deploy automatically with an operating system disk and a temporary disk. On Windows Server VMs the operating system disk will be assigned as volume C and the temporary disk will be assigned volume D. You can also add separate data disks to VMs, with the number and type of data disks available dependent on which VM size or SKU you select.

Although data stored on the temporary disk should persist when you perform a simple restart operation on an IaaS VM, data on the temporary disk may be lost during Azure maintenance events or when an IaaS VM is redeployed in a new size. You should assume that the data on a temporary disk may disappear, so make sure you don’t use the disk to store anything important that you may need in the future. By default, temporary disks are not encrypted.

You can perform the following actions with a data disk:

  • Attach data disks Data disks can be attached to an IaaS VM while the IaaS VM is running. If you have created a new data disk and attached it to a Windows Server IaaS VM, you’ll need to initialize and then format the disk before you can start to use it (see the sketch that follows this list).

  • Detach data disks You can detach data disks from an IaaS VM while the IaaS VM is running. Before performing this operation, you should take the disk offline using the Disk Management console, Windows Admin Center, the diskpart.exe command-line utility, or the Set-Disk PowerShell cmdlet.
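The following is a minimal sketch of attaching a new data disk and then preparing it inside the guest, assuming an illustrative resource group named ExampleRG, a VM named ExampleVM, and a new 128-GB disk. First, attach the disk with the Azure CLI:

az vm disk attach --resource-group ExampleRG --vm-name ExampleVM --name ExampleDataDisk --new --size-gb 128

Then, from a PowerShell session inside the VM, initialize and format the new disk:

Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"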

You can’t change the size of a virtual machine data disk attached to a running or shutdown (but not deallocated) IaaS VM. If you want to change the size of a data disk, you need to first detach it. If you don’t want to detach the disk, you should stop and deallocate the IaaS VM and then perform the resize operation.
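As a sketch of the deallocate-and-resize approach, using the same illustrative names as above, you could run the following Azure CLI commands:

az vm deallocate --resource-group ExampleRG --name ExampleVM
az disk update --resource-group ExampleRG --name ExampleDataDisk --size-gb 256
az vm start --resource-group ExampleRG --name ExampleVM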

Shared disks

Shared disks are a managed disk functionality that you can use to attach a single Azure managed disk to multiple IaaS VMs simultaneously. Shared disks allow you to create guest clusters from Azure IaaS VMs where the shared disk functions as shared storage. The functionality of Azure shared managed disks is conceptually similar to the functionality of shared virtual hard disks used with guest clusters on Hyper-V. Only ultra disks, premium SSDs, and standard SSDs can be configured as shared disks for IaaS VMs.

Disk images and snapshots

Managed disks support images and snapshots. An image of an IaaS VM can be created from a generalized IaaS VM (on Windows Server you would use sysprep.exe to perform the generalization operation) only when that IaaS VM has been deallocated. IaaS VM images include all disks attached to the IaaS VM and can be used to create new IaaS VMs.

Snapshots are point-in-time copies of an IaaS VM virtual hard disk. Snapshots can be used as a point-in-time backup or as a copy of the virtual machine for troubleshooting scenarios. Snapshots can be created of OS and data disks. You should only take snapshots of shutdown IaaS VMs to ensure that data is consistent across the disk since managed disk snapshots do not use Volume Shadow Copy services to ensure point-in-time consistency. Unlike disk images, the IaaS VM does not need to be deallocated for you to take a snapshot. Snapshots exist only at a per-disk level, which means that you shouldn’t use them in any situation where data is striped across multiple disks. You can create snapshots of an operating system disk and create a new IaaS VM from that snapshot.

IaaS VM encryption

Beyond storage service encryption, you can configure BitLocker disk encryption for Windows Server IaaS VMs. To support IaaS VM disk encryption, the Windows Server IaaS VM must meet the following conditions. (By default, IaaS VMs do meet these conditions unless you remove the default network security group rules.) A sketch of enabling encryption with the Azure CLI follows the list.

  • The server must be able to connect to the key vault endpoint so that the Windows Server IaaS VM can store and retrieve encryption keys.

  • The server must be able to connect to an Azure Active Directory endpoint at login.microsoftonline.com so that it can retrieve a token that allows it to connect to the key vault that holds the encryption keys.

  • The server must be able to connect to an Azure storage endpoint that hosts the Azure extension repository and the Azure storage account that stores the VM’s virtual hard disks.

  • If the Windows Server IaaS VM is domain-joined, do not configure BitLocker-related Group Policies other than the Configure User Storage Of BitLocker Recovery Information: Allow 256-Bit Recovery Key policy. This policy is usually configured automatically during the encryption process, and the encryption process will fail if a Group Policy conflict in any TPM- or BitLocker-related policies exists.
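The following Azure CLI command is a minimal sketch of enabling disk encryption on a Windows Server IaaS VM, assuming a key vault named ExampleKeyVault in the same region as the VM that has been enabled for disk encryption; all names are placeholders:

az vm encryption enable --resource-group ExampleRG --name ExampleVM --disk-encryption-keyvault ExampleKeyVault --volume-type All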

Resize Azure VM

Azure allows you to resize an IaaS VM. Resizing an IaaS VM allows you to change the IaaS VM’s processor and memory allocation. Resizing an IaaS VM might also alter the number of data disks that can be associated with the IaaS VM; cheaper SKUs are limited to less storage, and in some cases they are limited to standard rather than premium storage types. If the Azure IaaS VM is running, the main drawback is that the IaaS VM will need to restart for the resize to occur.

To resize an Azure IaaS VM, perform the following steps:

  1. Open the VM’s page in the Azure portal under Virtual Machines.

  2. In the left menu under Settings, select Size.

  3. Select the new size that you wish to apply to the IaaS VM.

  4. Click Resize.

The virtual machine will then restart, and the new size will apply.

Need More Review? Resize Azure VM

You can learn more about resizing an Azure VM at https://docs.microsoft.com/azure/virtual-machines/windows/resize-vm.

Configure continuous delivery for an Azure VM

The most common method of configuring continuous delivery for Windows Server IaaS VMs is to use ARM templates with specific IaaS VM configurations stored in Git repositories. When you deploy a new IaaS VM using the Azure portal, you can choose to create an ARM template with the deployment options that you can use later. There are also many sample Windows Server IaaS VM templates that you can deploy or modify at https://azure.microsoft.com/resources/templates.

Need More Review? Continuous Delivery For Azure VMS

You can learn more about continuous delivery for Azure VMs at https://docs.microsoft.com/azure/architecture/solution-ideas/articles/cicd-for-azure-vms.

Configure connections to VMs

Once an IaaS VM is deployed, you need to consider how to allow administrative connections to the IaaS VM. You’ll need to ensure that any network security groups and firewalls between your administrative host and the target IaaS VM are configured to allow the appropriate administrative traffic, though tools such as Just-in-Time VM access and Azure Bastion can automate that process. As a general rule, you shouldn’t open the Remote Desktop port (TCP port 3389) or a remote PowerShell port (HTTP port 5985, HTTPS port 5986) so that any host from any IP address on the internet has access. If you must open these ports on a network security group applied at the IaaS VM’s network adapter level, try to limit the scope of any rule to the specific public IP address or subnet that you will be initiating the connection from.
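For example, the following Azure CLI command sketches an inbound NSG rule that allows RDP only from a single public IP address; the NSG name, rule name, priority, and source address are illustrative:

az network nsg rule create --resource-group ExampleRG --nsg-name ExampleVM-nsg --name Allow-RDP-Admin --priority 300 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes 203.0.113.10 --destination-port-ranges 3389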

Connecting with an Azure AD Account

When you deploy an IaaS VM, you usually configure a username and password for a local Administrator account. If the computer is standalone and not AD DS or Azure AD DS joined, you have to decide whether you want to limit access to the local accounts that you have configured on the IaaS VM itself or whether you also want to allow local sign-on using Azure AD accounts.

Sign-on using Azure AD is supported for IaaS VMs running Windows Server 2019 and Windows Server 2022. The IaaS VMs need to be configured on a virtual network that allows outbound access to the required Azure AD and Azure management endpoints over TCP port 443.

You can enable Azure AD logon for a Windows Server IaaS VM when creating the IaaS VM using the Azure portal or when creating an IaaS VM using Azure Cloud Shell. When you do this, the AADLoginForWindows extension is enabled on the IaaS VM. If you have an existing Windows Server IaaS VM, you can use the following Azure CLI command to install and configure the AADLoginForWindows extension (where ResourceGroupName and VMName are unique to your deployment):

az vm extension set --publisher Microsoft.Azure.ActiveDirectory --name AADLoginForWindows --resource-group ResourceGroupName --vm-name VMName

Once the IaaS VM has the AADLoginForWindows extension configured and enabled, you can determine what permissions the Azure AD user account has on the IaaS VM by adding them to the following Azure roles:

  • Virtual Machine Administrator Login Accounts assigned this role are able to sign on to the IaaS VM with local administrator privileges.

  • Virtual Machine User Login Accounts assigned this role are able to sign on to the VM with regular user privileges.

Remote PowerShell

You can initiate a remote PowerShell session to an Azure IaaS VM from hosts on the internet. Another option is to run a Cloud Shell session in a browser and perform remote PowerShell administration that way. Cloud Shell is a browser-based CLI and is a lot simpler to use than adding the Azure CLI to your local computer. There is a Cloud Shell icon on the top panel of the Azure portal.

You can enable Remote PowerShell on an Azure IaaS Windows VM by performing the following steps from Cloud Shell:

  1. Ensure that Cloud Shell has PowerShell enabled by running the pwsh command.

  2. At the PowerShell prompt in Cloud Shell, type the following command to enter local Administrator credentials for the Azure IaaS Windows VM:

    $cred=get-credential
  3. At the PowerShell prompt in Cloud Shell, type the following command to enable PowerShell remoting on the Windows Server IaaS VM, where the VM name is 2022-IO-A and the resource group that hosts the VM is 2022-IO-RG:

    Enable-AzVMPSRemoting -Name 2022-IO-A -ResourceGroupName 2022-IO-RG -Protocol https -OSType Windows
  4. Once this command has completed executing, you can use the Enter-AzVM cmdlet to establish a remote PowerShell session. For example, run this command to connect to the IaaS VM named 2022-IO-A in resource group 2022-IO-RG:

    Enter-AzVM -Name 2022-IO-A -ResourceGroupName 2022-IO-RG -Credential $cred
Azure Bastion

Azure Bastion allows you to establish an RDP session to a Windows Server IaaS VM through a standards-compliant browser such as Microsoft Edge or Google Chrome rather than having to use a remote desktop client. You can think of Azure Bastion as “jumpbox as a service” because it allows access to IaaS VMs that do not have a public IP address. Before the release of Azure Bastion, the only way to gain access to an IaaS VM that didn’t have a public IP address was either through a VPN to the virtual network that hosted the VM or by deploying a jumpbox VM with a public IP address from which you then created a secondary connection to the target VM. Bastion also supports SSH connections to Linux IaaS VMs or to Windows Server IaaS VMs on which the SSH server service has been configured.

Prior to deploying Azure Bastion, you need to create a special subnet named AzureBastionSubnet on the virtual network that hosts your IaaS VMs. Once you deploy Azure Bastion, the service will manage the network security group configuration to allow you to successfully make connections.
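A minimal sketch of the deployment with the Azure CLI follows, assuming an existing virtual network named ExampleVNet in resource group ExampleRG; the subnet prefix and resource names are illustrative, and Bastion requires a Standard SKU public IP address:

az network vnet subnet create --resource-group ExampleRG --vnet-name ExampleVNet --name AzureBastionSubnet --address-prefixes 192.168.250.0/26
az network public-ip create --resource-group ExampleRG --name ExampleBastionIP --sku Standard
az network bastion create --resource-group ExampleRG --name ExampleBastion --vnet-name ExampleVNet --public-ip-address ExampleBastionIP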

Just-in-Time VM access

Rather than have management ports, such as the port used for Remote Desktop Protocol, TCP port 3389, open to hosts on the internet all the time, Just-in-Time (JIT) VM access allows you to open a specific management port for a limited duration of time and only open that port to a small range of IP addresses. You only need to use JIT if you require management port access to an Azure IaaS VM from a host on the internet. If the IaaS VM only has a private IP address or you can get by using a browser-based RDP session, then Azure Bastion is likely a better option. JIT also allows an organization to log who has requested access, so you can figure out exactly which member of your team signed in and messed things up, leading to you writing up a report about the service outage. JIT VM access requires Azure Security Center, which incurs an extra monthly cost per IaaS VM. Keep that in mind if you are thinking about configuring JIT.

Windows Admin Center in Azure Portal

Windows Admin Center, available in the Azure portal, allows you to manage Windows Server IaaS VMs. When you deploy Windows Admin Center in the Azure portal, WAC will be deployed on each Azure IaaS VM that you wish to manage. Once this is done, you can navigate directly to an Azure portal blade containing the WAC interface instead of running Windows Admin Center directly on your administrative workstation or jumpbox server.

Azure Serial Console

Azure Serial Console allows you to access a text-based console to interact with IaaS VMs through the Azure Portal. The serial connections connect to the COM1 serial port of the Windows Server IaaS VM and allow access independent of the network or operating system state. This functionality replicates the Emergency Management Services (EMS) access that you can configure for Windows Server, which you are most likely to use in the event that an error has occurred that doesn’t allow you to access the contents of the server using traditional administrative methods.

You can use the Serial Console for an IaaS VM as long as the following prerequisites have been met (a sketch for enabling boot diagnostics follows the list):

  • Boot diagnostics are enabled for the IaaS VM.

  • The Azure account accessing the Serial Console is assigned the Virtual Machine Contributor role for the IaaS VM and the boot diagnostics storage account.

  • A local user account is present on the IaaS VM that supports password authentication.
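For example, the first prerequisite can be met with the Azure CLI; the resource group and VM names here are placeholders:

az vm boot-diagnostics enable --resource-group ExampleRG --name ExampleVM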

Need More Review? Serial Console

You can learn more about the Serial Console at https://docs.microsoft.com/en-us/troubleshoot/azure/virtual-machines/serial-console-overview.

Manage Azure VM network configuration

IaaS VMs connect to virtual networks. An IaaS virtual network is a collection of subnets that share a common private IP address space in the RFC 1918 range. For example, you might create a virtual network that uses the 192.168.0.0/16 address space and create subnets, such as 192.168.10.0/24. Azure IaaS virtual machines connect to the subnets in an Azure virtual network.
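For example, the following Azure CLI command sketches the creation of such a virtual network with an initial subnet; the resource names are illustrative:

az network vnet create --resource-group ExampleRG --name ExampleVNet --address-prefixes 192.168.0.0/16 --subnet-name Subnet10 --subnet-prefixes 192.168.10.0/24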

IaaS virtual networks

Azure IaaS VMs can only use virtual networks that are in the same location as the IaaS virtual machine. For example, if you are deploying an IaaS VM to Australia Southeast, you’ll only be able to connect the IaaS VM directly to a virtual network in Australia Southeast.

Once you have deployed the virtual network, you can configure the following properties for the virtual network using the Azure console:

  • Address Space This property allows you to add additional address spaces to the Azure virtual network. You partition these address spaces using subnets.

  • Connected Devices This property lists the current devices that are connected to the Azure virtual network. The property includes a list of VM network adapters and the internal IP address to which those adapters are assigned.

  • Subnets This property shows subnets that you create within the address space and allows you to put different Azure virtual machines on separate subnets within the same virtual network.

  • DNS Servers This property allows you to configure the IP address of the DNS servers assigned by the DHCP server used by the Azure virtual network. Use this to configure DNS settings when you deploy a domain controller on an Azure IaaS VM or when you deploy Azure AD DS for an Azure virtual network.

  • Properties This property allows you to change which subscription the Azure virtual network is associated with.

  • Locks This setting allows you to apply configuration locks, which block changes being made to the settings of the resource unless the lock is first removed.

  • Automation Script This setting allows you to access the JSON template file that you can use to reproduce and redeploy virtual networks with similar settings.

By default, hosts that are located on one subnet in a virtual network will automatically be able to communicate with other subnets on a virtual network. You can modify this behavior by configuring user-defined routes, network security groups, Azure Firewall, or network virtual appliances, which allow you to configure the subnets within a virtual network in a manner similar to the way you might segment traffic on an on-premises network.

IP addressing

A virtual machine on an Azure network will have an internally assigned IP address in the range specified by the virtual network it is associated with. You can configure this assignment to be static or dynamic. It is important to remember that you assign an IP address as dynamic or static on the network adapter object within Azure—you don’t manage IP address configuration from within the IaaS virtual machine operating system.
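For example, the following Azure CLI command sketches setting a static private IP address on the primary IP configuration of an IaaS VM's network adapter; the NIC name, IP configuration name, and address are illustrative:

az network nic ip-config update --resource-group ExampleRG --nic-name ExampleVM-nic --name ipconfig1 --private-ip-address 192.168.10.10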

You may also assign a public IP address to the IaaS VM, but you should only do this in the event that the VM needs to be directly accessible to hosts on the internet. An IaaS VM with an internally assigned IP address can perform outbound communication to hosts on the internet without a public IP address, so a public address is necessary only if hosts on the internet need to directly communicate with the IaaS VM.

Even VMs that do need to be accessible to hosts on the internet can avoid having a public IP address if they are sitting behind a load balancer, web application, or network virtual appliance. A network virtual appliance (NVA) is similar to a traditional perimeter firewall or application gateway device and mediates traffic flow to a web application or VM running in the cloud.

You can determine which IP addresses are assigned to an Azure virtual machine on the Network Interfaces blade of the IaaS VM’s properties. You can also use this blade to apply a region-specific DNS name to the network interface so that you can connect to the IaaS VM using a fully qualified domain name (FQDN) rather than an IP address. If you have a DNS server or you have configured an Azure DNS zone, you can then create a CNAME record that points to the region-specific DNS name for future connections. This saves you from always connecting to a cloudapp.azure.com address, and by using a CNAME, the FQDN will also remain pointing to the IaaS VM if the IaaS VM changes IP address.

Network security groups

A network security group (NSG) is a packet filter for mediating traffic at the virtual network subnet level and also at the level of an IaaS VM’s network adapter. When you create a virtual machine, an NSG is automatically applied to the IaaS VM’s network adapter interface, and you can choose whether you want to allow traffic on management ports, such as TCP port 3389, to reach the IaaS VM.

An NSG rule has the following elements:

  • Priority NSG rules are processed in order, with lower numbers processed before higher ones. The moment traffic meets the conditions of a rule, that rule is enforced, and no further rules are processed.

  • Source Specifies the source address or subnet of traffic. Can also include a service tag, which allows you to specify a particular Azure service or an application security group (a way of identifying the network identity of a series of workloads that make up an application).

  • Destination Specifies the destination address or subnet of traffic. Can also include a service tag, which allows you to specify a particular Azure service or an application security group.

  • Protocol Can be configured to TCP, UDP, ICMP, or Any.

  • Port Range Allows you to specify either an individual port or a range of ports.

  • Direction Specifies whether the rule applies to inbound or outbound traffic.

  • Action Allows you to specify whether you want to allow or deny the traffic.

Network security groups are fairly basic in that they only work on the basis of IP address information and cannot be configured on the basis of an FQDN. Azure offers more advanced ways of mediating traffic flow, including Azure Firewall and network virtual appliances, which can be used in conjunction with NSGs; however, these topics are beyond the scope of the AZ-800 exam.

VPNs and IaaS virtual networks

You can configure IaaS virtual networks to support VPN connections by configuring a VPN gateway. IaaS virtual network VPN gateways support site-to-site connections. A site-to-site connection allows you to connect an IaaS virtual network to an existing network, just as you might connect a branch office network to a head office network in your on-premises environment. IaaS virtual network VPN gateways also support point-to-site VPN connections. This allows you to connect individual host computers to IaaS virtual networks. Windows Server 2022 includes simplified setup of a point-to-site VPN connection through deployment of Azure Network Adapter, covered in more detail in Chapter 4, “Implement and manage an on-premises and hybrid networking infrastructure.”

Exam Tip

You can’t resize a data disk that is attached to a running IaaS VM. You either have to detach the disk, perform the resize operation, and reattach the disk, or you’ll need to stop and deallocate the IaaS VM.

Chapter summary

  • VM enhanced session mode allows local devices to be connected to remote VMs. It requires either local admin access on the VM or that the account used to connect be a member of the Remote Desktop Users group.

  • You can connect directly from a Hyper-V host to a VM by using PowerShell Direct or HVC.exe even if the VM does not have network connectivity.

  • VMs can be configured with static or dynamic memory, with dynamic memory only consuming memory resources that the VM requires.

  • Hyper-V integration services provide VMs with time synchronization with the Hyper-V host, volume checkpoints for backup, data exchange, and the ability to use the Copy-VMFile cmdlet.

  • Discrete Device Assignment allows you to directly assign GPUs or NVMe storage to a VM.

  • VM resource and CPU groups allow you to configure CPU resources to be used only by specific VMs.

  • Hyper-V Replica lets you replicate a VM from one Hyper-V host to another.

  • Hyper-V failover clusters ensure that VMs keep running with minimal disruption in the event a Hyper-V host fails. These clusters require some form of shared storage for the virtual machine files.

  • Virtual hard disks in VHDX format can be up to 64 TB in size.

  • NIC teaming allows you to combine multiple network adapters to improve network throughput.

  • Container images are created by running and modifying a container instance and then saving that instance as a new image.

  • If you want to run Windows and Linux container images simultaneously, you will need to use the Hyper-V isolation mode.

  • You can attach and detach data disks to IaaS VMs while the VMs are running, but new data disks must still be initialized and formatted in the Windows Server operating system.

  • Resizing an Azure IaaS VM always requires that the VM restart.

  • An Azure IaaS VM with multiple network adapters can connect to multiple subnets, but all of the VM’s network adapters must connect to the same virtual network.

Thought experiment

In this thought experiment, demonstrate your skills and knowledge of the topics covered in this chapter. You can find answers to this thought experiment in the next section.

You are responsible for the Windows Server hybrid deployment at Tailwind Traders. For your on-premises deployment, you have several clusters of Windows Server 2022 servers hosting the Hyper-V role. These servers are deployed using the Server Core installation option. Several VMs running on these Hyper-V hosts need to be configured from a PowerShell prompt, but they do not currently have their network configuration set.

Developers at Tailwind are using .NET Core for a new application that they will host in Windows containers. They wish to ensure that container images are as small as possible to make transferring them between developer workstations, container hosts, and Azure as simple as possible.

Several Windows Server IaaS VMs have been deployed in the Australia East datacenter. These IaaS VMs host workloads that can only be configured and managed directly, so it will be necessary every two weeks to make an RDP connection to these hosts. Unless the IaaS VMs are in the process of being managed by authorized users, it should not be possible to initiate an RDP connection to these hosts.

  1. What tool can you use from the command prompt of the Server Core Hyper-V hosts to create a PowerShell connection to VMs that are not configured to interact with the network?

  2. Which container image should you use as the basis for the .NET Core applications?

  3. Which Azure service can you use to provide time-limited RDP access to the Windows Server IaaS VMs in the Australia East datacenter?

Thought experiment answers

This section contains the solution to the thought experiment. Each answer explains why the answer choice is correct.

  1. You can use PowerShell Direct to establish a connection from the Server Core Hyper-V hosts to VMs that are not configured to interact with the network.

  2. You should use the Nano Server container image as the basis for the .NET core applications.

  3. You can use Just-in-Time VM access to provide time-limited RDP access to the Windows Server IaaS VMs in the Australia East datacenter.
