
11. Using Failover Cluster Manager to Manage Hyper-V Clusters

Andy Syrewicze, Jenison, MI, USA
Richard Siddaway, Baston, Lincolnshire, UK

In the previous chapter, you saw how to create a Hyper-V cluster. As with anything in IT, creating the cluster is the easy bit. You’ve then got to manage it. You’ve already seen how to manage Hyper-V hosts using Hyper-V Manager, which is an excellent tool for managing one or more stand-alone hosts. When you cluster Hyper-V hosts, you manage them as a cluster, using Failover Cluster Manager.

Caution

If you’re using external storage in your cluster (iSCSI, SAN, etc.), and you have to shut your cluster down, you should shut down the nodes first and then the storage server. If you shut down the storage first, you could lose the connections to your storage and potentially lose data. When starting the cluster, reverse this, so that the storage is online before you start any of the nodes. You can bring the nodes up in any order.

This chapter is a continuation of Chapter 10. A Hyper-V cluster isn’t of much use if you don’t have any VMs, so you’ll start by learning how to create and manage VMs on your cluster. You’ll learn how to manage the cluster’s networking and storage. Eventually, you’ll have to add extra nodes into the cluster (or move nodes out of the cluster), which will be explained toward the end of the chapter.

Note

Migrating virtual machines (VMs) between hosts won’t be covered in this chapter. We’ll postpone migrations until Chapter 14.

Before getting into the technical topics of the chapter, we should say something about how you manage your cluster. Many administrators see a cluster with two, four, or many nodes and just see it as that many servers. That is a big mistake!

When you’re managing a cluster, especially large clusters, you have to think about the whole environment, not just the individual servers. You must keep three things firmly in mind:
  1. People
  2. Processes
  3. Technology

The people who manage your cluster must understand the technologies they’re working with. They must understand, use, and be committed to the processes that are put in place to manage the cluster. Most of all, they have to be people that are trusted to manage a critical piece of your organization’s environment.

When you make a change to a cluster, very often that change, such as creating a new VMswitch, has to occur on all nodes in the cluster. If your administrators have a process to follow, the chances of something going wrong are minimized, even more so if the process is backed by automation of some kind. The vast majority of major problems we’ve seen with Hyper-V environments can be traced back to human error, often because people have stepped outside the process and just done stuff! Set up automated processes for as many administration tasks as you can, to ensure that administration is efficient and safe. Don’t forget your change control processes. They protect you and your administrators as well as the organization.

Technology is the third thing to consider. Make sure you’re using the correct technology to get the job done. A failover cluster is only created to protect some vital aspect of your organization’s IT environment. Make sure you have the right technology to do the job, including hardware, software, and management tools.

Hyper-V clusters are managed with Failover Cluster Manager rather than Hyper-V Manager. This includes the management of VMs, which is our cue to investigate that topic.

Managing Virtual Machines

Managing VMs means managing their life cycle.
  • Creation

  • Modification

  • Destruction

The first step in this process is creating a VM.

Creating Virtual Machines

In early chapters of this book, you learned how to create VMs, using Hyper-V Manager or PowerShell, when working with a single Hyper-V host. You use Failover Cluster Manager to create VMs when your Hyper-V hosts are clustered.

ANTIVIRUS SOFTWARE

It’s very important to the performance of your Hyper-V hosts, whether they are physical hosts or you’re using nested virtualization, as in this chapter, that you exclude the files used for the VMs from your antivirus (AV) scanning.

Scanning a .vhdx file every time you do something in the VM will slow performance dramatically. Your VMs should have AV protection, which will catch any malicious actions on the VM, but you shouldn’t impose AV scanning on your VMs (or their files) from the host.
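If your hosts run Windows Defender, the exclusions can be added with PowerShell. This is a minimal sketch, assuming Defender is the AV product in use and that your VM files sit under C:\ClusterStorage; other AV products have their own exclusion mechanisms, so follow the vendor's Hyper-V guidance.
# Exclude the common Hyper-V file types and the CSV mount point from scanning
Add-MpPreference -ExclusionExtension '.vhd', '.vhdx', '.avhd', '.avhdx', '.vsv', '.bin'
Add-MpPreference -ExclusionPath 'C:\ClusterStorage'
# Exclude the Hyper-V management and worker processes
Add-MpPreference -ExclusionProcess 'vmms.exe', 'vmwp.exe'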

To create a VM on your cluster,
  1. Open Failover Cluster Manager.
  2. Expand HVCL01.
  3. Click Roles.
  4. Under Actions, click Virtual Machines…
  5. Click New Virtual Machine…
  6. Select the target node, as shown in Figure 11-1. In this case, you'll use W19HVC01.
Figure 11-1 Selecting the node to host a new VM

  7. A check will be made to determine if the Hyper-V management tools are installed on the machine you are using. You need Hyper-V Manager as well as the PowerShell tools.
  8. The New Virtual Machine Wizard will be displayed. This is the same wizard you've used through Hyper-V Manager.
  9. Click Next, to skip the Before You Begin page.
  10. Provide a name for the new VM and the location on disk to store the VM, as shown in Figure 11-2.
Figure 11-2 Setting the name and location for the VM

  11. The location should be on one of the Cluster Shared Volumes (CSVs) you have created; otherwise, the other nodes in the cluster can't access the VM. In this case, you're using Vol1, which was created in Chapter 10. In Vol1, a folder called VirtualMachines has been created to house all VMs on that volume. A folder with the same name has also been created on Vol2.
  12. Click Next.
  13. Select Generation 2.
  14. Click Next.
  15. Leave Startup memory as 1024MB.
  16. Select Use Dynamic Memory.
  17. Click Next.
  18. Click Next, to skip configuring the network. You'll configure the VMswitches on the cluster nodes later.
  19. Set the Virtual Hard Disk size to 80GB.
  20. Modify the disk location, if desired.
  21. Click Next.
  22. Accept the default of Install an operating system later.
  23. Click Next.
  24. Check the details and change them, if needed.
  25. Click Finish.
  26. The VM will be created and then configured for high availability (HA).
  27. On the Summary page (Figure 11-3), view the report, if required.
Figure 11-3 Summary page after the New Virtual Machine and High Availability Wizards have completed

  28. Click Finish.
  29. The new VM is visible in the Roles pane, as shown in Figure 11-4.
Figure 11-4 A high-availability VM viewed in Failover Cluster Manager. Notice the standard VM actions in the Actions pane. Additional actions, such as Move, will be explained in later chapters.
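If you prefer PowerShell, the wizard's work can be sketched in a few cmdlets. The names below are assumptions taken from this chapter (node W19HVC01, VM name CLTST01, cluster HVCL01), and the CSV path is an example; check the actual mount point under C:\ClusterStorage on your cluster before running anything like this.
# Create the VM on the chosen node, storing its files on the CSV
New-VM -Name CLTST01 -ComputerName W19HVC01 -Generation 2 `
-MemoryStartupBytes 1GB `
-Path 'C:\ClusterStorage\Vol1\VirtualMachines' `
-NewVHDPath 'C:\ClusterStorage\Vol1\VirtualMachines\CLTST01\CLTST01.vhdx' `
-NewVHDSizeBytes 80GB
Set-VM -Name CLTST01 -ComputerName W19HVC01 -DynamicMemory
# Make the VM highly available, the step Failover Cluster Manager performs for you
Add-ClusterVirtualMachineRole -VMName CLTST01 -Cluster HVCL01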

TRY IT YOURSELF

Create a VM, CLTST01, on the Hyper-V cluster, using the steps in this section. You should also install an OS onto the VM.

If you’ve created your Hyper-V cluster using nested virtualization, you should see something like Figure 11-5, once your VM is created and the OS has been installed.
Figure 11-5 Nested virtualization. The VM on the right of the figure (CLTST01) is a Windows Server 2019 instance of Server Core running on the VM that is the cluster node HVC01, also running on Windows Server 2019.

ABOVE AND BEYOND

When you create a Hyper-V cluster, give some thought as to where you’re going to store the tools you need to create and manage your VMs. Your tool set will include, but not be limited to, the following:
  • ISO images for operating system installation

  • Other software installation packages

  • Utility tools

  • Scripts

These items should be stored on cluster-shared storage, so that they are available from all nodes. We created a 100GB volume called Images on our test cluster for this purpose.
New-Volume -FriendlyName 'Images' -FileSystem CSVFS_NTFS `
-StoragePoolFriendlyName "S2D*" -Size 100GB

The need for a volume containing your tool set will become more important if you use System Center Virtual Machine Manager (SCVMM) in your environment. You’ll need a volume to store the templates SCVMM can use when creating VMs.

Once you’ve created a VM, how do you administer it?

Virtual Machine Administration

You use Failover Cluster Manager for VM administration on a Hyper-V cluster, because it exposes the VM migration tools for moving VMs between cluster nodes and storage locations.

It is possible to use Hyper-V Manager to manage the VMs on your cluster, but you can only view and manage the VMs on a single node at a time. If you want a view of all of the VMs on the cluster, you need to use Failover Cluster Manager.

You can run Failover Cluster Manager from one of the cluster nodes or an administration machine.

TRY IT YOURSELF

Connect to the cluster using Failover Cluster Manager from a remote administration machine (not part of the cluster). Compare with using Failover Cluster Manager on a cluster node. Any differences?

Add Network Adapter to Virtual Machine

If you’ve worked through the practical sections, including creating a VM on the cluster, as in the previous section, you’ll realize that the VM can’t communicate on the network, because there isn’t a virtual switch to which the network adapter that’s automatically created when you build a new VM can connect.

We stated earlier that you should automate the creation and configuration of the nodes in your Hyper-V cluster. This ensures that all nodes are configured in an identical manner. This will become even more important when you look at the implications of migrating VMs between hosts, in Chapter 14.

Creating identical virtual switches on all the nodes in your cluster is best performed by a script. If you use Hyper-V Manager, you’ll have to connect to each node in turn, to perform the action. The code to add a switch to all your nodes is
$vms = 'W19HVC01', 'W19HVC02'
foreach ($vm in $vms) {
  # Prompt for the node's local administrator credentials
  $cred = Get-Credential "$vm\Administrator"
  # PowerShell Direct session from the Hyper-V host into the node VM
  $s = New-PSSession -VMName $vm -Credential $cred
  $sb = {
     New-VMSwitch -Name 'CLAN' -NetAdapterName 'LAN'
  }
  Invoke-Command -Session $s -ScriptBlock $sb
  Remove-PSSession -Session $s
}

Using PowerShell Direct, you connect to each node in turn and run the New-VMSwitch cmdlet. The switch type will automatically be set to External, because you’ve specified to connect the switch to a network adapter.

Note

You’ll probably have to restart the nodes when you make these changes, as Windows moves the IP address from the physical network adapter to the switch.

TRY IT YOURSELF

Create the virtual switches on all of the nodes in your cluster. Which is quicker and more efficient: using Hyper-V Manager or running a script?

Your two nodes are configured identically, so now you can connect the network adapter to your virtual switch.
Get-VM -Name CLTST01 |
Get-VMNetworkAdapter |
Connect-VMNetworkAdapter -SwitchName CLAN

Get the VM and pipe the object into Get-VMNetworkAdapter (there's only one adapter on the VM at this stage), then pipe into Connect-VMNetworkAdapter, supplying the switch name you want to use, in this case, CLAN.

TRY IT YOURSELF

Connect the VM’s automatically created network adapter to the virtual switch you just created. How would you modify the script, if the VM had multiple network adapters?

Your VM now has a connected network adapter, so you can perform any final configuration work.

One common task you’ll need to perform is adding storage to a VM.

Add Virtual Disk to Virtual Machine

You’ve seen how to manage VMs in previous chapters, so we’re not going to repeat all of that material. But as a quick example, to show that managing a VM on a cluster isn’t that different from doing so on a stand-alone host, let’s add a virtual disk to the VM we’ve just created.

To add a disk to a VM on a Hyper-V cluster,
  1. Open Failover Cluster Manager.
  2. Expand HVCL01.
  3. Click Roles.
  4. Click on a VM—CLTST01.
  5. Click Settings in the Actions pane.
  6. Click SCSI Controller.
  7. Click Hard Drive.
  8. Click Add.
  9. Select Virtual hard disk.
  10. Click New.
  11. Click Next, to bypass the Before You Begin page.
  12. Click Dynamically expanding.
  13. Click Next.
  14. Change the name to Data01.vhdx.
  15. Change the location to match the VM (the Cluster Shared Volume).
  16. Select Create new blank virtual hard disk.
  17. Set the size to 100GB.
  18. Click Next.
  19. Click Finish.

TRY IT YOURSELF

Add a new virtual hard disk to the VM you created on the cluster.
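If you'd rather script it, the same task takes two cmdlets. This sketch assumes CLTST01 is currently owned by W19HVC01 and that its files sit on the Vol1 CSV; adjust the path to match the mount point on your cluster.
# Create the new dynamically expanding disk on the CSV, then attach it to the VM
New-VHD -ComputerName W19HVC01 -Dynamic -SizeBytes 100GB `
-Path 'C:\ClusterStorage\Vol1\VirtualMachines\CLTST01\Data01.vhdx'
Add-VMHardDiskDrive -VMName CLTST01 -ComputerName W19HVC01 -ControllerType SCSI `
-Path 'C:\ClusterStorage\Vol1\VirtualMachines\CLTST01\Data01.vhdx'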

The new disk is a raw partition and must be initialized and formatted. Use the Windows Disk Management tools, if you’ve created a VM with a GUI. If your VM is Server Core, then you can use PowerShell to initialize and format the disk.
Get-Disk |
where PartitionStyle -eq 'RAW' |
Initialize-Disk -PartitionStyle GPT -PassThru |
New-Partition -DriveLetter 'E' -UseMaximumSize |
Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Data disk 01' `
-Confirm:$false

You start by getting the new disk, which has the partition style of RAW, meaning that it hasn’t been formatted. You must initialize the disk. We’ve used a partition style of GPT to match the default in modern Windows systems. You could use the older MBR style, if preferred. A partition is created on the disk, using all of the available space. A drive letter of E has been assigned. If you already have a number of disks in the machine and want the system to allocate the drive letter, change -DriveLetter 'E' to -AssignDriveLetter. The final step is to format the disk to NTFS and supply a system label for the disk.

The examples you’ve seen in this section show that you can use all of the knowledge and skills you’ve built up so far to manage VMs on stand-alone Hyper-V hosts, to also manage VMs on clusters. You just have to be aware that you’re working on a cluster and that some things are slightly different.

When you created the cluster in Chapter 10, you added two 1TB disks to each node. The universal constant in IT is the need for more storage, so how can you expand the storage in your cluster?

Managing Storage

Managing the storage for your cluster includes two main areas. First, you must plan your storage. When you created the cluster in Chapter 10, the instructions said to create four 1TB disks, to be used as shared storage. In reality, you must plan how much storage you’ll need and how it might grow.

Second, you must be able to add storage to your cluster. In this chapter, we’ll show you how to increase the storage available through Storage Spaces Direct. Other storage technologies will be covered in Chapter 12, when you learn about guest clusters.

Planning your cluster’s storage is a multistep process.

Planning Storage Needs

In Chapter 1, you learned about the steps involved in moving an organization from a physical to a virtual environment. One of the steps in that process was calculating the amount of storage used by the physical servers. That figure is the starting point for the cluster storage requirements. On top of it, you'll have to allocate extra space: existing VMs will grow, new VMs will be required, and they'll all need storage.

Note

Always, always, always err on the side of caution when deciding the amount of storage you’ll need. Storage is relatively cheap, and it’s easier (and often cheaper) to buy more than you think you’ll need, rather than rush to add extra capacity later.

Having determined the amount of disk space your VMs will consume, you must determine how that storage will be configured. You’ll have to ask yourself a number of questions.
  • Which technology will you use, for example, SAN, iSCSI, or Storage Spaces Direct?

  • What level of redundancy do you want? Mirroring doubles the required storage, and RAID 5 will take one disk out of the array for parity.

  • How many volumes do you need?

  • What level of free space do you want to keep on the disks?

Once you’ve determined your storage needs, you must think about capacity planning.

The art of capacity planning for your storage consists of three steps:
  1. Determine the threshold at which you require extra disk space.
  2. Measure how much of your storage is used on a regular basis (a measurement sketch follows the note below).
  3. Graph the results to determine a trend.

Note

Remember that there can be significant, sudden jumps in the amount of storage used, owing to new projects, so be sure to include those in your planning.
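The measurement step is easy to automate. The following sketch appends one capacity sample per CSV to a file you can graph later; the cluster name matches this chapter, but the output path is only an example.
# Record size and free space for each CSV; schedule this to run daily or weekly
Get-ClusterSharedVolume -Cluster HVCL01 |
foreach {
  $info = $psitem.SharedVolumeInfo
  [pscustomobject]@{
    Date       = Get-Date -Format 'yyyy-MM-dd'
    Volume     = $info.FriendlyVolumeName
    'Size(GB)' = [math]::Round($info.Partition.Size / 1GB, 2)
    'Free(GB)' = [math]::Round($info.Partition.FreeSpace / 1GB, 2)
    FreePerc   = [math]::Round($info.Partition.PercentFree, 2)
  }
} |
Export-Csv -Path 'C:\Reports\CSVCapacity.csv' -Append -NoTypeInformation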

Once you have determined a trend, you can predict when you'll run out of storage. This allows you to start discussions about acquiring new storage in sufficient time to prevent a lack of storage from becoming a problem.

Assume that you’ve been performing the capacity planning discussed above and have seen the need for new storage. How do you add the storage to the cluster?

Add Storage to Cluster

The cluster you created in Chapter 10 uses Storage Spaces Direct for storage. Microsoft recommends that you use one (1) storage pool per cluster. A storage pool can contain up to 416 disks, which don’t have to be of the same size.

Let’s add two more disks to each node and then incorporate them into the storage pool. First create the disks.
New-VHD -Path 'C:\Virtual Storage\HVC01\data03.vhdx' -Dynamic -SizeBytes 1TB
New-VHD -Path 'C:\Virtual Storage\HVC01\data04.vhdx' -Dynamic -SizeBytes 1TB
New-VHD -Path 'C:\Virtual Storage\HVC02\data03.vhdx' -Dynamic -SizeBytes 1TB
New-VHD -Path 'C:\Virtual Storage\HVC02\data04.vhdx' -Dynamic -SizeBytes 1TB
Each node in the cluster will get an additional two disks: HVC0X\data03 and HVC0X\data04. Add the disks to the relevant nodes.
Add-VMHardDiskDrive -VMName W19HVC01 -ControllerType SCSI `
-Path 'C:\Virtual Storage\HVC01\data03.vhdx' -ControllerNumber 1
Add-VMHardDiskDrive -VMName W19HVC01 -ControllerType SCSI `
-Path 'C:\Virtual Storage\HVC01\data04.vhdx' -ControllerNumber 1
Add-VMHardDiskDrive -VMName W19HVC02 -ControllerType SCSI `
-Path 'C:\Virtual Storage\HVC02\data03.vhdx' -ControllerNumber 1
Add-VMHardDiskDrive -VMName W19HVC02 -ControllerType SCSI `
-Path 'C:\Virtual Storage\HVC02\data04.vhdx' -ControllerNumber 1

The new disks will be added to the storage pool automatically, and the pool will rebalance its contents across the disks to give the most even distribution possible.

You can view the disks in the storage pool.
  1. Open Failover Cluster Manager.
  2. Expand HVCL01.
  3. Select Storage.
  4. Select Pools.
  5. Select the Physical Disks tab at the bottom of the GUI.
Alternatively, you can run the following code:
Get-StoragePool -FriendlyName S2D* |
Get-PhysicalDisk |
foreach {
  # Which node is the disk physically attached to?
  $node = $psitem | Get-StorageNode -PhysicallyConnected |
          select -ExpandProperty Name
  $size = [math]::Round( ($psitem.Size / 1GB), 2)
  $free = [math]::Round( ( ($psitem.Size - $psitem.VirtualDiskFootprint) / 1GB), 2)
  $props = [ordered] @{
    Node = ($node -split '\.')[0]
    DeviceID = $psitem.DeviceId
    Type = $psitem.MediaType
    'Size(GB)' = $size
    'Free(GB)' = $free
    FreePerc = [math]::Round( ( ($free / $size) * 100 ), 2)
  }
  New-Object -TypeName PSObject -Property $props
} |
sort Node, DeviceID |
Format-Table

This code, adapted from https://blogs.technet.microsoft.com/filecab/2016/11/21/deep-dive-pool-in-spaces-direct/ , gets the physical disks in the storage pool and for each disk determines the node, calculates the size and free space in GB, and calculates the percentage of free space available. When the percentage of free space available drops to a value that’s too low (40% is a good starting point), it’s time to think about adding more disk.

TRY IT YOURSELF

Use the procedure in this section to add one or more disks to the nodes in your cluster and add them into the storage pool.

If you think the disks need to be rebalanced, you can perform that yourself.
Optimize-StoragePool -FriendlyName "S2D*"
The progress of the optimization can be followed using
PS>  Get-StorageJob | Format-List Name, ElapsedTime, JobState, PercentComplete, BytesProcessed, BytesTotal
Name            : S2D on HVCL01-Optimize
ElapsedTime     : 00:32:48
JobState        : Running
PercentComplete : 99
BytesProcessed  : 2437393940480
BytesTotal      : 2445983875072

You’ve added extra disks to the cluster, but the CSV are still the original size. To utilize the new storage capacity, either create new volumes (see Chapter 10) or extend the current volumes.

Extending a Cluster Shared Volume

Extending the CSV to use additional storage capacity is a multistep process. After adding the additional disks, and once the rebalancing has completed, the first step is to extend the virtual disks you created on the storage pool.
Get-VirtualDisk -FriendlyName Vol1 | Resize-VirtualDisk -Size 768GB
Get-VirtualDisk -FriendlyName Vol2 | Resize-VirtualDisk -Size 768GB
The next task is to increase the partition size, so that the new area of the disk is available. You could create a new partition in the extra space instead, but that would be wasteful, as you'd end up with lots of small partitions and fragmented free space. To increase the partition size, use this code:
Get-Disk -FriendlyName Vol? |
foreach {
  $part = Get-Disk -FriendlyName ($_.FriendlyName) |
  Get-Partition | where Type -eq 'Basic'
  $size =  $part |
  Get-PartitionSupportedSize |
  select  -ExpandProperty SizeMax
  $part | Resize-Partition -Size $size
}

This is where a consistent naming convention comes in handy. Get the disks whose names match Vol? (Vol1 and Vol2, in this case) and extract the partition information for the CSV. Find the maximum possible size of the partition, based on the new disk size, and resize the partition to that size.

TRY IT YOURSELF

Use the procedure in this section to extend the virtual disks created from the storage pool on your cluster.

CSVs are accessible from every node in the cluster. Each volume (virtual disk, in Storage Spaces Direct terminology) has an owner node. You may have to change the owner node, for example, if you plan to take a node offline.

Change a Node That Owns a Disk

Clusters are used to provide resiliency, so you expect the nodes to be online most, if not all, of the time. If you take down a node, any disks owned by that node will fail over to another node. This is the expected behavior of the cluster. It's better practice to manually fail over any disks owned by a node rather than relying on automatic failover, because a manual failover is under your control and happens before the current owner goes offline.

ABOVE AND BEYOND

Automatic failover of resources is usually very reliable. Very occasionally, you may see problems. A couple of examples we’ve seen may help, if you ever have to troubleshoot failover issues.

First, if the nodes aren’t all patched to the same level, there was a known bug in automatic failover that caused the cluster to lose contact with the storage when the node went offline.

The second example involves moving a cluster between datacenters and changing the IP addresses of the nodes and the cluster. This can cause problems with the storage failing over between nodes.

To perform a manual failover, do the following:
  1. Open Failover Cluster Manager.
  2. Expand HVCL01.
  3. Expand Storage.
  4. Select Disks.
  5. Right-click the disk, or disks, to fail over.
  6. Click Move.
  7. Click Select Node…
  8. Select the node in the dialog box.
  9. Click OK.
  10. The failover will occur within a few seconds. The Owner Node in the Disks pane will change to match the node you selected.

TRY IT YOURSELF

Select one of the data disks and initiate a failover to the other node in the cluster, using the procedure in this section.

You can also use PowerShell, as follows:
Move-ClusterSharedVolume -Name 'Cluster Virtual Disk (Vol1)' `
-Node W19HVC02
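A quick check, before and after the move, is to list each CSV with its owner node. This sketch assumes you're running it on a cluster node or supplying the cluster name, as here.
Get-ClusterSharedVolume -Cluster HVCL01 | Format-Table Name, OwnerNode, State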

You performed most of the networking configuration when you created the cluster in Chapter 10, but you must think about adding a migration network.

Create Migration Network

Migrating VMs between the nodes in the cluster can involve significant network traffic. Microsoft recommends configuring a specific network for migration, so that migrations don't impact user access to the VMs. A 1Gbps NIC provides sufficient bandwidth for migration traffic in a small environment with low migration requirements. In larger networks, you may look at using a number of teamed network adapters on the migration network, but you should throttle the migration traffic to 2Gbps, as a starting point.
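How you apply that throttle depends on the live migration transport. As one hedged example, if the cluster's live migration performance option is set to SMB and the SMB Bandwidth Limit feature (FS-SMBBW) is installed on each node, the migration traffic can be capped at roughly 2Gbps.
# 250MB per second is approximately 2Gbps; run on each node
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 250MB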

Adding a network to the cluster specifically for the traffic involved in migrating VMs between nodes is a good idea, as it removes the traffic from the other cluster networks, especially the network over which users access the VMs.

You saw how to add networks to the cluster in the section “Creating a Hyper-V HA Cluster,” so this is another recap. We’ll use 192.168.30.0/26 for the network. That’ll supply sufficient IP addresses for the cluster.

To add a migration network, perform the following tasks on each node:
  1. Create a new private switch on your Hyper-V host (or create a new VLAN, if you're using physical hosts).
     New-VMSwitch -Name Migration -SwitchType Private
  2. Add an adapter to each node using the new switch.
     Add-VMNetworkAdapter -VMName W19HVC01 -SwitchName Migration
     Add-VMNetworkAdapter -VMName W19HVC02 -SwitchName Migration
  3. On W19HVC01, rename the new adapter to Migration. Set the IP address to 192.168.30.1, with a subnet mask of 255.255.255.192. Remember: New adapters are always named Ethernet. (A PowerShell sketch for this step appears after the Try It Yourself below.)
  4. On W19HVC02, rename the new adapter to Migration. Set the IP address to 192.168.30.2, with a subnet mask of 255.255.255.192.
  5. In Failover Cluster Manager ➤ Networks, rename the new network to Migration network. Ensure it's configured to allow cluster traffic only.
  6. In the Actions pane, click Live Migration Settings…
  7. Unselect all networks except the Migration network.
  8. Click OK.

TRY IT YOURSELF

Create and configure a migration network for your Hyper-V cluster.
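Steps 3 and 4 can be scripted on each node. This is a sketch for W19HVC01 (use 192.168.30.2 on W19HVC02), run on the node itself or through a PowerShell Direct session; it assumes the newly added adapter still carries a default name, so check Get-NetAdapter first, as the name may be Ethernet 2 or similar if other adapters exist.
# Rename the new adapter and assign its migration network address
Rename-NetAdapter -Name 'Ethernet' -NewName 'Migration'
# A prefix length of 26 is the same as a subnet mask of 255.255.255.192
New-NetIPAddress -InterfaceAlias 'Migration' -IPAddress 192.168.30.1 -PrefixLength 26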

These instructions can be used to create other networks for your cluster. Remember to configure the VMswitch to be of type Internal or External, if you require the cluster nodes to communicate with just the VM host or external clients, respectively.
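For example, a management network that only needs to reach the VM host could use an internal switch; the switch name here is just an illustration.
New-VMSwitch -Name 'Management' -SwitchType Internal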

TRY IT YOURSELF

Configure a management network for your Hyper-V cluster.

So far in this chapter, you’ve seen how to create and manage VMs on the cluster, how to manage the cluster’s storage, and how to manage the networking aspects of the cluster. One last thing must be covered in this chapter: how to add and remove nodes from the cluster.

Managing Cluster Nodes

Servers use their resources—CPU, memory, disk, and network—to support workloads. Clusters can be thought of in the same way, except that the resources are provided through the individual nodes. We’ve stated before that you should put as much memory as possible in your Hyper-V hosts, as it’s usually the resource that controls the number of VMs that you can run on the host.

A cluster of Hyper-V hosts will eventually run out of resources and will be unable to support any further increase in the number of VMs it's running. You've seen how to add disk to the cluster. If your nodes are at capacity, as far as CPU and memory are concerned, your only option is to add one or more nodes to the cluster.

Add a Cluster Node

The new node should be configured in exactly the same way as the existing nodes, including any virtual switches. We keep saying that you should automate your node creation with scripts, but once you’ve had to build a few nodes, and spent the time correcting configuration mistakes, you’ll appreciate why we repeat the message.

If you’re using Storage Spaces Direct, decide if you’re going to add more disk storage capacity with this node and, if so, how much. This isn’t a real issue when adding nodes, but it could become one when removing nodes.

Once your new node is built and configured, you can add it to the cluster. The procedure to add a node is as follows:
  1. Open Failover Cluster Manager.
  2. Expand HVCL01.
  3. Select Nodes.
  4. In the Actions pane, click Add Node…
  5. Click Next, to bypass the Before You Begin page.
  6. Supply the name of the new node and click Add. The node will be verified, which may take some time.
  7. Click Next.
  8. On the Validation Warning page, select Yes to run the validation report or No to skip the report.
  9. Click Next.
  10. On the Confirmation page, click Next.
  11. The node will be added to the cluster.
  12. View the report, if required.
  13. Click Finish.

Note

If you don’t run the validation report while adding the node and get an error, then go back and repeat the exercise, but this time, run the validation report, to help determine the reason for the failure.

You’ll recognize that the wizard is similar to the one you used to create the cluster initially.

If you want to add the cluster node using PowerShell rather than the GUI, use the following:
PS>  Add-ClusterNode -Name W19HVC03 -NoStorage

Use -NoStorage to prevent any shared storage attached to the new node from being added to the cluster while the node is being joined.

TRY IT YOURSELF

Create a machine to be an extra node for your cluster and add it to the cluster.

Sometimes, it’s necessary to remove a node from the cluster, for example, if the node has experienced a motherboard failure or must be retired.

Remove a Cluster Node

Before you remove a cluster node from your Hyper-V cluster, you should ensure that
  • All VMs have been migrated to other nodes (draining the node, as sketched after this list, takes care of this).

  • Any shared storage attached to the node, especially storage used by Storage Spaces Direct, has been removed. If you have multiple disks attached to the node that are used in Storage Spaces Direct, remove them one at a time, to allow the storage pool to reconfigure itself. If you remove multiple disks at the same time, you risk corrupting your storage pool.
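Draining the node covers the first point: its clustered roles, including any VMs, are moved to the other nodes before you evict it. A sketch, assuming the extra node W19HVC03 added earlier is the one being retired:
# Move all roles off the node; Resume-ClusterNode reverses this if you change your mind
Suspend-ClusterNode -Name W19HVC03 -Cluster HVCL01 -Drain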

Removing a node from the cluster is also referred to as evicting a node. To evict a node,
  1. Open Failover Cluster Manager.
  2. Expand HVCL01.
  3. Select Nodes.
  4. Right-click the node to be evicted.
  5. Select More Actions from the context menu.
  6. Click Evict.
  7. Click Yes in the confirmation dialog box.
  8. The node will be evicted from the cluster and will no longer be displayed under Nodes in Failover Cluster Manager.
You can evict a node from the cluster using PowerShell, as follows:
PS>  Remove-ClusterNode -Name W19HVC03

You’ll be prompted to confirm the removal of the node.

TRY IT YOURSELF

Remove the new node from the cluster.

That brings you to the end of the chapter, and the techniques you’ve learned in it will enable you to successfully manage your Hyper-V cluster. All that remains is for you to complete the lab, to consolidate your knowledge.

Lab Work

  1. Complete all of the Try It Yourself sections in this chapter. In particular, you should practice the techniques using the GUI and PowerShell. The ability to use both will make you a better administrator.
  2. You will also have to install an OS into the VM you created on the cluster.
  3. If you have time, create a second virtual machine, CLTST02, on the cluster. Use a different CSV for the storage.