CHAPTER 12


Implementing Clustering

Building and configuring a cluster is a complex process, and engineers can implement many variations of the pattern. Although DBAs may not always need to build a cluster themselves, they do need to be comfortable with the technology, and they often need to provide input into the process. They may also take part in troubleshooting issues discovered with the cluster.

For these reasons, this chapter begins by looking at how to build a cluster at the Windows level and discusses some of the possible configurations. We then demonstrate how to build an AlwaysOn failover cluster instance (FCI). Finally, we explore managing a cluster, post implementation, including performing rolling patch upgrades and removing nodes from a cluster.

Building the Cluster

Before you install a SQL Server AlwaysOn failover cluster instance, you must prepare the servers that form the cluster (known as nodes) and build a Windows cluster across them. The following sections demonstrate how to perform these activities.

Image Note  To support demonstrations in this chapter, we use a domain called PROSQLADMIN. The domain contains a domain controller, which is also configured to serve up five iSCSI disks (Data, Logs, TempDB, MSDTC, and Quorum). Two servers, named ClusterNode1 and ClusterNode2, act as the cluster nodes. Each server has two NICs (network interface cards) on different subnets; one is used for data and the other for the cluster heartbeat.

Installing the Failover Cluster Feature

In order to build the cluster, the first thing we need to do is install the failover cluster feature on each of the nodes. To do this, we need to select the Add Roles And Features option in Server Manager. This causes the Add Roles And Features Wizard to display. The first page of this wizard offers guidance on prerequisites, as shown in Figure 12-1.

9781484207116_Fig12-01.jpg

Figure 12-1. The Before You Begin page

On the Installation Type page, ensure that Role-Based Or Feature-Based Installation is selected, as illustrated in Figure 12-2.

9781484207116_Fig12-02.jpg

Figure 12-2. The Installation Type page

On the Server Selection page, ensure that the cluster node that you are currently configuring is selected. This is illustrated in Figure 12-3.

9781484207116_Fig12-03.jpg

Figure 12-3. The Server Selection page

The Server Roles page of the wizard allows you to select any server roles that you want configured. As shown in Figure 12-4, this can include roles such as Application Server or DNS Server, but in our case, this is not appropriate, so we simply move to the next screen.

9781484207116_Fig12-04.jpg

Figure 12-4. The Server Roles page

On the Features page of the wizard, we need to select both .NET Framework 3.5 Features and Failover Clustering, as shown in Figure 12-5. This satisfies the prerequisites for building the Windows cluster and also the AlwaysOn failover cluster instance.

9781484207116_Fig12-05.jpg

Figure 12-5. The Features page

When you select Failover Clustering, the wizard presents you with a screen (Figure 12-6) containing a checkbox that asks if you want to install the management tools. If you are managing the cluster directly from the nodes, check this option.

9781484207116_Fig12-06.jpg

Figure 12-6. Selecting management tools

On the final page of the wizard, you see a summary of the features that are to be installed, as shown in Figure 12-7. Here, you can specify the location of the Windows media if you need to. You can also choose whether the server should automatically restart. If you are building out a new server, it makes sense to check this box. However, if the server is already in production when you add the feature, make sure you consider what is currently running on the box, and whether you should wait for a maintenance window to perform a restart if one is needed.

9781484207116_Fig12-07.jpg

Figure 12-7. The Confirmation page

Instead of installing the cluster services through Server Manager, you can install them from PowerShell. The script in Listing 12-1 achieves the same results as the preceding steps.
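As a sketch of what such a script involves, the following PowerShell installs both required features; the feature names assume Windows Server 2012 or later, so verify them with Get-WindowsFeature in your own environment.

```powershell
# Install .NET Framework 3.5 and the Failover Clustering feature,
# including the management tools. Run this on each cluster node.
Install-WindowsFeature -Name Net-Framework-Core, Failover-Clustering `
                       -IncludeManagementTools
```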

Creating the Cluster

Once clustering has been installed on both nodes, you can begin building the cluster. To do this, connect to the server that you intend to be the active node using a domain account, and then run Failover Cluster Manager from Administrative Tools.

The Before You Begin page of the Create Cluster Wizard warns that the domain account you use to install the cluster must be an administrator of the cluster nodes and that Microsoft only supports clusters that pass all verification tests, as shown in Figure 12-8.

9781484207116_Fig12-08.jpg

Figure 12-8. The Before You Begin page

On the Select Servers screen of the wizard, you need to enter the names of the cluster nodes. Even if you enter just the short names of the servers, they will be converted to fully qualified names in the format server.domain. This is illustrated in Figure 12-9. In our case, our cluster nodes are named ClusterNode1 and ClusterNode2, respectively.

9781484207116_Fig12-09.jpg

Figure 12-9. The Select Servers page

On the Validation Warnings page, you are asked if you wish to run the validation tests against the cluster. You should always choose to run this validation for production servers, because Microsoft will not offer support for the cluster unless it has been validated. Choosing to run the validation tests invokes the Validate A Configuration wizard. You can also run this wizard independently from the Management pane of Failover Cluster Manager. The Validation Warnings page is shown in Figure 12-10.

9781484207116_Fig12-10.jpg

Figure 12-10. The Validation Warning page

Image Tip  There are some situations in which validation is not possible, and in these instances, you need to select the No, I Do Not Require Support… option. For example, some DBAs choose to install one-node clusters instead of stand-alone instances so that they can be scaled up to full clusters in the future, if need be. This approach can cause operational challenges for Windows administrators, however, so use it with extreme caution.

After you complete the Before You Begin page of the Validate A Configuration Wizard, you see the Testing Options page. Here, you are given the option of either running all validation tests or selecting a subset of tests to run, as illustrated in Figure 12-11. Normally when you are installing a new cluster, you want to run all validation tests, but it is useful to be able to select a subset of tests if you invoke the Validate A Configuration Wizard independently after you make a configuration change to the cluster.

9781484207116_Fig12-11.jpg

Figure 12-11. The Testing Options page

On the Confirmation page of the wizard, illustrated in Figure 12-12, you are presented with a summary of tests that will run and the cluster nodes that they will run against. The list of tests is comprehensive and includes the following categories:

  • Inventory (such as identifying any unsigned drivers)
  • Network (such as checking for a valid IP configuration)
  • Storage (such as validating the ability to fail disks over between nodes)
  • System Configuration (such as validating the configuration of Active Directory)

9781484207116_Fig12-12.jpg

Figure 12-12. The Confirmation page

The Summary page, shown in Figure 12-13, provides the results of the tests and also a link to an HTML version of the report. Make sure to examine the results for any errors or warnings. You should always resolve errors before continuing, but some warnings may be acceptable. For example, if you are building your cluster to host AlwaysOn Availability Groups, you may not have any shared storage. This will generate a warning but is not an issue in this scenario. AlwaysOn Availability Groups are discussed in further detail in Chapter 13.

9781484207116_Fig12-13.jpg

Figure 12-13. The Summary page

The View Report button displays the full version of the validation report, as shown in Figure 12-14. The hyperlinks take you to a specific category within the report, where further hyperlinks are available for each test. These allow you to drill down to messages generated for the specific test, making it easy to identify errors.

9781484207116_Fig12-14.jpg

Figure 12-14. The Failover Cluster Validation Report

Clicking Finish on the Summary page returns you to the Create Cluster Wizard, where you are greeted with the Access Point For Administering The Cluster page. On this screen, illustrated in Figure 12-15, you need to enter the virtual name of your cluster and the IP address for administering the cluster. We name our cluster PROSQLADMIN-C and assign an IP address of 192.168.0.20.

9781484207116_Fig12-15.jpg

Figure 12-15. The Access Point For Administering The Cluster page

Image Note  The virtual name and IP address are bound to whichever node is active, meaning that the cluster is always accessible in the event of failover.

In our case, the cluster resides within a single site and single subnet, so only one network range is displayed. If you are configuring a multi-subnet cluster, however, then the wizard detects this, and multiple networks display. In this scenario, you need to enter an IP address for each subnet.

Image Note  Each of the two NICs within a node is configured on a separate subnet so that the heartbeat between the nodes is segregated from the public network. However, a cluster is only regarded as multi-subnet if the data NICs of the cluster nodes reside in different subnets.

Image Tip  If you do not have permissions to create AD (Active Directory) objects in the OU (organizational unit) that contains your cluster, then the VCO (virtual computer object) for the cluster must already exist and you must have the Full Control permission assigned.

The Confirmation page displays a summary of the cluster that is created. You can also use this screen to specify whether or not all eligible storage should be added to the cluster, which is generally a useful feature. This screen is displayed in Figure 12-16.

9781484207116_Fig12-16.jpg

Figure 12-16. The Confirmation page

After the cluster has been built, the Summary page shown in Figure 12-17 displays. This screen summarizes the cluster name, IP address, nodes, and quorum model that have been configured. It also provides a link to an HTML (Hypertext Markup Language) version of the report.

9781484207116_Fig12-17.jpg

Figure 12-17. The Summary page

The Create Cluster report displays a complete list of tasks that have been completed during the cluster build, as shown in Figure 12-18.

9781484207116_Fig12-18.jpg

Figure 12-18. The Create Cluster report

We could also have used PowerShell to create the cluster. The script in Listing 12-2 runs the cluster validation tests using the Test-Cluster cmdlet, before using the New-Cluster cmdlet to configure the cluster.
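A sketch of such a script follows; the node, cluster, and IP address values match the environment described earlier in the chapter, but treat the exact layout as illustrative.

```powershell
# Run all cluster validation tests against both nodes
Test-Cluster -Node ClusterNode1, ClusterNode2

# Create the cluster with its virtual name and administrative IP address
New-Cluster -Name PROSQLADMIN-C -Node ClusterNode1, ClusterNode2 `
            -StaticAddress 192.168.0.20
```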

Figure 12-19 shows the results of running this script. The first part of the output provides the details of the validation report that has been generated. The second part confirms the name of the cluster that has been created.

9781484207116_Fig12-19.jpg

Figure 12-19. Validate and create cluster output

Configuring the Cluster

Many cluster configurations can be altered, depending on the needs of your environment. This section demonstrates how to change some of the more common configurations.

Changing the Quorum

If we examine our cluster in the Failover Cluster Manager, we can instantly see one configuration change that we need to make. The cluster has chosen the Logs volume as the quorum drive, as shown in Figure 12-20.

9781484207116_Fig12-20.jpg

Figure 12-20. Cluster disks

We can modify this by selecting More Actions | Configure Cluster Quorum Settings from the cluster's context menu, which invokes the Configure Cluster Quorum Wizard. On the Select Quorum Configuration Option page, shown in Figure 12-21, we choose the Select The Quorum Witness option.

9781484207116_Fig12-21.jpg

Figure 12-21. The Select Quorum Configuration Option page

On the Select Quorum Witness page, we select the option to configure a disk witness. This is illustrated in Figure 12-22.

9781484207116_Fig12-22.jpg

Figure 12-22. The Select Quorum Witness page

On the Configure Storage Witness page of the wizard, we can select the correct disk to use as a quorum. In our case, this is Disk 5, as illustrated in Figure 12-23.

9781484207116_Fig12-23.jpg

Figure 12-23. The Configure Storage Witness page

The Summary page of the wizard, shown in Figure 12-24, details the configuration changes that will be made to the cluster. It also highlights that dynamic quorum management is enabled and that all nodes, plus the quorum disk, have a vote in the quorum. Advanced quorum configurations are discussed in Chapter 13.

9781484207116_Fig12-24.jpg

Figure 12-24. The Summary Page

We can also perform this configuration from the command line by using the PowerShell command in Listing 12-3. Here, we use the Set-ClusterQuorum cmdlet and pass in the name of the cluster, followed by the quorum type that we wish to configure. Because disk is included in this quorum type, we can also pass in the name of the cluster disk that we plan to use, and it is this aspect that allows us to change the quorum disk.
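As a sketch, the command takes the following shape; the disk resource name ("Cluster Disk 5") is an assumption for this environment and should be confirmed with Get-ClusterResource.

```powershell
# Configure a node and disk majority quorum, using Disk 5 as the witness
Set-ClusterQuorum -Cluster PROSQLADMIN-C -NodeAndDiskMajority "Cluster Disk 5"
```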

Configuring MSDTC

If your instance of SQL Server uses distributed transactions, or if you are installing SQL Server Integration Services (SSIS), then it relies on MSDTC (Microsoft Distributed Transaction Coordinator). If your instance will use MSDTC, then you need to ensure that it is properly configured. If it is not, then setup will succeed, but transactions that rely on it may fail.

When installed on a cluster, SQL Server automatically uses the instance of MSDTC that is installed in the same role, if one exists. If it does not, then it uses the instance of MSDTC to which it has been mapped (if this mapping has been performed). If there is no mapping, it uses the cluster’s default instance of MSDTC, and if there is not one, it uses the local machine’s instance of MSDTC.

Many DBAs choose to install MSDTC within the same role as SQL Server; however, this introduces a problem. If MSDTC fails, it can also bring down the instance of SQL Server. Of course, the cluster attempts to bring both of the applications up on a different node, but this still involves downtime, including the time it takes to recover the databases on the new node, which takes a non-deterministic duration. (Please refer to Chapter 11 for further details.) For this reason, I recommend installing MSDTC in a separate role. If you do, the SQL Server instance still utilizes MSDTC, since it is the cluster’s default instance, and it removes the possibility of MSDTC causing an outage to SQL Server. This is also preferable to using a mapped instance or the local machine instance since it avoids unnecessary configuration, and the MSDTC instance should be clustered when a clustered instance of SQL Server is using it.

To create an MSDTC role, start by selecting the Configure Role option from the Roles context menu in Failover Cluster Manager. This invokes the High Availability Wizard. On the Select A Role page of the wizard, select the Distributed Transaction Coordinator (DTC) role type, as shown in Figure 12-25.

9781484207116_Fig12-25.jpg

Figure 12-25. The Select Role page

On the Client Access Point page, illustrated in Figure 12-26, you need to enter a virtual name and IP address for MSDTC. In our case, we name it PROSQLMSDTC-C and assign 192.168.0.21 as the IP address. On a multi-subnet cluster, you need to provide an IP address for each network.

9781484207116_Fig12-26.jpg

Figure 12-26. The Client Access Point page

On the Select Storage page of the wizard, select the cluster disk on which you plan to store the MSDTC files, as shown in Figure 12-27. In our case, this is Disk 4.

9781484207116_Fig12-27.jpg

Figure 12-27. The Select Storage page

The Confirmation page displays an overview of the role that is about to be created, as shown in Figure 12-28.

9781484207116_Fig12-28.jpg

Figure 12-28. The Confirmation page

Alternatively, we could create this role in PowerShell. The script in Listing 12-4 first uses the Add-ClusterServerRole cmdlet to create the role. We pass the virtual name to use for the role into the Name parameter, the name of the cluster disk to use into the Storage parameter, and the IP address for the role into the StaticAddress parameter.

We then use the Add-ClusterResource cmdlet to add the DTC resource. The Name parameter names the resource and the ResourceType parameter specifies that it is a DTC resource. We then need to create the dependencies between the resources within the role. We did not need to do this when using the GUI, as the dependencies were created for us automatically.

Resource dependencies specify the resource or resources on which other resources depend. A resource failing propagates through the chain and could take a role offline. For example, in the case of our PROSQLMSDTC-C role, if either the disk or the virtual name becomes unavailable, the DTC resource goes offline. Windows Server supports multiple dependencies with both AND and OR constraints. It is the OR constraints that make multi-subnet clustering possible, because a resource can be dependent on IP address A OR IP address B. Finally, we need to bring the role online by using the Start-ClusterGroup cmdlet.
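Putting those steps together, the script might resemble the following sketch; the resource and disk names here are illustrative.

```powershell
# Create the MSDTC role with its virtual name, storage, and IP address
Add-ClusterServerRole -Name PROSQLMSDTC-C -Storage "Cluster Disk 4" `
                      -StaticAddress 192.168.0.21

# Add the DTC resource to the role
Add-ClusterResource -Name MSDTC `
                    -ResourceType "Distributed Transaction Coordinator" `
                    -Group PROSQLMSDTC-C

# Make the DTC resource dependent on the virtual name and the disk,
# so it only comes online once both are available
Add-ClusterResourceDependency -Resource MSDTC -Provider PROSQLMSDTC-C
Add-ClusterResourceDependency -Resource MSDTC -Provider "Cluster Disk 4"

# Bring the role online
Start-ClusterGroup -Name PROSQLMSDTC-C
```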

Configuring a Role

After creating a role, you may wish to configure it to alter the failover policy or configure nodes as preferred owners. To configure a role, select Properties from the role’s context menu. On the General tab of the Properties dialog box, which is shown in Figure 12-29, you can configure a node as the preferred owner of the role. You can also change the order of precedence of node preference by moving nodes above or below others in the Preferred Owners window.

9781484207116_Fig12-29.jpg

Figure 12-29. The General tab

You can also select the priority for the role in the event that multiple roles fail over to another node at the same time. The options for this setting are as follows:

  • High
  • Medium
  • Low
  • No Auto Start

On the Failover tab of the Properties dialog box, you can configure the number of times that the role can fail over within a given period before the role is left offline. The default value for this is one failure within 6 hours. The issue with this is that if a role fails over, and after you fix the issue on the original node, you fail the role back, no more failovers are allowed within the 6-hour window. This is obviously a risk, and I generally advise that you change this setting. In our case, we have configured the role to allow a maximum of three failovers within a 24-hour time window, as illustrated in Figure 12-30. We have also configured the role to fail back to the most preferred owner if it becomes available again. Remember, when setting automatic failback, that failback also causes downtime in the same way that a failover does. If you aspire to a very high level of availability, such as five 9s, then this option may not be appropriate.

9781484207116_Fig12-30.jpg

Figure 12-30. The Failover tab

Building the AlwaysOn Failover Cluster Instance

Once the cluster has been built and configured, it is time to install the AlwaysOn failover cluster instance of SQL Server. To do this, select the New SQL Server Failover Cluster Installation option from the Installation tab of the SQL Server Installation Center.

Preparation Steps

Image Note  You can find further information on SQL Server Installation Center in Chapter 2.

When you select this option, you invoke the Install A SQL Server Failover Cluster Wizard. The majority of the pages in this wizard are identical to the pages in the SQL Server 2014 Setup Wizard. You can find details of these screens and discussions of the options in Chapter 2.

Cluster-Specific Steps

The first deviation from a stand-alone installation of SQL Server is the Instance Configuration page, which is illustrated in Figure 12-31. On this screen, you are prompted to enter the SQL Server Network Name as well as choose between a default instance and a named instance. The SQL Server Network Name is the virtual name, which is configured as a resource in the role. This allows client applications to connect to a server with a consistent name, regardless of the node that owns the instance.

9781484207116_Fig12-31.jpg

Figure 12-31. The Instance Configuration page

Image Tip  If you do not have permissions to create AD objects in the OU that contains your cluster, then the VCO for the instance must already exist, and you must have the Full Control permission assigned.

On the Cluster Resource Group page, displayed in Figure 12-32, a list of roles on the local cluster is displayed. It is possible to create an empty role prior to running the installation of the instance, and if you have done so, then you can select it from the list. Alternatively, you can modify the default name supplied in the SQL Server cluster resource group name box to create a new resource group. In this demonstration, we leave the default name.

9781484207116_Fig12-32.jpg

Figure 12-32. The Cluster Resource Group page

On the Cluster Disk Selection page of the wizard, shown in Figure 12-33, a list of cluster disks displays in the lower pane, with warnings next to any disks that you cannot select. In the top pane, you can check the disks that you want to add to the role. In our case, we select all of them, since there is one for data, one for logs, and one for TempDB.

9781484207116_Fig12-33.jpg

Figure 12-33. The Cluster Disk Selection page

On the Cluster Network Configuration page, we add an IP address for the role. In a multi-subnet cluster, we would add multiple IP addresses, one for each subnet. The Cluster Network Configuration page is illustrated in Figure 12-34.

9781484207116_Fig12-34.jpg

Figure 12-34. The Cluster Network Configuration page

On the Service Accounts tab of the Server Configuration page, the startup type for the SQL Server service and the Server Agent service can only be configured as manual, since startup is controlled by the cluster during failover. The other required service accounts are in a read-only display and cannot be configured, as shown in Figure 12-35.

9781484207116_Fig12-35.jpg

Figure 12-35. The Service Accounts tab

After instance installation is complete, the role is brought online and the instance is visible in Failover Cluster Manager, as shown in Figure 12-36.

9781484207116_Fig12-36.jpg

Figure 12-36. The SQL Server role in Failover Cluster Manager

Installing the Instance with PowerShell

Of course, we can use PowerShell to install the AlwaysOn failover cluster instance instead of using the GUI. To install an AlwaysOn failover cluster instance from PowerShell, we can use SQL Server’s setup.exe application with the InstallFailoverCluster action specified.

When you perform a command-line installation of a clustered instance, you need the parameters in Table 12-1, in addition to the parameters that are mandatory when you install a stand-alone instance of SQL Server. For a complete description of mandatory parameters, please see Chapter 3.

Table 12-1. Required Parameters for the Installation of a Clustered Instance

  • /FAILOVERCLUSTERIPADDRESSES: Specifies the IP address(es) to use for the instance in the format <IP Type>;<address>;<network name>;<subnet mask>. For multi-subnet clusters, the IP addresses are space delimited.
  • /FAILOVERCLUSTERNETWORKNAME: The virtual name of the clustered instance.
  • /INSTALLSQLDATADIR: The folder in which to place SQL Server data files. This must be a cluster disk.

The script in Listing 12-5 performs the same installation that has just been demonstrated when you run it from the root directory of the installation media.
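To illustrate the shape of such a command, the following sketch installs a clustered instance; every name, path, account, and password shown is a placeholder and must be replaced with values for your environment.

```powershell
# Install a clustered instance from the root of the installation media.
# All names, addresses, accounts, and passwords below are illustrative.
.\setup.exe /ACTION=InstallFailoverCluster /IACCEPTSQLSERVERLICENSETERMS /qs `
    /FEATURES=SQLEngine `
    /INSTANCENAME="PROSQLADMINCLUST" `
    /FAILOVERCLUSTERNETWORKNAME="PROSQLSRV-C" `
    /FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.0.22;Cluster Network 1;255.255.255.0" `
    /INSTALLSQLDATADIR="D:\SQLData" `
    /SQLSVCACCOUNT="PROSQLADMIN\SQLService" /SQLSVCPASSWORD="Pa$$w0rd" `
    /SQLSYSADMINACCOUNTS="PROSQLADMIN\Administrator"
```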

Adding a Node

The next step you should take when installing the cluster is to add the second node. Failure to add the second node results in the instance staying online, but with no high availability, since the second node is unable to take ownership of the role. To configure the second node, you need to log in to the passive cluster node and select the Add Node To SQL Server Failover Cluster option from the Installation tab of SQL Server Installation Center. This invokes the Add A Failover Cluster Node Wizard. The first page of this wizard is the Product Key page. Just like when you install an instance, you need to use this screen to provide the product key for SQL Server. Not specifying a product key only leaves you the option of installing the Evaluation Edition, and since this expires after 180 days, it’s probably not the wisest choice for high availability.

Next, the License Terms page of the wizard asks you to read and accept the license terms of SQL Server. Additionally, you need to specify whether you wish to participate in Microsoft's Customer Experience Improvement Program. As discussed in Chapter 2, if you select this option, then error reporting is captured and sent to Microsoft.

After you accept the license terms, a rules check runs to ensure that all of the conditions are met so you can continue with the installation. After the wizard checks for Microsoft updates and installs the setup files required for installation, another rules check is carried out to ensure that the rules for adding the node to the cluster are met.

Image Note  For further discussion and illustrations of the preparation pages just described, please refer to Chapter 2.

On the Cluster Node Configuration page, illustrated in Figure 12-37, you are asked to confirm the instance name to which you are adding a node. If you have multiple instances on the cluster, then you can use the drop-down box to select the appropriate instance.

9781484207116_Fig12-37.jpg

Figure 12-37. The Cluster Node Configuration page

On the Cluster Network Configuration page, shown in Figure 12-38, you confirm the network details. These should be identical to the first node in the cluster, including the same IP address, since this is, of course, shared between the two nodes.

9781484207116_Fig12-38.jpg

Figure 12-38. The Cluster Network Configuration page

On the Service Accounts page of the wizard, most of the information is in read-only mode and you are not able to modify it. This is because the service accounts you use must be the same for each node of the cluster. You need to re-enter the service account passwords, however. This page is shown in Figure 12-39.

9781484207116_Fig12-39.jpg

Figure 12-39. The Service Accounts page

Now that the wizard has all of the required information, an additional rules check is carried out before the summary page displays. The summary page, known as the Ready To Add Node page, is illustrated in Figure 12-40. It provides a summary of the activities that take place during the installation.

9781484207116_Fig12-40.jpg

Figure 12-40. The Ready To Add Node page

Adding a Node Using PowerShell

To add a node using PowerShell instead of the GUI, you can run SQL Server’s setup.exe application with an AddNode action. When you add a node from the command line, the parameters detailed in Table 12-2 are mandatory.

Table 12-2. Mandatory Parameters for the AddNode Action

  • /ACTION: Must be configured as AddNode.
  • /IACCEPTSQLSERVERLICENSETERMS: Mandatory when installing on Windows Server Core, since the /qs switch must be specified on Windows Server Core.
  • /INSTANCENAME: The instance that you are adding the extra node to support.
  • /CONFIRMIPDEPENDENCYCHANGE: Allows multiple IP addresses to be specified for multi-subnet clusters. Pass in a value of 1 for True or 0 for False.
  • /FAILOVERCLUSTERIPADDRESSES: Specifies the IP address(es) to use for the instance in the format <IP Type>;<address>;<network name>;<subnet mask>. For multi-subnet clusters, the IP addresses are space delimited.
  • /FAILOVERCLUSTERNETWORKNAME: The virtual name of the clustered instance.
  • /INSTALLSQLDATADIR: The folder in which to place SQL Server data files. This must be a cluster disk.
  • /SQLSVCACCOUNT: The service account that is used to run the Database Engine.
  • /SQLSVCPASSWORD: The password of the service account that is used to run the Database Engine.
  • /AGTSVCACCOUNT: The service account that is used to run SQL Server Agent.
  • /AGTSVCPASSWORD: The password of the service account that is used to run SQL Server Agent.

The script in Listing 12-6 adds ClusterNode2 to the role when you run it from the root folder of the install media.
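As a sketch of how those parameters fit together, the following command adds a node; the instance name, network details, accounts, and passwords are placeholders.

```powershell
# Add this node to an existing clustered instance. Run from the root of the
# installation media on the node being added. All values are illustrative.
.\setup.exe /ACTION=AddNode /IACCEPTSQLSERVERLICENSETERMS /qs `
    /INSTANCENAME="PROSQLADMINCLUST" `
    /FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.0.22;Cluster Network 1;255.255.255.0" `
    /CONFIRMIPDEPENDENCYCHANGE=0 `
    /SQLSVCACCOUNT="PROSQLADMIN\SQLService" /SQLSVCPASSWORD="Pa$$w0rd" `
    /AGTSVCACCOUNT="PROSQLADMIN\SQLAgent" /AGTSVCPASSWORD="Pa$$w0rd"
```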

Managing a Cluster

Installing the cluster is not the end of the road from an administrative perspective. You still need to periodically perform maintenance tasks. The following sections describe some of the most common maintenance tasks.

Moving the Instance between Nodes

Other than protecting against unplanned outages, one of the benefits of implementing high availability technologies is that doing so significantly reduces downtime for maintenance tasks, such as patching. This can be at the operating system level or the SQL Server level.

If you have a two-node cluster, apply the patch to the passive node first. Once you are happy that the update was successful, fail over the instance and then apply the patch to the other node. At this point, you may or may not wish to fail back to the original node, depending on the needs of your environment. For example, if the overriding priority is the level of availability of the instance, then you will probably not wish to fail back, because this will incur another short outage.

On the other hand, if your instance is less critical and you have licensed SQL Server with Software Assurance, then you may not be paying for the SQL Server license on the passive node. In this scenario, you only have a limited time period in which to fail the instance back to avoid needing to purchase an additional license for the passive node.

Image Note  For versions of SQL Server prior to SQL Server 2014, Software Assurance is not required in order to have a passive node without a license.

To move an instance to a different node using Failover Cluster Manager, select Move | Select Node from the context menu of the role that contains the instance. This causes the Move Clustered Role dialog box to display. Here, you can select the node to which you wish to move the role, as illustrated in Figure 12-41.

9781484207116_Fig12-41.jpg

Figure 12-41. The Move Clustered Role dialog box

The role is then moved to the new node. If you watch the role's resources window in Failover Cluster Manager, then you see each resource move through the states Online → Offline Pending → Offline. The new node is then displayed as the owner before the resources move in turn through the states Offline → Online Pending → Online, as illustrated in Figure 12-42. The resources are taken offline and placed back online in order of their dependencies.

9781484207116_Fig12-42.jpg

Figure 12-42. Resources coming online on the passive node

We can also fail over a role using PowerShell. To do this, we need to use the Move-ClusterGroup cmdlet. Listing 12-7 demonstrates this by using the cmdlet to fail the instance back to ClusterNode1. We use the -Name parameter to specify the role that we wish to move and the -Node parameter to specify the node to which we wish to move it.
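A sketch of the command follows; the role name shown is illustrative and should match the name displayed in Failover Cluster Manager.

```powershell
# Fail the role back to ClusterNode1
Move-ClusterGroup -Name "SQL Server (PROSQLADMINCLUST)" -Node ClusterNode1
```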

Rolling Patch Upgrade

If you have a cluster with more than two nodes, then consider performing a rolling patch upgrade when you are applying updates for SQL Server. In this scenario, you mitigate the risk of having different nodes, which are possible owners of the role, running different versions or patch levels of SQL Server, which could lead to data corruption.

The first thing that you should do is make a list of all nodes that are possible owners of the role. Then select 50 percent of these nodes and remove them from the Possible Owners list. You can do this by selecting Properties from the context menu of the Name resource, and then, in the Advanced Policies tab, unchecking the nodes in the possible owners list, as illustrated in Figure 12-43.

9781484207116_Fig12-43.jpg

Figure 12-43. Remove possible owners

To achieve the same result using PowerShell, we can use the Get-ClusterResource cmdlet to retrieve the Name resource and then pipe it into the Set-ClusterOwnerNode cmdlet to configure the possible owners list. This is demonstrated in Listing 12-8. When you configure multiple possible owners, the list is comma separated.
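The pipeline described above can be sketched as follows. The resource name "SQL Network Name (PROSQLADMIN)" is an assumption for illustration; use the actual name of your instance's Name resource.

```powershell
# Restrict the possible owners of the instance's Name resource.
# "SQL Network Name (PROSQLADMIN)" is an assumed resource name.
Get-ClusterResource -Name "SQL Network Name (PROSQLADMIN)" |
    Set-ClusterOwnerNode -Owners ClusterNode1,ClusterNode2
```

Listing the remaining nodes in -Owners implicitly removes any node that is not named, so to take half of the nodes out of the possible owners list you simply pass the other half.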

Once 50 percent of the nodes have been removed as possible owners, you should apply the update to these nodes. After the update has been verified on this half of the nodes, you should reconfigure them to allow them to be possible owners once more.

The next step is to move the role to one of the nodes that you have upgraded. After the failover has successfully completed, remove the other half of the nodes from the possible owners list before applying the update to them. Once the update has been verified on this half of the nodes, you can return them to the possible owners list.

Image Tip  The possible owners can only be set on a resource. If you run Set-ClusterOwnerNode against a role using the -Group parameter, then you are configuring preferred owners rather than possible owners.
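The distinction in the tip above can be seen by comparing the two forms of the cmdlet. Both role and resource names here are assumptions for illustration.

```powershell
# Against a resource: configures POSSIBLE owners (nodes allowed to host it)
Get-ClusterResource -Name "SQL Network Name (PROSQLADMIN)" |
    Set-ClusterOwnerNode -Owners ClusterNode1

# Against a role (group): configures PREFERRED owners (failover ordering only)
Set-ClusterOwnerNode -Group "SQL Server (PROSQLADMIN)" -Owners ClusterNode1
```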

Removing a Node from the Cluster

If you wish to uninstall an AlwaysOn failover cluster instance, you cannot perform this action from Control Panel as you would for a stand-alone instance. Instead, you must run the Remove Node Wizard on each of the nodes of the cluster. You can invoke this wizard by selecting the Remove Node from a SQL Server Failover Cluster option on the Maintenance tab of SQL Server Installation Center.

The wizard starts by running a global rules check, followed by a rules check for removing a node. Then, on the Cluster Node Configuration page shown in Figure 12-44, you are asked to confirm the instance for which you wish to remove a node. If the cluster hosts multiple instances, you can select the appropriate instance from the drop-down box.

9781484207116_Fig12-44.jpg

Figure 12-44. The Cluster Node Configuration page

On the Ready To Remove Node page, shown in Figure 12-45, you are given a summary of the tasks that will be performed. After confirming the details, the instance is removed. This process should be repeated on all passive nodes, and then finally on the active node. When the instance is removed from the final node, the cluster role is also removed.

9781484207116_Fig12-45.jpg

Figure 12-45. The Ready To Remove Node page

To remove a node using PowerShell, we need to run SQL Server’s setup.exe application, with the action parameter configured as RemoveNode. When you use PowerShell to remove a node, the parameters in Table 12-3 are mandatory.

Table 12-3. Mandatory Parameters When Removing a Node from a Cluster

Parameter

Usage

/ACTION

Must be configured as RemoveNode.

/INSTANCENAME

The name of the instance from which you are removing the node.

/CONFIRMIPDEPENDENCYCHANGE

Allows multiple IP addresses to be specified for multi-subnet clusters. Pass in a value of 1 for True or 0 for False.

The script in Listing 12-9 removes a node from our cluster when we run it from the root directory of the SQL Server installation media.
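A sketch of such a script follows, using only the mandatory parameters from Table 12-3. The instance name PROSQLADMIN is an assumption for illustration; run the script from the root directory of the installation media.

```powershell
# Remove the current node from the failover cluster instance.
# PROSQLADMIN is an assumed instance name; /qs runs setup unattended
# with progress UI but no prompts.
.\setup.exe /ACTION="RemoveNode" `
            /INSTANCENAME="PROSQLADMIN" `
            /CONFIRMIPDEPENDENCYCHANGE=0 `
            /qs
```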

Summary

Before you build a cluster, you must prepare the servers that will become the cluster nodes. To do this, you need to install the .NET Framework 3.5 features and the Failover Cluster features in the operating system. You can do this either through Server Manager or by using the Install-WindowsFeature cmdlet in PowerShell.

Once you have prepared the nodes, you need to build the cluster by using Failover Cluster Manager. During this process, you define the IP address and virtual name that will be used for administering the cluster. You also need to validate the cluster. This validation consists of a comprehensive set of tests that check the configuration of the network, the nodes, and the storage to ensure compatibility with clustering. Warnings raised by this validation may or may not indicate an issue, but if any errors are raised, then you must resolve them. Microsoft will not support clusters that have not been validated.

Once you have built the cluster, you may need to perform many configurations before you install the SQL Server failover cluster instance. These configurations may include changing the quorum model of the cluster and installing Microsoft Distributed Transaction Coordinator (MSDTC). MSDTC is required if you plan to use SQL Server Integration Services (SSIS) or run any distributed transactions.

Once you have configured the cluster to your requirements, you can install the failover cluster instance. You can perform this action from the Installation tab of SQL Server Installation Center or from PowerShell by running setup.exe with the InstallFailoverCluster action. This process is similar to a stand-alone installation of SQL Server, except that you need to supply additional information, such as the IP address and virtual name of the instance. This allows applications to connect, regardless of the node that owns the instance.

After the initial installation is complete, the instance is accessible, but there is no high availability. In order to allow the instance to fail over to other nodes, you must run the Add Node Wizard on each of the other nodes within the cluster.

One benefit of achieving high availability for SQL Server is that doing so allows you to minimize downtime during planned maintenance. On a two-node cluster, you can upgrade the passive node, fail over, and then upgrade the active node. For larger clusters, you can perform a rolling patch upgrade, which involves removing half of the nodes from the possible owners list and upgrading them. You then fail over the instance to one of the upgraded nodes and repeat the process for the remaining nodes. This mitigates the risk of mixed versions across the possible owners.
