Installing and Configuring Scale-Out File Servers

Before you can install and configure SOFS, you need to make several deployment decisions, as well as understand the prerequisites:

  • You will need to decide the number of nodes in your SOFS cluster (from two to eight nodes).
  • Determine what kind of converged network you will have in place to support your SOFS cluster.
  • Determine what kind of storage you will have in place to support your SOFS cluster (Storage Spaces or SAN based).
  • Before you can implement SOFS, you will need to install features that are included in the File and Storage Services role and the Failover Clustering feature.
  • You will need to ensure that the cluster passes validation, create your Windows failover cluster, configure networking, and add storage (one or more CSV LUNs).

Storage Spaces
Storage Spaces is a new virtualization capability within Windows Server 2012 aimed at reducing the cost associated with highly available storage for hosted, virtualized, and cloud-based deployments. Storage Spaces is based on a storage-pooling model, whereby storage pools can be created from affordable commodity-based hardware, depending on your storage needs. A storage space is then carved out of the storage pool and presented as a virtual disk to Windows.
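
If you are evaluating Storage Spaces for the storage layer, the general pool-then-virtual-disk flow can be sketched with the Storage module cmdlets. This is a minimal sketch: the pool name, virtual disk name, and size are placeholders, and for a clustered SOFS deployment the pool must be built from disks visible to every node.

# List disks that are eligible for pooling
Get-PhysicalDisk -CanPool $true

# Create a pool from all poolable disks, then carve a mirrored virtual disk from it
New-StoragePool -FriendlyName SOFSPool -StorageSubSystemFriendlyName "Storage Spaces*" `
-PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -StoragePoolFriendlyName SOFSPool -FriendlyName SOFSDisk1 `
-ResiliencySettingName Mirror -Size 500GB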

Complying with Installation Prerequisites

SOFS requires the File Server role and the Failover Clustering feature to be installed on each node that will be part of the SOFS cluster. The File Server role and the Failover Clustering feature can be installed independently of each other, or at the same time, using either Server Manager or the appropriate PowerShell command.

Installing Roles and Features

Use the following procedure to install the File Server role and the Failover Clustering feature using Server Manager:

1. Open the Server Manager dashboard and click Manage at the top-right of the screen, as shown in Figure 7-5. Then click Add Roles And Features. The Add Roles And Features Wizard appears.
2. On the Before You Begin page, click Next. On the Select Installation Type page, click Role-Based or Feature-Based Installation, and then click Next.
3. On the Select Destination Server page, select the appropriate server, and then click Next. (By default, the local server appears in the Server Pool.)
4. On the Select Server Roles page, expand File And Storage Services, expand File Services, and then select the File Server check box. Click Next.
5. On the Select Features page, select the Failover Clustering check box, and then click Next.
6. On the Add Features That Are Required For Failover Clustering page, click Add Features.
7. On the Confirm Installation Selections page, click Install. Verify that the installation succeeds, and click Close.

Figure 7-5 Server Manager dashboard

c07f005.tif

To install the prerequisite roles and features by using PowerShell, run the following Windows Server Manager PowerShell command:

Add-WindowsFeature -Name File-Services,Failover-Clustering `
-IncludeManagementTools
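
If you prefer to install the prerequisites on every prospective cluster node in one pass, the same command can be run remotely, assuming PowerShell remoting is enabled and using the example node names from this chapter:

# Install the prerequisites on both nodes from a management workstation
Invoke-Command -ComputerName FS1.DEMO.INTERNAL, FS2.DEMO.INTERNAL -ScriptBlock {
    Add-WindowsFeature -Name File-Services,Failover-Clustering -IncludeManagementTools
}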

Verifying a Successful Installation

Verify the successful installation of the prerequisite roles and features by running the following Windows Server Manager PowerShell command:

Get-WindowsFeature | Where {$_.installed}
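
If you want to check only the two prerequisites rather than scan the full list of installed features, a narrower query such as the following should be sufficient (it uses the same feature names as the installation command):

Get-WindowsFeature -Name File-Services, Failover-Clustering |
    Format-Table Name, InstallState -AutoSize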

Configuring Failover Clustering

The next task on the road to implementing SOFS is to implement Windows Failover Clustering. The cluster will provide support for a highly available SMB file share that can then be used by applications such as SQL Server and for workloads running on Hyper-V. Before you can install and configure Windows Failover Clustering, consider the following:

  • You will need to run the cluster validation process and ensure that it completes without reporting any failures.
  • After the successful creation of your multi-node cluster, you will need to verify that the Core Cluster Resources group and any shared storage have come online.
  • If the shared storage was added prior to the cluster being formed, all the LUNs except for the Witness Disk will have been added automatically to the Available Storage group and brought online. SOFS requires the storage to be a CSV LUN, and the CSV namespace is now enabled by default in Windows Server 2012.
  • If the storage is added after the cluster has been formed, or if the check box Add All Available Storage To The Cluster was not selected, then the storage will need to be added manually (see the PowerShell sketch after this list).
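
The last two points can also be handled from PowerShell. This is a minimal sketch, assuming the default core group name of Cluster Group; it first confirms that the core cluster resources are online and then adds any disks that are available but not yet part of the cluster:

# Confirm that the core cluster resources are online
Get-ClusterGroup "Cluster Group" | Get-ClusterResource

# Add any disks that were presented after the cluster was formed
Get-ClusterAvailableDisk | Add-ClusterDisk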

Validating a Cluster

Cluster validation is a process that is designed to identify any potential hardware, software, or general configuration issues prior to configuring your cluster and placing that cluster into production. After the validation is complete, you can create the cluster. Use the following procedure to validate your cluster:

1. Open the Server Manager dashboard and click Tools at the top-right of the screen, as shown earlier in Figure 7-5. Then click Failover Cluster Manager. The Failover Cluster Manager MMC appears.
2. Under the Management heading, in the center pane, click the Validate Configuration link. The Validate A Configuration Wizard appears.
3. On the Before You Begin page, click Next. On the Select Servers Or A Cluster page, enter the names of the servers you want to validate in FQDN format (for example, FS1.DEMO.INTERNAL), and then click Next.
4. On the Testing Options page, ensure that the Run All Tests (Recommended) option is selected, and then click Next.
5. On the Confirmation page, click Next. On the Summary page, ensure that the Create The Cluster Now Using The Validated Nodes check box is deselected, and then click Finish.

To validate the hardware, software, and general configuration of your Windows Failover Cluster by using PowerShell, run the following Windows Failover Cluster PowerShell command:

Test-Cluster -Node fs1.demo.internal, fs2.demo.internal

Make sure to separate each node of the cluster with a comma.
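
Test-Cluster writes its results to an .mht validation report, by default in the temporary folder of the user running the cmdlet. Assuming that default location, a quick way to open the most recent report is shown below:

# Open the most recent cluster validation report in the default browser
Get-ChildItem -Path $env:TEMP -Filter "Validation Report*.mht" |
    Sort-Object LastWriteTime | Select-Object -Last 1 | Invoke-Item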

Creating a Cluster

After installing the required roles and features to support SOFS and validating your configuration, the next step is to create a new cluster. Creating a cluster can be accomplished either with the Failover Cluster Manager MMC or the appropriate PowerShell cmdlet. Use the following procedure to create your cluster:

1. Open the Server Manager dashboard and click Tools at the top-right of the screen. Then click Failover Cluster Manager. The Failover Cluster Manager MMC appears.
2. Under the Management heading, in the center pane, click the Create Cluster link. The Create Cluster Wizard appears.
3. On the Before You Begin page, click Next. On the Select Servers page, enter the names of the servers that you want to join to the cluster, in FQDN format (for example, FS1.DEMO.INTERNAL), and click Next.
4. On the Access Point For Administering The Cluster page, in the Cluster Name box, type the name of your cluster.
Additionally, if DHCP-assigned addresses are not being used on the NICs associated with the cluster, you will have to provide static IP address information.

Using DHCP-Assigned IP Addresses
Since Windows Server 2008 Failover Clustering, the capability has existed for cluster IP address resources to obtain their addressing from DHCP as well as via static entries. If the cluster nodes themselves are configured to obtain their IP addresses from DHCP, the default behavior will be to obtain an IP address automatically for all cluster IP address resources. If the cluster node has statically assigned IP addresses, the cluster IP address resources will have to be configured with static IP addresses. Cluster IP address resource IP assignment follows the configuration of the physical node and each specific interface on the node.

5. Deselect any networks that will not be used to administer the cluster. In the Address field, enter an IP address to be associated with the cluster, as shown in Figure 7-6, and then click Next.

Figure 7-6 Access point for administering the cluster

c07f006.tif
6. On the Confirmation page, click Next. On the Summary page, click Finish.

To create your Windows Failover Cluster by using PowerShell, run the following Windows Failover Cluster PowerShell command:

New-Cluster -Name SOFSDEMO -Node FS1.DEMO.INTERNAL,FS2.DEMO.INTERNAL -NoStorage `
-StaticAddress 10.0.1.23
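
After the cluster forms, you can confirm from PowerShell that both nodes joined, that the networks were detected, and how the cluster IP address resource obtained its address (statically here, or from DHCP as described earlier). This is a quick sketch using the example names above; the resource name Cluster IP Address is the default and may differ in your cluster:

# List the nodes and networks detected by the new cluster
Get-ClusterNode -Cluster SOFSDEMO
Get-ClusterNetwork -Cluster SOFSDEMO

# Show how the cluster IP address resource obtained its address
Get-ClusterResource -Cluster SOFSDEMO "Cluster IP Address" | Get-ClusterParameter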

Configuring Networking

Communication between SOFS cluster nodes is critical for smooth operation. Therefore, it is important to configure the networks that you will use for SOFS cluster communication and to ensure that they are configured optimally for your environment.

At least two of the cluster networks must be configured to support heartbeat communication between the SOFS cluster nodes, so that there is no single point of failure. To do so, configure the roles of these networks as Allow Cluster Network Communications On This Network. Typically, one of these networks should be a private interconnect dedicated to internal communication. However, if you have only two physical NICs, rather than two LBFO pairs, these two NICs should be enabled for both cluster use and client access.
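
Cluster network roles can also be set directly from PowerShell by assigning the Role property of each cluster network: a value of 1 allows cluster-only (internal) communication, and 3 allows both cluster and client communication. The network names below are the defaults assigned by the cluster and will likely differ in your environment:

# Private interconnect: cluster communication only
(Get-ClusterNetwork "Cluster Network 1").Role = 1

# Management/client-facing network: cluster and client communication
(Get-ClusterNetwork "Cluster Network 2").Role = 3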

Additionally, each SOFS cluster network must fail independently of all other cluster networks. This means that no two cluster networks should share a component whose failure would take both networks down at once, such as a multiport (dual- or quad-port) network adapter. To attach a node to two separate SOFS cluster networks, you need to ensure that independent NICs are used.

To eliminate possible communication issues, remove all unnecessary network traffic from the NIC that is set to Internal Cluster Communications Only (this adapter is also known as the heartbeat or private network adapter), and consider the following (a PowerShell sketch for the first two items follows the list):

  • Remove NetBIOS from the NIC.
  • Do not register in DNS.
  • Specify the proper cluster communication priority order.
  • Set the proper adapter binding order.
  • Define the proper network adapter speed and mode.
  • Configure TCP/IP correctly.
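
The first two items can be scripted as a rough sketch; it assumes the private adapter's connection name is Heartbeat, which will differ in your environment:

# Do not register the heartbeat adapter's address in DNS
Set-DnsClient -InterfaceAlias Heartbeat -RegisterThisConnectionsAddress $false

# Disable NetBIOS over TCP/IP on the same adapter (2 = disabled)
Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID='Heartbeat'" |
    ForEach-Object { $_.GetRelated('Win32_NetworkAdapterConfiguration') } |
    ForEach-Object { $_.SetTcpipNetbios(2) }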

Adding Storage

After going through the process of creating your cluster, the next step is to add storage to Cluster Shared Volumes. SOFS requires the storage to be a CSV LUN; the benefit is that a CSV LUN can be accessed by more than one node at a time. You can add a CSV LUN by using Failover Cluster Manager. Use the following procedure to add your storage to the cluster:

1. Open the Server Manager dashboard and click Tools at the top-right of the screen. Then click Failover Cluster Manager. The Failover Cluster Manager MMC appears.
2. Expand the Storage node in the left pane, click the Disks node, right-click the disk that you want to add to Cluster Shared Volumes within the center pane, and then click Add To Cluster Shared Volumes. Repeat this process for each disk you would like to add to the SOFS cluster.

To add available storage to Cluster Shared Volumes by using PowerShell, run the following Windows Failover Cluster PowerShell command (note that “Cluster Disk 2” represents the name of the disk that you want to add and may differ in your setup):

Add-ClusterSharedVolume "Cluster Disk 2"
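
To confirm that the volume is now a Cluster Shared Volume (mounted under C:\ClusterStorage by default) and to see which node currently owns it, you can run:

Get-ClusterSharedVolume | Select-Object Name, State, OwnerNode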

Configuring Scale-Out File Services

SOFS requires that you configure the File Server role, as well as create a continuously available file share, on your CSV LUN. Use the following procedure to configure SOFS and add your continuously available file share to your cluster:

1. Open the Server Manager dashboard and click Tools. Then click Failover Cluster Manager. The Failover Cluster Manager MMC appears.
2. Right-click the Roles node and click Configure Role. The High Availability Wizard appears.
3. On the Before You Begin page, click Next. On the Select Role page, click File Server and then click Next.
4. On the File Server Type page, select the Scale-Out File Server For Application Data option and then click Next.
5. On the Client Access Point page, in the Name box, type the distributed network name (DNN) that will be used to access the Scale-Out File Server (note that the DNN is limited to a maximum of 15 characters). Then click Next.
6. On the Confirmation page, confirm your settings and then click Next. On the Summary page, click Finish.

To configure the File Server role by using PowerShell, run the following Windows Failover Cluster PowerShell command:

Add-ClusterScaleOutFileServerRole -Name SOFS -Cluster SOFSDEMO

When complete, the SOFS role should be online on one of the nodes within your cluster. The group will contain a DNN and the SOFS resource (see Figure 7-7).

Figure 7-7 SOFS role dependency list

c07f007.tif
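
A quick PowerShell check that the role came online, using the role name from the example above, is to list the group and its resources:

Get-ClusterGroup SOFS | Get-ClusterResource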

Configuring a Continuously Available File Share

After going through the process of configuring your SOFS role, the next step is to add a continuously available file share to the CSV LUN. Use the following procedure to add your continuously available file share to your cluster:

1. Open the Server Manager dashboard and click Tools. Then click Failover Cluster Manager. The Failover Cluster Manager MMC appears.
2. Expand the Roles node in the left pane, right-click the SOFS resource within the center pane, and then click Add File Share.
Alternatively, shared folders can be created by using the PowerShell cmdlet New-SmbShare (a sketch follows this procedure) or directly with Windows Explorer.
3. On the Select The Profile For This Share page, click SMB Share - Applications, as shown in Figure 7-8. Then click Next.
4. On the Select The Server And Path For This Share page, select the Cluster Shared Volume to host this share and then click Next.
5. On the Specify Share Name page, in the Share Name box, type a share name, such as SOFSSHARE, and then click Next.
6. On the Configure Share Settings page, ensure that the Enable Continuous Availability check box is selected, optionally select Encrypt Data Access, and then click Next.
7. On the Specify Permissions To Control Access page, click Customize Permissions, grant the following permissions, and then click Next.

Figure 7-8 Selecting a Profile in the New Share Wizard

c07f008.tif
If you are using this Scale-Out File Server file share for Hyper-V, all Hyper-V computer accounts, the SYSTEM account, and all Hyper-V administrators must be granted full control on the share and the file system (see Figure 7-9). If the Hyper-V server is part of a Windows Failover Cluster, the CNO must also be granted full control on the share and the file system.

Figure 7-9 Advanced security settings—stand-alone host

c07f009.tif

Cluster Name Object
The Cluster Name Object (CNO) is the security context for the cluster and is used for all interactions requiring a security context.

If you are using the Scale-Out File Server file share for Microsoft SQL Server, the SQL Server service account must be granted full control on the share and the file system.
8. On the Confirm selections page, click Create. On the View results page, click Close.
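
As noted in step 2, the share can also be created with New-SmbShare. The following is a minimal sketch for a Hyper-V scenario; the CSV path, the scope name (the SOFS DNN created earlier), and the DEMO domain accounts are placeholders that you would replace with your own values:

# Create a folder on the CSV volume and share it with continuous availability
New-Item -Path C:\ClusterStorage\Volume1\Shares\SOFSSHARE -ItemType Directory
New-SmbShare -Name SOFSSHARE -Path C:\ClusterStorage\Volume1\Shares\SOFSSHARE `
-ScopeName SOFS -ContinuouslyAvailable $true `
-FullAccess 'DEMO\HV1$', 'DEMO\Hyper-V Admins'

# Mirror the same accounts onto the NTFS file system with full control
icacls C:\ClusterStorage\Volume1\Shares\SOFSSHARE /grant 'DEMO\HV1$:(OI)(CI)F'
icacls C:\ClusterStorage\Volume1\Shares\SOFSSHARE /grant 'DEMO\Hyper-V Admins:(OI)(CI)F'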