Chapter 5

Manage storage and file services

The most common workload for Windows Server is the file server. Ultimately, most organizations create documents that need to be stored somewhere and often shared, even if their primary business does not involve document creation. Windows Server provides a number of technologies that address the problems related to reliably storing and sharing files. These include the humble file share itself, synchronization technologies that replicate file share contents across hybrid environments, and technologies that improve the performance and reliability of the underlying storage.

Skills covered in this chapter:

Skill 5.1: Configure and manage Azure File Sync

Azure File Sync is one of the most useful hybrid technologies available for Windows Server. Anyone who has managed a file server knows the challenges they pose, from having to remove disused files on a regular basis to ensure that there is enough space for new files, to challenges around ensuring that files are regularly backed up, and even to being able to restore a file that you might have removed at some point to save space because someone actually needs it now. Azure File Sync helps you address all these problems, reducing the amount of time you need to spend maintaining file servers so you can get on with the million other things on your to-do list. This objective deals with how to set up and use Azure File Sync.

Create Azure File Sync Service

The backbone of Azure File Sync is the Storage Sync Service. The Storage Sync Service is the resource that runs in Azure and orchestrates synchronization between your registered file servers and Azure file shares. You should deploy as few storage sync services as possible because a Windows Server file server can only be registered with one sync service, and file servers that are connected to different storage sync services are unable to synchronize with each other.

You should plan to deploy the storage sync service and the Azure File Share endpoints used by each sync group in the same Azure region and resource group. To deploy a storage sync service, perform the following steps:

  1. In the Azure portal select Create a resource and then search for Azure File Sync. In the list of results select Azure File Sync and then select Create.

  2. On the Deploy Storage Sync page, provide the following:

    • Name A name for the storage sync service. This name only needs to be unique on a per-region basis, but it’s a good idea to have it unique for your organization.

    • Subscription The name of the subscription that will host the storage sync service. This will be the subscription where costs accrue for the service.

    • Resource Group The resource group that will host the storage sync service. You should also plan to host the storage account used with the sync service in the same resource group.

    • Location The location where you will deploy Azure File Sync. This location should be geographically proximate to your server endpoints. Remember that clients will be accessing files through the file server endpoints close to them. Bandwidth and latency between the server endpoint and the file share are generally only an issue when there is a substantive delay between a tiered file being requested and that file being recalled to the endpoint.
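The same deployment can be sketched with the Az.StorageSync PowerShell module. This is a minimal sketch; the resource group, service name, and region are illustrative assumptions, not values from the text:

```powershell
# Sign in and create a resource group to host the sync service
# (names and region here are placeholders — substitute your own).
Connect-AzAccount
New-AzResourceGroup -Name "RG-FileSync" -Location "australiaeast"

# Create the Storage Sync Service in the same region and resource
# group you plan to use for the Azure file share endpoints.
New-AzStorageSyncService -ResourceGroupName "RG-FileSync" `
    -Name "ContosoSyncService" `
    -Location "australiaeast"
```

Deploying the service, the storage account, and the file share in the same region keeps sync traffic within that region, which is the arrangement the guidance above recommends.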

Create sync groups

A sync group allows you to replicate a specific folder and file structure across server and Azure file share endpoints. Each sync group has a single Azure file share endpoint but can have multiple server endpoints. A storage sync service can host multiple sync groups, and a Windows Server endpoint can participate in multiple sync groups as long as those sync groups belong to the same storage sync service. To create a sync group, you need to specify a sync group name, which is separate from the sync service name; the name of the storage account that will be used; and the name of the Azure file share that will be used. You should create the storage account and Azure file share before creating the sync group.
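Assuming the storage sync service created earlier, a sync group can be sketched with one cmdlet (the group name below is an illustrative assumption):

```powershell
# Create a sync group within an existing Storage Sync Service.
New-AzStorageSyncGroup -ResourceGroupName "RG-FileSync" `
    -StorageSyncServiceName "ContosoSyncService" `
    -Name "DocsSyncGroup"
```

The cloud endpoint and server endpoints described in the following sections are then added to this group.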

Create cloud endpoints

The back end of Azure File Sync is an Azure file share, also termed a cloud endpoint. This is a cloud file share that stores every file written to an Azure File Sync endpoint. The back-end Azure file share stores the entire contents of what only appears to reside on the front-end file share. Creating a file share involves creating a storage account and then creating the file share within that storage account.

To create an Azure File Share, consider the following:

  • Performance requirements In most cases, the only computers interacting with the file share in an Azure File Sync deployment are the server endpoints. This means that you are unlikely to require the higher I/O performance capabilities of a premium file share that is hosted on solid-state disk (SSD)-based hardware.

  • Redundancy requirements Standard file shares can use locally redundant, zone-redundant, or geo-redundant storage. Large file shares of the type you are likely to use with Azure File Sync are only available with locally redundant and zone-redundant storage.

  • File share size Locally redundant and zone-redundant storage accounts allow for file shares that span up to 100 TiB. The file share will need to be able to hold all the tiered data from your file share endpoints, so it should be substantially larger than the storage on any single on-premises server. The amount of storage you allocate to a file share depends on the amount of data you need to tier and how much storing that data costs. Storage and transfer costs are billed separately, and even if you create a file share that is larger than you need, your organization will only be billed for the storage capacity actually used.

You can configure Azure Backup to back up this file share endpoint. The advantage of this is that in the event of data corruption or deletion, you can just recover data to the Azure File Share in the cloud from the Azure console, and it will replicate down to all the Azure File Sync endpoints.
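The storage account, file share, and cloud endpoint described above can be sketched as follows. Account and share names are illustrative; `Standard_ZRS` reflects the zone-redundancy discussion above but is an assumption about your redundancy requirements:

```powershell
# Create a storage account in the same resource group and region
# as the Storage Sync Service (ZRS chosen here as an example).
$account = New-AzStorageAccount -ResourceGroupName "RG-FileSync" `
    -Name "contosofilesync01" `
    -Location "australiaeast" `
    -SkuName Standard_ZRS -Kind StorageV2

# Create the Azure file share that will act as the cloud endpoint.
New-AzRmStorageShare -StorageAccount $account -Name "docs" -QuotaGiB 1024

# Attach the file share to the sync group as its single cloud endpoint.
New-AzStorageSyncCloudEndpoint -ResourceGroupName "RG-FileSync" `
    -StorageSyncServiceName "ContosoSyncService" `
    -SyncGroupName "DocsSyncGroup" `
    -Name "docs-cloud" `
    -StorageAccountResourceId $account.Id `
    -AzureFileShareName "docs"
```

Remember that each sync group has exactly one cloud endpoint, so this step is performed once per sync group.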

Register servers

The Azure File Sync agent allows you to register a server with a storage sync service. To register a server, download and install the Azure File Sync agent from the Microsoft Download Center. As part of the installation process, you can configure the agent to be automatically updated through Microsoft Update. When the installation completes, you perform registration with a storage sync service. To register a server, you need local Administrator privileges on the server you want to register and you need an Azure account that is a member of the Owner or Contributor management role for the storage sync service in Azure. You can delegate these roles to an Azure AD account under Access Control (IAM) on the Storage Sync Service properties page in the Azure console. During the registration process, you must specify the Azure subscription, resource group, and storage sync service that will be used with the server endpoint.

Registration uses your Azure credentials to create a trust relationship between the storage sync service and the Windows Server computer. The Windows Server instance then creates an identity, separate from the user account used to perform the registration, that functions as long as the server remains registered and the current shared access signature (SAS) token associated with the storage account remains valid.
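After the agent is installed, registration can also be performed from PowerShell on the file server itself. This is a sketch assuming the service names used earlier; the account you sign in with must hold the Owner or Contributor role described above:

```powershell
# Run on the file server after installing the Azure File Sync agent.
# Sign in with an account that has Owner or Contributor rights on
# the Storage Sync Service, then register this server with it.
Connect-AzAccount
Register-AzStorageSyncServer -ResourceGroupName "RG-FileSync" `
    -StorageSyncServiceName "ContosoSyncService"
```

Once registered, the server appears under Registered servers in the Storage Sync Service blade and can be added to sync groups as a server endpoint.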

Create server endpoints

An Azure File Sync endpoint consists of a server and a path that are enrolled in an Azure File Sync service. A Windows Server can host multiple endpoints, each of which has a different path, as long as those endpoints are in different sync groups associated with the same sync service. An Azure File Sync endpoint functions as the folder structure that underlies a normal file share. Administrators create a traditional shared folder and point it at the path that the Azure File Sync endpoint replicates to. You can also point a Distributed File System (DFS) namespace at this path, replacing Distributed File System Replication (DFSR) with Azure File Sync replication while still keeping the navigational advantages of the DFS way of identifying shared folders.

Create a server endpoint by adding the server that you registered with the storage sync service and specifying the local path to the files that you want to replicate using Azure File Sync. When creating the server endpoint, you also specify the cloud tiering settings in terms of how much free space should always be available on the local volume that hosts the files and how many days after a file was last accessed should pass before the file is tiered. After the endpoint is created, any files in the path specified will be replicated up to the Azure File Share that functions as the cloud endpoint. If you create an endpoint that points to the system volume of the registered server, you cannot enable cloud tiering on that endpoint.
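The endpoint creation and tiering settings just described can be sketched as follows. The server friendly name, endpoint name, local path, and tiering values are illustrative assumptions:

```powershell
# Look up the registered server by its friendly name (assumed "FS1").
$server = Get-AzStorageSyncServer -ResourceGroupName "RG-FileSync" `
    -StorageSyncServiceName "ContosoSyncService" |
    Where-Object FriendlyName -eq "FS1"

# Create the server endpoint with cloud tiering enabled:
# keep at least 20% of the volume free, and tier files not
# accessed in the last 30 days. Note tiering cannot be enabled
# for a path on the system volume.
New-AzStorageSyncServerEndpoint -ResourceGroupName "RG-FileSync" `
    -StorageSyncServiceName "ContosoSyncService" `
    -SyncGroupName "DocsSyncGroup" `
    -Name "fs1-docs" `
    -ServerResourceId $server.ResourceId `
    -ServerLocalPath "D:\Shares\Docs" `
    -CloudTiering `
    -VolumeFreeSpacePercent 20 `
    -TierFilesOlderThanDays 30
```

After this completes, any files already present under the local path begin replicating up to the cloud endpoint.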

File shares that serve as front ends for Azure File Sync endpoints should have the same share permissions. If you are using Azure File Sync with a failover cluster, ensure that the agent is installed on each node in the cluster and that each node in the cluster is registered to the same storage sync service.

Need More Review? Deploy Azure File Sync

You can learn more about deploying Azure File Sync at https://docs.microsoft.com/en-us/azure/storage/file-sync/file-sync-deployment-guide.

Configure cloud tiering

Azure File Sync uses a process called cloud tiering to ensure that there is capacity on the volume that hosts the share. Cloud tiering means that you don't need to worry about constantly freeing up space for new files. You can configure Azure File Sync on a per-file share basis to tier files based on when the file was last accessed, how much free space there is on the volume that hosts the share, or both. For example, you might configure Azure File Sync so that any file that hasn't been accessed in 14 days on a particular share is automatically tiered to Azure. You could also specify that the least recently accessed files be automatically tiered to Azure in the event that the volume has only 30 percent free space remaining. If you configure both a policy that tiers files exceeding a certain age and a requirement that a certain amount of space remain available on the volume, Azure File Sync meets the free-space requirement by tiering the least recently accessed files until it is satisfied.

From the users’ perspective, a tiered file still appears as though it’s on the file server that they are accessing. If users try to open the file, it syncs down from the back-end Azure File Share to the Azure File Sync file share endpoint and then opens normally. Cloud tiering can be configured on a per-server endpoint basis in the Azure console.
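Tiering policy can be adjusted after endpoint creation. A sketch, reusing the illustrative names from earlier and the 14-day/30-percent example values from the text:

```powershell
# Update cloud tiering on an existing server endpoint: keep 30%
# of the volume free and tier files not accessed in 14 days.
Set-AzStorageSyncServerEndpoint -ResourceGroupName "RG-FileSync" `
    -StorageSyncServiceName "ContosoSyncService" `
    -SyncGroupName "DocsSyncGroup" `
    -Name "fs1-docs" `
    -CloudTiering `
    -VolumeFreeSpacePercent 30 `
    -TierFilesOlderThanDays 14
```

The same settings are exposed in the Azure portal on the server endpoint's properties.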

Monitor File Sync

You can monitor Azure File Sync using Azure Monitor, the Storage Sync Service, and Windows Server. Azure Monitor provides the following data:

  • Bytes synced

  • Cloud tiering cache hit rate

  • Cloud tiering recall size

  • Cloud tiering recall size by application

  • Cloud tiering recall success rate

  • Cloud tiering recall throughput

  • Files not syncing

  • Files synced

  • Server cache size

  • Server online status

  • Sync session results

The Storage Sync Service in the Azure portal provides you with the following data:

  • Registered server health

  • Server endpoint health

    • Files not syncing

    • Sync activity

    • Cloud tiering efficiency

    • Files not tiering

    • Recall errors

  • Metrics

You can also view the Telemetry event log in Event Viewer on a server endpoint under Applications and Services Logs\Microsoft\FileSync\Agent to view sync health information. Azure File Sync performance counters are also available in Performance Monitor, allowing you to view bandwidth utilization and the performance of the Azure File Sync agent.
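The Telemetry log can also be queried from PowerShell on the server endpoint. A sketch; the event ID filter assumes event 9102, which the agent documentation describes as the completed-sync-session event:

```powershell
# Inspect recent completed sync sessions in the agent's Telemetry log.
# Event ID 9102 records the result of each sync session (assumption:
# this ID matches your installed agent version).
Get-WinEvent -LogName "Microsoft-FileSync-Agent/Telemetry" -MaxEvents 50 |
    Where-Object Id -eq 9102 |
    Select-Object -First 5 |
    Format-List TimeCreated, Message
```

A non-zero HResult in the message indicates a failed session worth investigating alongside the portal's server endpoint health view.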

Need More Review? Monitor Azure File Sync

You can learn more about monitoring Azure File Sync at https://docs.microsoft.com/en-us/azure/storage/file-sync/file-sync-monitoring.

Migrate DFS to Azure File Sync

When you migrate DFS to Azure File Sync, you replace the older DFS file replication technology with Azure File Sync. You’ll learn about both these technologies in more detail later in the chapter. The technology that is relevant to Azure File Sync is the namespace technology.

When you use DFS Namespaces, you can configure a single UNC path to map to multiple SMB shares. For example, \\adatum\shares\hovercraft can map to \\Adelaide-fs01\hovercraft as well as \\Melbourne-fs01\hovercraft. When a client attempts to access the DFS UNC path, DFS directs the client to the closest SMB endpoint. When used with DFS replication, this means a user attempting to access a share connects to the closest replica. When you use DFS Namespaces with Azure File Sync, a client that navigates to the DFS address is directed to the closest Azure File Sync endpoint.

To migrate from DFS replication to Azure File Sync replication, perform the following steps:

  1. Create a new sync group that will be used as the substitute for the DFS replication topology you are replacing.

  2. Start on the server that has the full set of data in your DFS replication topology to migrate. Install Azure File Sync on that server.

  3. Register that server and create a server endpoint for the first server to be migrated. Do not enable cloud tiering.

  4. Let all the data on that server sync to your Azure File Share cloud endpoint.

  5. Install and register the Azure File Sync agent on each of the remaining servers that host DFS replicas.

  6. Disable DFS replication on each server.

  7. Create an Azure File Sync server endpoint on each of the previous servers that participated in DFS replication. Do not enable cloud tiering.

  8. Ensure that the sync process completes and test your topology.

  9. Retire DFS-R.

  10. Enable cloud tiering on any server endpoint as desired.
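Step 6 above, disabling DFS replication on each server, can be sketched with the DFSR PowerShell module. The replication group, folder, and computer names are illustrative assumptions:

```powershell
# Disable the DFS-R membership for a server being migrated to
# Azure File Sync (repeat for each server in the topology).
Set-DfsrMembership -GroupName "DocsReplicationGroup" `
    -FolderName "Docs" `
    -ComputerName "FS1" `
    -DisableMembership $true
```

Disabling the membership rather than immediately removing the replication group lets you back out of the migration until step 9, when DFS-R is retired entirely.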

Need More Review? Using DFS Namespaces With Azure Files

You can learn more about using DFS Namespaces with Azure files at https://docs.microsoft.com/en-us/azure/storage/files/files-manage-namespaces.

Exam Tip

Remember what steps you need to take to configure a new server to sync with an Azure storage account’s file share.

Skill 5.2: Configure and manage Windows Server File Shares

Traditionally, the most common use for Windows Servers has been as a file server. No matter how advanced technology gets, people who work in an organization need a way to share files with one another that is less chaotic than emailing them or handing them over on USB drives. Even with cloud storage options such as Teams, SharePoint Online, and OneDrive, many organizations still make use of the humble file server as a way of storing and sharing documents.

Configure Windows Server File Share access

The basic idea with shared folders is that you create a folder and assign permissions to a group, such as a department within your organization, and the people in that group use that space on the file server to store files to be shared with the group.

For example, you create a shared folder named Managers on a file server named FS1. Next, you set share permissions on the shared folder and file system permissions on the files and folders within the shared folder. Permissions allow you to control who can access the shared folder and what users can do with that access. For example, permissions determine whether they are limited to just read-only access or whether they can create and edit new and existing files.

When you set share permissions and file system permissions for a shared folder, the most restrictive permissions apply. For example, if you configure the share permission so that the Everyone group has Full Control and then configure the file permission so that the Domain Users group has Read Access, a user who is a member of the Domain Users group accessing the file over the network has Read Access.

Things get a little more complicated when a user is a member of multiple groups; in this case, the permissions are cumulative. For example, in a file where the Domain Users group has Read Access, but the Managers group has Full Control, a user who is a member of both Domain Users and Managers who accesses the file over the network has the Full Control permission. This is great for a certification exam question, but it can be needlessly complex when you're trying to untangle permissions to resolve a service desk ticket.

You can create shared folders in a variety of ways. The way that many administrators do it, often out of habit, is by using the built-in functionality of File Explorer. If you are using File Explorer to share folders, you have two general options when it comes to permissions:

  • Simple Share Permissions. When you use the Simple Share Permissions option, you specify whether a user or group account has Read or Read/Write permissions to a shared folder. When you use Simple Share Permissions, both the share permissions and the file and folder level permissions are set at the same time. It is important to note that any files and folders in the shared folder path have their permissions reset to match those configured through the File Sharing dialog box; this does not happen with the other methods of configuring share permissions.

  • Advanced Share Permissions. Advanced Share Permissions are what administrators who have been managing Windows file servers since the days of Windows NT 4 are likely to be more familiar with. With Advanced Permissions, you configure share permissions separately from file system permissions. You configure Advanced Share Permissions through the Advanced Sharing button on the Sharing tab of a folder’s Properties dialog box. When you configure Advanced Sharing Permissions, permissions are only set on the share and are not reset on the files and folders within the share. If you are using Advanced Sharing Permissions, you set the file system permissions separately.

You can manage shares centrally through the Shares area of the Server Manager console. An advantage of the Server Manager console is that you can use it to connect to manage shares on remote servers, including servers running the Server Core installation option. When you edit the properties of a share through Server Manager, you can also edit share permissions. This functions in the same way as editing Advanced Share Permissions through File Explorer in that it won’t reset the permissions on the file system itself; permissions are only reset on the share.

You can also use the Server Manager’s share properties interface when File Server Resource Manager is installed to edit the following settings:

  • Enable Access-Based Enumeration. Enabled by default, this setting ensures that users can only see files and folders to which they have access.

  • Allow Caching of Share. Allows files to be used offline. An additional setting, Enable BranchCache on the file share, allows use of BranchCache if the appropriate group policies are applied. You’ll learn more about BranchCache later in this chapter. You can configure this option when the Settings tab is selected.

  • Encrypt Data Access. When you enable this option, traffic to and from the shared folder is encrypted if the client supports SMB 3.0 (Windows 8 and later). You can configure this option when the Settings tab is selected.

  • Folder Owner Email. Setting the folder owner’s email addresses can be useful when resolving access-denied assistance requests. You can configure this when the Permission tab is selected.

  • Folder Usage. Folder-usage properties allow you to apply metadata to the folder that specifies the nature of the files stored there. You can choose between User Files, Group Files, Application Files, and Backup and Archival Files. You can use folder usage properties with data classification rules.

Windows Admin Center (WAC) also provides basic file share configuration functionality, though this is not currently as sophisticated as what can be accomplished through Server Manager or File Explorer.
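The same share configuration can be scripted with the SmbShare PowerShell module, which is what Server Manager and WAC drive under the hood. A sketch using the Managers example from above; the domain, path, and group names are illustrative:

```powershell
# Create the shared folder and grant the Managers group full access
# at the share level (NTFS permissions are configured separately,
# as with Advanced Share Permissions).
New-SmbShare -Name "Managers" -Path "D:\Shares\Managers" `
    -FullAccess "ADATUM\Managers" `
    -FolderEnumerationMode AccessBased `
    -CachingMode None

# Require SMB 3.0 encryption for traffic to and from this share.
Set-SmbShare -Name "Managers" -EncryptData $true -Force
```

`-FolderEnumerationMode AccessBased` corresponds to the access-based enumeration setting, and `-CachingMode` to the offline files (caching) setting described above.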

Configure file screens

File screens allow you to block users from writing files to file shares based on file name extension. For example, you can use a file screen to stop people from storing video or audio files on file shares. File screens are implemented based on file name. Usually, this just means file screens are implemented by file extension, but you can configure file screens based on a pattern match of any part of a file name. You implement file screens using file groups and file screen templates.

A file screen doesn’t block files that are already there; file screens just stop new files from being written to the share. File screens also only work based on file name. If you have users who are especially cunning, they might figure out that it’s possible to get around a file screen by renaming files, so they don’t get blocked by the screen.

File groups

A file group is a pre-configured collection of file extensions related to a specific type of file. For example, the Image Files file group includes file name extensions related to image files, such as .jpg, .png, and .gif. While file groups are usually fairly comprehensive in their coverage, they aren’t always complete. Should you need to, you can modify the list to add new file extensions.

The file groups included with FSRM include the following:

  • Audio And Video Files Blocks file extensions related to audio and video files, such as .avi and .mp3

  • Backup Files Blocks file extensions related to backups, including .bak and .old files

  • Compressed Files Blocks file extensions related to compressed files, such as .zip and .cab

  • E-mail Files Blocks file extensions related to email storage, including .pst and .mbx files

  • Executable Files Blocks file extensions related to executable files and scripts, such as .exe or .ps1 extensions

  • Image Files Blocks file extensions related to images, such as .jpg or .png extensions

  • Office Files Blocks file extensions related to Microsoft Office files, such as .docx and .pptx files

  • System Files Blocks file extensions related to system files, including .dll and .sys files

  • Temporary Files Blocks file extensions related to temporary files, such as .tmp. Also blocks files starting with the ~ character

  • Text Files Blocks file extensions related to text files, including .txt and .asc files

  • Web Page Files Blocks file extensions related to web page files, including .html and .htm files

To edit the list of files in a file group, right-select the file group and select Edit File Group Properties. Using the dialog box, you can modify the list of files to include and exclude files based on file name patterns. For example, you can create a simple exclusion or inclusion based on the file name suffix, such as *.bak. You also have the option of creating a more complex exclusion or inclusion based on the file name, such as backup*.*, which matches all files whose names start with the word backup, regardless of extension.

Exclusions allow you to add exceptions to an existing block rule. For example, you could configure a file screen to block all files that have the extension .vhdx. You might then create an exception for the name server2022.vhdx. When implemented, all files with the .vhdx extension would be blocked from being written to the share, except for files with the name server2022.vhdx.

While the NTFS and ReFS file systems are case sensitive, file screens are not case sensitive.

To create a new file group, right-select the File Groups node in the File Server Resource Manager console and select Create File Group. Provide the following information:

  • File Group Name The name for the file group.

  • Files To Include Provide patterns that match the names of files you want to block from being written to the file server.

  • Files To Exclude Provide patterns that match the names of files you want to exclude from the block.
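The same file group can be created from PowerShell with the FileServerResourceManager module. A sketch using the .vhdx example from above; the group name is an illustrative assumption:

```powershell
# Create a file group that blocks virtual disk files but exempts
# one specific file name, mirroring the exclusion example above.
New-FsrmFileGroup -Name "Virtual Disk Files" `
    -IncludePattern @("*.vhd", "*.vhdx") `
    -ExcludePattern @("server2022.vhdx")
```

The group can then be referenced by name from any file screen template.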

File screen templates

File screen templates are made up of a screening type, a collection of file groups, and a set of actions to perform when a match is found. File screen templates support the following screening types:

  • Active Screening An active screen blocks users from writing files to the file share that have names that match those patterns listed in the file group.

  • Passive Screening A passive screen doesn’t block users from writing files to the file share that have names that match patterns listed in the file group. Instead, you use a passive screen to monitor such activity.

The actions you can configure include sending an email, writing a message to the event log, running a command, or generating a report.

After you have configured the appropriate file screen template, create the file screen by applying the template to a specific path. You can also create file screen exceptions, which exempt specific folders from an existing file screen. For example, you might apply a file screen at the root of a shared folder that blocks audio and video files from being written to the share. If you wanted to allow users to write audio and video files to one folder in the share, you could configure a file screen exception and apply it to that folder.
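The template, screen, and exception workflow can be sketched as follows. Paths and the template name are illustrative assumptions; the file group name matches the built-in Audio And Video Files group listed earlier:

```powershell
# Action: write a warning to the event log when the screen triggers.
$notify = New-FsrmAction -Type Event -EventType Warning `
    -Body "User [Source Io Owner] attempted to save [Source File Path]."

# Active screening template that blocks audio and video files.
New-FsrmFileScreenTemplate -Name "Block Media" `
    -IncludeGroup @("Audio and Video Files") `
    -Active `
    -Notification $notify

# Apply the template to the share's root path.
New-FsrmFileScreen -Path "D:\Shares\Docs" -Template "Block Media"

# Exempt one subfolder so media files can still be written there.
New-FsrmFileScreenException -Path "D:\Shares\Docs\Training" `
    -IncludeGroup @("Audio and Video Files")
```

Omitting the `-Active` switch would create a passive screen that only monitors and notifies rather than blocks.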

Configure File Server Resource Manager quotas

Quotas are important. If you don’t use them, file shares tend to end up consuming all available storage unless you have a solution such as Azure File Sync deployed. Some users dump as much as possible onto a file share unless quotas are in place and unless you are monitoring storage; the first you’ll hear about it is when the service desk gets calls about people being unable to add new files to the file share.

NTFS has had rudimentary quota functionality since the Windows NT days. The reason that most Windows Server administrators don’t bother with quota functionality is that it applies at the volume level and can’t be applied to individual user accounts. Needless to say, if you have 500 users for whom you want to configure quotas, you don’t want to have to individually configure a quota for each one. Even with command-line utilities, you still need to create an entry for each user.

File Server Resource Manager (FSRM) provides far more substantial quota functionality that makes quotas more practical to implement as a way of managing storage utilization on Windows Server file servers. Quotas in FSRM can be applied on a per-folder basis and are not cumulative across a volume. You can also configure quotas in FSRM so that users are sent warning emails if they exceed a specific quota threshold but before they are blocked from writing files to the file server. You manage quotas using FSRM by creating a quota template and then applying that quota template to a path.

Creating a quota template involves setting a limit, specifying a quota type, and then configuring notification thresholds. You can choose between the following quota types:

  • Hard Quota. Do Not Allow Users To Exceed Limit A hard quota blocks users from writing data to the file share after the quota value is exceeded.

  • Soft Quota. Allow Users To Exceed Limit (Use For Monitoring) A soft quota allows you to monitor when users exceed a specific storage utilization value, but it doesn’t block users from writing data to the file share after the quota value is exceeded.

Notification thresholds allow you to configure actions to be taken after a certain percentage of the assigned quotas are reached. You can configure notifications via email, get an item written to an event log, run a command, or have a report generated.

After you've created the quota template, you can apply it to a folder. To do this, select the Quotas node under Quota Management, and from the Action menu, select Create Quota. In the Create Quota dialog box, select the path to which the quota applies and the quota template you want to apply. You then choose between applying the quota to the whole path or setting up an auto-apply template. Auto-apply templates allow separate quotas to be applied to any new and existing quota path subfolders. For example, if you applied a quota to the C:\Example path using the 2 GB template, the quota would apply cumulatively to all folders in that path. If you chose an auto-apply template, a separate 2 GB quota would be configured for each new and existing folder under C:\Example.
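The template-then-apply workflow can be sketched in PowerShell. The template name and threshold values are illustrative; the path matches the C:\Example scenario above:

```powershell
# Action: email the folder owner when 85% of the quota is reached
# ([Source Io Owner Email] is an FSRM variable resolved at run time).
$warn = New-FsrmAction -Type Email `
    -MailTo "[Source Io Owner Email]" `
    -Subject "Quota warning" `
    -Body "You have used 85% of your quota."

$threshold = New-FsrmQuotaThreshold -Percentage 85 -Action $warn

# Hard quota template: users cannot exceed 2 GB.
New-FsrmQuotaTemplate -Name "2 GB Hard Limit" -Size 2GB `
    -Threshold $threshold

# Auto-apply: each existing and future subfolder gets its own 2 GB quota.
New-FsrmAutoQuota -Path "C:\Example" -Template "2 GB Hard Limit"
```

Adding the `-SoftLimit` switch to `New-FsrmQuotaTemplate` would create a soft (monitoring-only) quota instead of a hard one.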

Use additional FSRM functionality

File Server Resource Manager includes advanced functionality that you can use to manage file servers in on-premises and hybrid environments.

Storage reports

The Storage Reports functionality of FSRM allows you to generate information about the files that are being stored on a particular file server. You can use FSRM to create the following storage reports:

  • Duplicate Files This report locates multiple copies of the same file. If you’ve enabled deduplication on the volume hosting these files, these additional copies do not consume additional disk space because they are deduplicated.

  • File Screening Audit This report allows you to view which users or applications are triggering file screens—for example, which users have tried to save music or video files to a shared folder.

  • Files By File Group This report allows you to view files sorted by file group. You can view files by all file groups, or you can search for specific files—for example, a report on ZIP files stored on a shared folder.

  • Files By Owner This report allows you to view files by owner. You can search for files by all owners or run a report that provides information on files by one or more specific users.

  • Files By Property Use this report to find out about files based on a classification. For example, if you have a classification named Top_Secret, you can generate a report about all files with that classification on the file server.

  • Folders By Property Similar to Files By Property, use this report to find out about folders based on a classification.

  • Large Files This report allows you to find large files on the file server. By default, it finds files larger than 5 MB, but you can edit this threshold to any value appropriate for your organization.

  • Least Recently Accessed Files This report allows you to identify files that have not been accessed for a certain number of days. By default, this report identifies files that have not been accessed in the last 90 days, but you can configure this setting to any number that is appropriate for your organization.

  • Most Recently Accessed Files Use this report to determine which files have been accessed most recently. The default version of this report finds files that have been accessed in the last seven days.

  • Quota Usage Use this report to view how a user’s storage usage compares against the assigned quota. For example, you could run a report to determine which users have exceeded 90 percent of their quota.

You can configure storage reports to run and then to be stored locally on file servers. You also have the option of configuring storage reports to be emailed to one or more email addresses. You can generate storage reports in DHTML, HTML, XML, CSV, and text formats.
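A report definition can be created and run from PowerShell as well. A sketch for the Large Files report discussed above; the report name, path, and 100 MB threshold are illustrative assumptions:

```powershell
# Generate a one-off (interactive) Large Files report for a share,
# overriding the default 5 MB threshold with 100 MB.
New-FsrmStorageReport -Name "Large files on Docs" `
    -Namespace @("D:\Shares\Docs") `
    -ReportType LargeFiles `
    -LargeFileMinimum 100MB `
    -Interactive
```

Supplying a `-Schedule` (built with `New-FsrmScheduledTask`) instead of `-Interactive` turns this into a recurring report, and `-MailTo` delivers the output by email.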

File classification

File classification allows you to apply metadata to files based on file properties. For example, you can apply the tag Top_Secret to a file that has specific properties, such as who authored it and whether a particular string of characters appeared in the file.

The first step to take when configuring file classification is to configure classification properties. After you’ve done this, you can create a classification rule to assign the classification property to a file. You can also allow users to manually assign classification properties to a file. By specifying the values allowed, you limit which classification properties the user can assign.

You can configure the following file classification properties:

  • Yes/No Provide a Boolean value.

  • Date-Time Provide a date and time.

  • Number Provide an integer value.

  • Multiple Choice List Allow multiple values to be assigned from a list.

  • Ordered List Provide values in a specific order.

  • Single Choice Select one of a selection of options.

  • String Provide a text-based value.

  • Multi-string Assign multiple text-based values.

Classification rules allow you to assign classifications to files based on the properties of a file. You can use one of three methods to classify a file:

  • Content Classifier When you choose this method of classification, you configure a regular expression to scan the contents of a file for a specific string or text pattern. For example, you could use the content classifier to automatically assign the Top_Secret classification to any file that contained the text Project_X.

  • Folder Classifier When you choose this method of classification, all files in a particular path are assigned the designated classification.

  • Windows PowerShell Classifier When you choose this method of classification, a PowerShell script is run to determine whether a file is assigned a particular classification.

You can configure classification rules to run against specific folders. You can also choose to recheck files each time the rule is run. That way, you can change a file’s classification in the event that the properties that triggered the initial classification change. When configuring reevaluation, you can also choose to remove user-assigned classification in case there is a conflict.
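The classification workflow described above can also be scripted. The following sketch creates a Yes/No property and a content classification rule; the property name, rule name, path, and search string are examples only:

```powershell
# Requires the FSRM role service and the FileServerResourceManager module.
# Create a Yes/No classification property.
New-FsrmClassificationPropertyDefinition -Name "Top_Secret" -Type YesNo

# Create a content classification rule that sets the property to Yes for
# any file in the namespace containing the string "Project_X".
New-FsrmClassificationRule -Name "Flag Project X documents" `
    -Property "Top_Secret" -PropertyValue "Yes" `
    -Namespace @("D:\Shares\Engineering") `
    -ClassificationMechanism "Content Classifier" `
    -ContentString @("Project_X") `
    -ReevaluateProperty Overwrite
```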

File management tasks

File management tasks are automated tasks that FSRM performs on files according to a schedule. FSRM supports three types of file management tasks:

  • File Expiration This moves all files that match the conditions to a specific directory. The most common usage of a file expiration task is to move files that haven’t been accessed by anyone for a specific period, such as 365 days, to a specific directory.

  • Custom Allows you to run a specific executable against a file. You can specify which executable is to be run, any special arguments to be used when running the executable, and the service account permissions, which can be Local Service, Network Service, or Local System.

  • RMS Encryption Allows you to apply an RMS template or a set of file permissions to a file based on conditions. For example, you might want to automatically apply a specific set of file permissions to a file that has the Top_Secret classification or apply a specific RMS template to a file that has the Ultra_Secret classification.

When configuring a file management task, you also need to provide the following information:

  • Scope The path where the task is run.

  • Notification Any notification settings that you want to configure, such as sending an email, running a command, or writing an event to an event log. With file expiration, you can configure an email to be sent to each user who has files that are subject to the expiration task.

  • Report Specify whether a report is generated each time the task runs. A notification is sent to the user who owns the file, and reports are sent to administrators.

  • Schedule Specify when you want the file management task to be run.

  • Condition Specify the condition that triggers the management task.
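A file expiration task of the kind described above can be sketched in PowerShell as follows. The names and paths are examples, and the exact condition syntax is an assumption that should be verified with Get-Help New-FsrmFmjCondition:

```powershell
# Requires the FSRM role service and the FileServerResourceManager module.
# Condition: files not accessed in the last 365 days (parameter values here
# are an assumption; verify against the module documentation).
$condition = New-FsrmFmjCondition -Property "File.DateLastAccessed" `
    -Condition LessThan -Value "Date.Now" -DateOffset -365

# File expiration task that moves matching files to an example archive path.
New-FsrmFileManagementJob -Name "Expire stale files" `
    -Namespace @("D:\Shares") `
    -Action (New-FsrmFmjAction -Type Expiration -ExpirationFolder "E:\Archive") `
    -Condition @($condition) `
    -Schedule (New-FsrmScheduledTask -Time "02:00" -Weekly Saturday)
```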

Access-denied assistance

Access-denied assistance allows users to be informed why they don’t have access to a specific file. Access-denied assistance gives you the option of allowing the user to send an email message to the file owner so that the owner can, if appropriate, grant access to the file. You can configure access-denied assistance using FSRM or by configuring Group Policy. You configure access-denied assistance for a single server in FSRM by editing the FSRM options.

If you want to use access-denied assistance across all file servers in your organization, you can use Group Policy. To do so, edit the policies located in the Computer Configuration\Policies\Administrative Templates\System\Access-Denied Assistance node. This node contains the following policies:

  • Customize Message For Access Denied Errors Use this policy to specify the message users see when they are blocked from accessing a file.

  • Enable Access-Denied Assistance On Client For All File Types When enabled, access-denied assistance functions for all file types where the user is blocked from accessing the file.

Configure BranchCache

BranchCache speeds up access to files stored on shared folders that are accessed across medium- to high-latency WAN links. For example, suppose several users in a company’s Auckland, New Zealand branch office need to regularly access several files stored on a file server in the Sydney, Australia head office. The connection between the Auckland and Sydney office is low bandwidth and high latency. The files are also fairly large and need to be stored on the Sydney file server. Additionally, the Auckland branch office is too small for a Distributed File System (DFS) replica to make sense. In a scenario such as this, you would implement BranchCache.

BranchCache creates a locally cached copy of files from remote file servers that can be accessed by other computers on the local network, assuming the file hasn’t been updated at the source. In the example scenario, after one person in the Auckland office accesses the file, the next person to access the same file in the Auckland office accesses a copy that is cached locally, rather than retrieving it from the Sydney file server. The BranchCache process performs a check to verify that the cached version is up to date. If it isn’t, the updated file is retrieved and stored in the Auckland network’s BranchCache.

You add BranchCache to a file server by using the following PowerShell command:

Install-WindowsFeature FS-BranchCache

After installing BranchCache, you need to configure Group Policies that apply to file servers in your organization that allow them to support BranchCache. To do this, you need to configure the Hash Publication for BranchCache policy, located in the Computer Configuration\Policies\Administrative Templates\Network\Lanman Server node.

You have three options when configuring this policy:

  • Allow Hash Publication Only For Shared Folders On Which BranchCache is enabled This option allows you to selectively enable BranchCache.

  • Disallow Hash Publication On All Shared Folders Use this option when you want to disable BranchCache.

  • Allow Hash Publication For All Shared Folders Use this option if you want to enable BranchCache on all shared folders.

Generally, there’s rarely a great reason not to enable BranchCache on all shared folders, but should you want to be selective, you do have that option. If you choose to be selective and only enable BranchCache on some shares, you need to edit the properties of the share and enable BranchCache. You do so by selecting Caching on the Advanced Sharing page and then using the Offline Settings options.

After you’ve configured your file server to support BranchCache, you need to configure client computers to support BranchCache. You do so by configuring Group Policy in the Computer Configuration\Policies\Administrative Templates\Network\BranchCache node of a GPO. Which policies you configure depends on how you want BranchCache to work at each branch office. You can choose between the following options:

  • Distributed Cache Mode When client computers are configured for Distributed Cache mode, each Windows 7 or later computer hosts part of the cache.

  • Hosted Cache Mode When you configure Hosted Cache mode, a server at the branch office hosts the cache in its entirety. Any server running Windows Server 2008 R2 or later can function as a hosted cache mode server.

To configure a branch office server to function as a hosted cache mode server, run the following PowerShell commands:

Install-WindowsFeature BranchCache
Start-Service BranchCache
Enable-BCHostedServer
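Client computers are usually configured through the Group Policies mentioned above, but individual clients can also be configured with the BranchCache PowerShell module. This sketch shows both client modes; the hosted cache server name is an example only:

```powershell
# On a branch office client, enable BranchCache in Distributed Cache mode,
# where each client hosts part of the cache.
Enable-BCDistributed

# Alternatively, point the client at a hosted cache mode server
# (server name is an example only).
Enable-BCHostedClient -ServerNames "AKL-CACHE1"
```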

Implement and configure Distributed File System

Distributed File System (DFS) has two advantages over a traditional file share. The first is that DFS automatically replicates the file share and its content to one or more other servers. The second is that clients connect to a single UNC address and are directed to the closest server, being redirected to the next closest server in the event that a server hosting a DFS replica fails. Azure File Sync provides most of the first capability but does not provide the second.

Using DFS, you can push a single shared folder structure out across an organization that has multiple branch offices. Changes made to files on one file share replica propagate across to the other file share replicas, with a robust and built-in conflict-management system present to ensure that problems do not occur when users are editing the same file at the same time.

DFS namespace

A DFS namespace is a collection of DFS shared folders. It uses the same UNC pathname structure, except instead of \\ServerName\FileShareName, with DFS it is \\domainname, with all DFS shared folders located under this DFS root. For example, instead of

\\FS-1\Engineering
\\FS-2\Accounting
\\FS-3\Documents

you could have

\\Contoso.com\Engineering
\\Contoso.com\Accounting
\\Contoso.com\Documents

In this scenario, the Engineering, Accounting, and Documents folders could all be hosted on separate file servers and you could use a single namespace to locate those shared folders, rather than needing to know the identity of the file server that hosts them.

DFS supports the following types of namespaces:

  • Domain-Based Namespace Domain-based namespaces store configuration data in Active Directory. You deploy a domain-based namespace when you want to ensure that the namespace remains available even if one or more of the servers hosting the namespace goes offline.

  • Standalone Namespace Standalone namespaces have namespace data stored in the registry of a single server and not in Active Directory as is the case with domain-based namespaces. You can have only a single namespace server with a standalone namespace. Should the server that hosts the namespace fail, the entire namespace is unavailable even if servers that host individual folder targets remain online.

To create a DFS namespace, perform the following steps:

  1. In the DFS console, select the Namespaces node. In the Action menu, select New Namespace.

  2. On the Namespace Server page, select a server that has the DFS Namespaces feature installed. You can install this feature with the following PowerShell cmdlet:

    Install-WindowsFeature FS-DFS-Namespace
  3. On the Namespace Name and Settings page, provide a meaningful name for the namespace. This is located under the domain name. For example, if you added the name Schematics and you were installing DFS in the contoso.internal domain, the namespace would end up as \\contoso.internal\Schematics. By default, a shared folder is created on the namespace server, although you can edit settings on this page of the wizard and specify a separate location for the shared folder that hosts content you want to replicate.

  4. On the Namespace Type page, you should generally select domain-based namespace as this gives you the greatest flexibility and provides you with the option of adding additional namespace servers later on for redundancy.

To add an additional namespace server to an existing namespace, ensure that the DFS Namespace role feature is installed on the server you want to add, and then perform the following steps:

  1. In the DFS console, select the namespace to which you want to add the additional namespace server, and on the Action menu, select Add Namespace Server.

  2. On the Add Namespace Server page, specify the name of the namespace server, or browse and query Active Directory to verify that the name is correct, and then select OK. This creates a shared folder on the new namespace server with the name of the namespace.
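The same namespace configuration can be performed with the DFSN PowerShell module. This is a sketch; the namespace, server, and share names are examples, and it assumes the namespace shares already exist on the target servers:

```powershell
# Create a domain-based namespace \\contoso.internal\Schematics with a root
# target on FS-1 (DomainV2 is the Windows Server 2008-mode namespace type).
New-DfsnRoot -Path "\\contoso.internal\Schematics" `
    -TargetPath "\\FS-1\Schematics" -Type DomainV2

# Add a second namespace server for redundancy.
New-DfsnRootTarget -Path "\\contoso.internal\Schematics" `
    -TargetPath "\\FS-2\Schematics"
```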

DFS replication

A replica is a copy of a DFS folder. Replication is the process that ensures each replica is kept up to date. DFS uses block-level replication, which means that only blocks in a file that have changed are transmitted to other replicas during the replication process.

You install the DFS replication feature by running the following PowerShell command:

Install-WindowsFeature FS-DFS-Replication

In the event that the same file is being edited by different users on different replicas, DFS uses a “last writer wins” conflict-resolution model. In the unlikely event that two separate users create files with the same name in the same location on different replicas at approximately the same time, conflict resolution uses “earliest creator wins.” When conflicts occur, files and folders that “lose” the conflict are moved to the Conflict and Deleted folder, located under the local path of the replicated folder in the DfsrPrivate\ConflictandDeleted directory.

Replicated folders and targets

One of the big advantages of DFS is that you can create copies of folders across multiple servers that are automatically updated. Each copy of a replicated folder is called a folder target. Only computers that have the DFS replication role feature installed can host folder targets. A replicated folder can have multiple folder targets. For example, you might have a replicated folder named \\contoso.com\Engineering with targets configured in Sydney on \\SYD-FS1\Engineering, Melbourne on \\MEL-FS1\Engineering, and Auckland on \\AKL-FS1\Engineering.

Replication topology

A replication group is a collection of servers that host copies of a replicated folder. When configuring replication for a replication group, you choose a topology and a primary member. The topology dictates how data replicates between the folders that each server hosts. The primary member is the seed from where file and folder data is replicated.

When creating a replication group, you can specify the following topologies:

  • Hub and Spoke This topology has hub members, where data originates, and spoke members, the locations to which data is replicated. This topology requires at least three members in the replication group. Choose this option if your organizational WAN uses a hub-and-spoke topology.

  • Full Mesh In this topology, each member of the replication group can replicate with other members. This is the simplest form of replication group and is suitable when each member can directly communicate with the others.

  • No Topology When you select this option, you can create a custom topology where you specify how each member replicates with others.
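A replication group of the kind described above can be sketched in PowerShell with the DFSR module. The group, folder, server, and path names below are examples only:

```powershell
# Create a full mesh replication group with three members. The pipeline
# creates the group, the replicated folder, and adds the members.
New-DfsReplicationGroup -GroupName "Engineering" |
    New-DfsReplicatedFolder -FolderName "Engineering" |
    Add-DfsrMember -ComputerName "SYD-FS1","MEL-FS1","AKL-FS1"

# Create a connection between a pair of members (repeat for each pair
# required by the chosen topology).
Add-DfsrConnection -GroupName "Engineering" `
    -SourceComputerName "SYD-FS1" -DestinationComputerName "MEL-FS1"

# Set the local content path and designate the primary member, which seeds
# the initial file and folder data.
Set-DfsrMembership -GroupName "Engineering" -FolderName "Engineering" `
    -ComputerName "SYD-FS1" -ContentPath "D:\Engineering" -PrimaryMember $true
```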

Replication schedules

You use replication schedules to specify when replication partners communicate and whether replication traffic is throttled so that it doesn’t flood the network.

You can configure replication to occur continuously and specify bandwidth utilization, with a minimum value of 16 Kbps and an upper value of 256 Mbps, with the option of setting it to Unlimited. If necessary, you can also set different bandwidth limitations for different periods of the day.

Exam Tip

Remember how to block specific file types from being written to a file share.

Skill 5.3: Configure Windows Server Storage

This objective deals with Windows Server storage technologies, from disks and volumes to file systems, and then how to increase the performance and resiliency of storage using Storage Spaces Direct.

Configure disks and volumes

In Windows Server, a disk is a physical device, which can be traditional magnetic storage, solid-state, or persistent memory device (also termed storage class memory). There are two types of disks available to Windows Server: basic disks and dynamic disks.

Basic disks

A basic disk is a disk that contains partitions that are generally formatted with a filesystem such as NTFS. Partitions are logical regions on a disk. A volume on a basic disk is a formatted partition. A basic disk using the Master Boot Record partition style can host as many as four partitions configured either as four primary partitions or three primary partitions and an extended partition. A basic disk formatted with the GUID Partition Table (GPT) style can have up to 128 primary partitions.

  • Primary partitions Each primary partition has a single logical volume. This volume can be formatted with a filesystem such as NTFS or ReFS.

  • Extended partitions Extended partitions are not formatted or assigned drive letters. Extended partitions can be divided into logical drives, which can in turn be used as formatted volumes.

You can perform the following operations on basic disks:

  • Create and delete primary and extended partitions.

  • Create and delete logical drives within an extended partition.

  • Format a partition and set it as active.

Dynamic disks

Dynamic disks allow you to create volumes that span multiple physical disks (as spanned and striped volumes) and to create fault-tolerant volumes (such as mirrored volumes and RAID-5 volumes). Dynamic disks support MBR and GPT partition styles. Dynamic disks were more commonly used in earlier versions of Windows Server, but most of the functionality of dynamic disks has been superseded by Storage Spaces, covered in a moment. The most common use case for dynamic disks in Windows Server 2022 is for mirrored boot volumes.

Partition styles

Windows Server supports two partition styles: MBR, which has been available since the early 1980s, and GPT, which has been available since the late 1990s. In addition to supporting up to 128 primary partitions, disks that use the GPT partition type support partitions and volumes that exceed 2 terabytes.
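Initializing a disk with a partition style, partitioning it, and formatting a volume can all be done with the Storage PowerShell module. In this sketch, the disk number, drive letter, and volume label are examples only:

```powershell
# Initialize a new disk as GPT, create a partition using all available
# space, and format it as an NTFS volume.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E
Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "Data"
```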

Configure and manage storage spaces

Storage spaces and storage pools were first introduced to Windows Server with the release of Windows Server 2012. A storage pool is a collection of storage devices that you can use to aggregate storage. You expand the capacity of a storage pool by adding storage devices to the pool. A storage space is a virtual disk that you create from the free space that is available in a storage pool. Depending on how you configure it, a storage space can be resilient to failure and have improved performance through storage tiering.

Storage pools

A storage pool is a collection of storage devices, usually disks, but can also include items such as virtual hard disks, from which you can create one or more storage spaces. A storage space is a special type of virtual disk that has the following features:

  • Resilient storage Configure to use disk mirroring or parity in the structure of the underlying storage (if available). We discuss resilience in a moment.

  • Tiering Configure to leverage a combination of SSD and HDD disks to achieve maximum performance. Tiering is also discussed later in this chapter.

  • Continuous availability Storage spaces integrate with failover clustering, and you can cluster pools across separate nodes within a single failover cluster.

  • Write-back cache If a storage space includes SSDs, a write-back cache can be configured in the pool to buffer small random writes. These random writes are then later offloaded to SSDs or HDDs that make up the virtual disk in the pool.

  • Multitenancy You can configure access control lists on each storage pool. This allows you to configure isolation in multitenant scenarios.

Storage space resiliency

When creating a virtual disk on a storage pool that has enough disks, you can choose among several storage layouts. These layout options provide the following benefits:

  • Mirror Multiple copies of data are written across separate disks in the pool. This protects the virtual disk from failure of the physical disks that constitute the storage pool. Mirroring can be used with storage tiering. Depending on the number of disks in the pool, storage spaces provide two-way or three-way mirroring. Two-way mirroring writes two copies of data, and three-way mirroring writes three copies of data. Three-way mirroring provides better redundancy, but it also consumes more disk space.

  • Parity Parity data is stored on disks in the array that are separate from where data is stored. Parity provides greater capacity than using the Mirror option, but it has the drawback of slower write performance. Windows Server provides two types of parity: Single Parity and Dual Parity. Single Parity provides protection against one failure at a time. Dual Parity provides protection against two failures at a time. You need to have a minimum of three disks for Single Parity and a minimum of seven disks for Dual Parity. You get the option to select between Single Parity and Dual Parity when configuring storage layout when there are more than seven disks.

  • Simple This option provides no resiliency for the storage, which means that if one of the disks in the storage pool fails, the data on any virtual hard disks built from that pool will also be lost.

If you configure disks in the storage pool that use the Hot Spare option, storage spaces will be able to automatically repair virtual disks that use the Mirror or Parity resiliency options. It’s also possible for automatic repairs to occur if spare unallocated capacity exists within the pool.
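Creating a pool and a resilient virtual disk can be sketched in PowerShell as follows. The pool name, disk name, and size are examples only, and the subsystem name pattern assumes the default Windows Storage subsystem:

```powershell
# Gather all physical disks that are eligible for pooling and create a pool.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Example-Pool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Create a mirrored virtual disk from the pool for resiliency.
New-VirtualDisk -StoragePoolFriendlyName "Example-Pool" `
    -FriendlyName "MirrorDisk" `
    -ResiliencySettingName Mirror `
    -Size 500GB
```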

Storage space tiering

Storage space tiering allows you to create a special type of virtual disk from a pool of storage that is a combination of SSD and traditional HDD disks. Storage tiering provides the virtual disk with performance similar to that of an array built out of SSD disks, but without the cost of building a large capacity array comprised of SSD disks. It accomplishes this by moving frequently accessed files to faster physical disks within the pool, thus moving less frequently accessed files to slower storage media.

You can only configure storage tiers when creating a virtual disk, and only if the pool on which you create the disk contains a mixture of physical disks of the HDD and SSD media types. Once the disk is created, you cannot remove storage tiering from it. You configure storage tiering for a virtual disk by selecting the Create Storage Tiers On This Virtual Disk option during virtual disk creation.

One challenge when configuring storage tiering is ensuring that you have media marked as SSD and HDD in the pool. While media will usually be recognized correctly, in some cases you must specify that a disk is of the SSD type, which allows storage tiering to be configured.

You can specify the disk media type using the following PowerShell procedure:

  1. First, determine the storage pools available on the server by using the Get-StoragePool cmdlet.

  2. To view whether physical disks are configured as SSD or HDD, use the Get-StoragePool cmdlet and then pipe its output to the Get-PhysicalDisk cmdlet. For example, to view the identity and media type of physical disks in the storage pool named Example-Pool, issue the command:

    Get-StoragePool -FriendlyName Example-Pool | Get-PhysicalDisk |
    Select UniqueID, MediaType, Usage
  3. Once you have determined the UniqueIDs of the disks that you want to configure as the SSD type, you can configure a disk to have the SSD type by using the Set-PhysicalDisk cmdlet with the UniqueID parameter and the MediaType parameter set to SSD. Similarly, you can change the type back to HDD by setting the MediaType parameter to HDD.
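The command described in step 3 can be sketched as follows; the UniqueId value is an example only:

```powershell
# Mark a physical disk as SSD so that it can participate in the fast tier.
Set-PhysicalDisk -UniqueId "5000C5008F59C0FB" -MediaType SSD

# To change the disk back to the HDD type:
Set-PhysicalDisk -UniqueId "5000C5008F59C0FB" -MediaType HDD
```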

Thin provisioning and trim

Thin provisioning allows you to create virtual disks where you specify a total size for the disk, but only the space that is actually used will be allocated. For example, with thin provisioning, you might create a virtual hard disk that can grow to 500 GB in size but is only currently 10 GB in size because only 10 GB of data is currently stored on the volumes hosted on the disk.

You can view the amount of space that has been allocated to a thin-provisioned virtual disk, along with its total capacity, in the Virtual Disks area when the Storage Pools node is selected in the Server Manager console or in Windows Admin Center. When you create a virtual disk, the maximum disk size available is determined by the amount of free space on the physical disks that make up the storage pool, rather than by the capacity already promised to existing thin-provisioned disks. For example, if you have a storage pool with two 10 TB physical disks, you can create more than two thin-provisioned disks, each with a maximum size of 10 TB, as long as the space actually allocated from the pool doesn’t exceed the 20 TB available. In other words, it is possible to create thin-provisioned disks in such a way that the total thin-provisioned disk capacity exceeds the storage capacity of the underlying storage pool. If you do overallocate space, you’ll need to monitor how much of the underlying storage pool capacity is consumed and add disks to the storage pool before that capacity is exhausted.

Trim is an automatic process that reclaims space when data is deleted from thin-provisioned disks. For example, if you have a 10 TB thin-provisioned virtual disk that stores 8 TB of data, 8 TB will be allocated from the storage pool that hosts that virtual disk. If you delete 2 TB of data from that thin-provisioned virtual disk, Trim ensures that the storage pool that hosts that virtual disk will be able to reclaim that unused space. The 10 TB thin-provisioned virtual disk will appear to be 10 TB in size, but after the trim process is complete, it will only consume 6 TB of space on the underlying storage pool. Trim is enabled by default.
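Creating a thin-provisioned virtual disk differs from a fixed disk only in the provisioning type. In this sketch, the pool name, disk name, and size are examples only:

```powershell
# Create a thin-provisioned, mirrored virtual disk; space is allocated from
# the pool only as data is actually written.
New-VirtualDisk -StoragePoolFriendlyName "Example-Pool" `
    -FriendlyName "ThinDisk" `
    -ProvisioningType Thin `
    -ResiliencySettingName Mirror `
    -Size 500GB
```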

Need More Review? Thin Provisioning and Trim Storage

You can learn more about thin provisioning and trim storage at https://docs.microsoft.com/en-us/windows-hardware/drivers/storage/thin-provisioning.

Storage Spaces Direct

Storage Spaces Direct allows you to use Windows Server with locally attached storage to create highly available software-defined storage. Storage Spaces Direct (which uses the abbreviation S2D because the SSD abbreviation is already used for solid-state disks) provides a form of distributed, software-defined, shared-nothing storage that has similar characteristics to RAID in terms of performance and redundancy. S2D allows you to create volumes from a storage pool of physical drives that are attached to multiple nodes that participate in a Windows Server failover cluster. Storage Spaces Direct functions as a replacement for expensive large-scale hardware storage arrays.

Storage Spaces Direct has the following properties:

  • You can scale out by adding additional nodes to the cluster.

  • When you add a node to a cluster configured for Storage Spaces Direct, all eligible drives on the cluster node will be added to the Storage Spaces Direct pool.

  • You can have between 2 and 16 nodes in a Storage Spaces Direct failover cluster.

  • It requires each node to have at least two solid-state drives and at least four additional drives.

  • A cluster can have more than 400 drives and can support more than 4 petabytes of storage.

  • Storage Spaces Direct works with locally attached SATA, SAS, persistent memory, or NVMe drives.

  • Cache is automatically built from SSD media. All writes up to 256 KB and all reads up to 64 KB will be cached. Writes are then de-staged to HDD storage in optimal order.

  • Storage Spaces Direct volumes can be part mirror and part parity. To have a three-way mirror with dual parity, it is necessary to have four nodes in the Windows Server failover cluster that hosts Storage Spaces Direct.

  • If a disk fails, the plug-and-play replacement will automatically be added to the storage spaces pool when connected to the original cluster node.

  • A Storage Spaces Direct cluster can be configured with rack and chassis awareness as a way of further ensuring fault tolerance.

  • Storage Spaces Direct clusters are not supported where nodes span multiple sites.

  • While NTFS is supported for use with S2D clusters, ReFS is recommended.

S2D supports two deployment options:

  • Hyper-Converged With the Hyper-Converged deployment option, both storage and compute resources are deployed on the same cluster. This has the benefit of not requiring you to configure file server access and permissions and is most commonly used in small to medium-sized Hyper-V deployments.

  • Converged With the Converged (also known as disaggregated) deployment option, storage and compute resources are deployed in separate clusters. Often used with Hyper-V infrastructure-as-a-service (IaaS) deployments, a scale-out file server is deployed on S2D to provide network attached storage over SMB3 file shares. The compute resources for the IaaS virtual machines are located on a separate cluster from the S2D cluster.

Storage Spaces Direct in Windows Server supports nested resiliency. Nested resiliency is a capability designed for two-server S2D clusters that allows storage to remain available in the event of multiple hardware failures. When nested resiliency is configured, volumes can remain online and accessible even if one server goes offline and a drive fails. Nested resiliency only works when a cluster has two nodes. Nested resiliency requires a minimum of four capacity drives per server node and two cache drives per server node.
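On a failover cluster that has passed validation, you enable S2D and then create volumes from the automatically created pool. This is a sketch; the volume name, size, and pool name pattern are examples, and it assumes the default S2D pool naming:

```powershell
# Enable Storage Spaces Direct; eligible local drives on all cluster nodes
# are claimed into a single storage pool automatically.
Enable-ClusterStorageSpacesDirect

# Create a mirrored, ReFS-formatted cluster shared volume from the S2D pool.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 1TB
```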

S2D resiliency types

S2D resiliency options are dependent on how many fault domains are present, the failure tolerance required, and the storage efficiency that can be achieved. A fault domain is a collection of hardware, such as a rack of servers, where a single failure can affect every component in that collection.

Table 5-1 lists the different resiliency types, failure tolerances, storage efficiencies, and minimum fault domains.

Table 5-1 S2D resiliency

Resiliency          Failure tolerance   Storage efficiency   Minimum fault domains
Two-way mirror      1                   50.0%                2
Three-way mirror    2                   33.3%                3
Dual parity         2                   50.0%-80.0%          4
Mixed               2                   33.3%-80.0%          4

Adding S2D cluster nodes

Prior to adding a server to an existing S2D cluster, run the Test-Cluster validation cmdlet, including both the existing nodes and the node you want to add. For example, run this command to validate adding the cluster node S2DN3 to a cluster that includes nodes S2DN1 and S2DN2:

Test-Cluster -Name TestS2DCluster -Node S2DN1,S2DN2,S2DN3 -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

Once the validation has completed, run the Add-ClusterNode cmdlet on one of the existing cluster nodes and specify the new cluster node name.

S2D cluster node maintenance

Before performing maintenance on a cluster node, you should pause and drain the node. You can do this with the Suspend-ClusterNode cmdlet and the -Drain parameter. Once the node is drained, which also puts it in a paused state, you can either shut the node down or perform other maintenance operations, such as restarting the node. When you have completed maintenance on the node, you can return it to operation by using the Resume-ClusterNode cmdlet.
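The maintenance sequence described above can be sketched as follows; the node name is an example only:

```powershell
# Drain roles from the node and pause it before maintenance.
Suspend-ClusterNode -Name "S2DN2" -Drain

# ...perform maintenance, then return the node to service and fail roles
# back to it.
Resume-ClusterNode -Name "S2DN2" -Failback Immediate
```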

Configure and manage Storage Replica

Storage Replica allows you to replicate volumes between servers, including clusters, for the purposes of disaster recovery. You can also use Storage Replica to configure asynchronous replication to provision failover clusters that span two geographically disparate sites, while all nodes remain synchronized.

You can configure Storage Replica to support the following types of replication:

  • Synchronous Replication Use this when you want to mirror data and you have very low latency between the source and the destination. This allows you to create crash-consistent volumes. Synchronous replication ensures zero data loss at the filesystem level should a failure occur.

  • Asynchronous Replication Asynchronous Storage Replica is suitable when you want to replicate storage across sites where you are experiencing higher latencies.

In the standard edition of Windows Server, Storage Replica is limited to a single replicated volume of up to 2 TB. These limits do not apply to the datacenter edition of the operating system. Participant servers must be members of the same Active Directory Domain Services forest.

Storage Replica operates at the partition layer. This means that it replicates all VSS snapshots created by the Windows Server operating system or by backup software that leverages VSS snapshot functionality.

Storage Replica has the following features:

  • Zero data loss, block-level replication When used with synchronous replication, there is no data loss. By leveraging block-level replication, even files that are locked will be replicated at the block level.

  • Guest and host Storage Replica works when Windows Server is a virtualized guest or when it functions as a host operating system. It is possible to replicate from third-party virtualization solutions to IaaS virtual machines hosted in the public cloud as long as Windows Server functions as the source and target operating system.

  • Supports manual failover with zero data loss You can perform manual failover when both the source and destination are online, or you can have failover occur automatically if the source storage fails.

  • Leverage SMB3 This allows Storage Replica to use multichannel, SMB Direct support on RoCE, iWARP, and InfiniBand RDMA network cards.

  • Encryption and authentication support Storage Replica supports packet signing, AES-128-GCM full data encryption, support for Intel AES-NI encryption acceleration, and Kerberos AES 256 authentication.

  • Initial seeding You can perform initial seeding by transferring data using a method other than Storage Replica between source and destination. This is especially useful when transferring large amounts of data between disparate sites where it may make more sense to use a courier to transport a high-capacity hard disk drive than it does to transmit data across a WAN link. The initial replication will then copy only blocks that have been changed since the replica data was exported from source to destination.

  • Consistency groups Consistency groups implement write-ordering guarantees. This ensures that applications such as Microsoft SQL Server, which may write data to multiple replicated volumes, have their writes replicated in the same order, keeping the replicated data consistent.

Supported configurations

Storage Replica is supported in the following configurations:

  • Server-to-server In this configuration, Storage Replica supports both synchronous and asynchronous replication between two standalone servers. Local drives, storage spaces with shared SAS storage, SAN, and iSCSI-attached LUNs can be replicated. You can manage this configuration using either Server Manager or PowerShell. Failover can only be performed manually.

  • Cluster-to-cluster In this configuration, replication occurs between two separate clusters. The first cluster might use Storage Spaces Direct, storage spaces with shared SAS storage, SAN, and iSCSI-attached LUNs. You manage this configuration using PowerShell and Azure Site Recovery. Failover must be performed manually.

  • Stretch cluster A single cluster where nodes are located in geographically disparate sites. Some nodes share one set of asymmetric storage and other nodes share another set of asymmetric storage. Storage is replicated either synchronously or asynchronously, depending on bandwidth considerations. This scenario supports storage spaces with shared SAS storage, SAN, and iSCSI-attached LUNs. You manage this configuration using PowerShell and the Failover Cluster Manager GUI tool. This scenario allows for automated failover.

The following configurations are not supported on Windows Server, though they may be supported at some point in the future:

  • Storage Replica only supports one-to-one replication in Windows Server. You cannot configure Storage Replica to support one-to-many replication or transitive replication. Transitive replication is where there is a replica of the replica server.

  • Storage Replica on Windows Server 2016 does not support bringing a replicated volume online for read-only access. In Windows Server 2019 and Windows Server 2022, you can perform a test failover and temporarily mount a snapshot of the replicated storage on an unused NTFS- or ReFS-formatted volume.

  • Deploying scale-out file servers on stretch clusters participating in Storage Replica is not a supported configuration.

  • Deploying Storage Spaces Direct in a stretch cluster with Storage Replica is not supported.

Storage Replica requirements

Ensure that each server meets the following requirements:

  • The servers have two volumes, one volume hosting the data that you want to replicate, the other hosting the replication logs.

  • Ensure that both log and data disks are formatted as GPT and not MBR.

  • The data volumes on the source and destination servers must be the same size and use the same sector sizes.

  • The log volumes on the source and destination servers should be the same size and use the same sector sizes. The log volume should be a minimum of 9 GB in size.

  • Ensure that the data volume does not contain the system volume, page file, or dump files.

Use the Test-SRTopology cmdlet to verify that all Storage Replica requirements have been met. To do this, perform the following steps:

  1. Create a temp directory on the source server that will store the Storage Replica Topology Report.

  2. Make a note of the drive letters of the source storage and log volumes and the destination storage and log volumes.

  3. Run the Test-SRTopology cmdlet, specifying the source and destination computer names, the data and log volumes, a test duration, and a result path pointing to the temp directory.

When the test completes, view the TestSrTopologyReport.html file to verify that your configuration meets Storage Replica requirements. Once you have verified that the configuration does meet requirements, you can use the New-SRPartnership cmdlet to create a Storage Replica partnership.
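A sketch of the two commands, assuming hypothetical servers SRV1 and SRV2 with data on volume D: and logs on volume E::

```powershell
# Validate the topology; writes TestSrTopologyReport.html to C:\Temp
Test-SRTopology -SourceComputerName SRV1 -SourceVolumeName D: -SourceLogVolumeName E: `
  -DestinationComputerName SRV2 -DestinationVolumeName D: -DestinationLogVolumeName E: `
  -DurationInMinutes 30 -ResultPath C:\Temp

# Once requirements are verified, create the partnership
# (RG01 and RG02 are illustrative replication group names)
New-SRPartnership -SourceComputerName SRV1 -SourceRGName RG01 -SourceVolumeName D: `
  -SourceLogVolumeName E: -DestinationComputerName SRV2 -DestinationRGName RG02 `
  -DestinationVolumeName D: -DestinationLogVolumeName E:
```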

You can check the status of Storage Replica replication by running the Get-SRPartnership and Get-SRGroup cmdlets. Once Storage Replica is active, you won't be able to access the replica storage device on the destination computer unless you reverse replication or remove replication. When you reverse the direction of replication with the Set-SRPartnership cmdlet, you will receive a warning that data loss may occur, and you will be asked whether you want to complete the operation.

By default, all replication when Storage Replica is configured is synchronous. You can switch between synchronous and asynchronous replication by using the Set-SRPartnership cmdlet with the ReplicationMode parameter.
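For example, assuming the same hypothetical servers and replication group names used earlier, switching an existing partnership to asynchronous replication might look like this:

```powershell
# Switch an existing partnership to asynchronous replication
Set-SRPartnership -SourceComputerName SRV1 -SourceRGName RG01 `
  -DestinationComputerName SRV2 -DestinationRGName RG02 -ReplicationMode Asynchronous
```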

Storage Replica uses the following ports:

  • 445 Used for SMB, which is the replication transport protocol.

  • 5985 Used for WSManHTTP, which is the management protocol for WMI/CIM/PowerShell.

  • 5445 Used for SMB with iWARP. This port is only required if using iWARP RDMA networking.

Need More Review? Storage Replica

You can learn more about Storage Replica at https://docs.microsoft.com/en-us/windows-server/storage/storage-replica/storage-replica-overview.

Configure data deduplication

Deduplication works by analyzing files, locating the unique chunks of data that make up those files, and only storing one copy of each unique data chunk on the volume. (A chunk is a collection of storage blocks.) Deduplication can reduce the amount of storage consumed on a volume because, in practice, a substantial number of the data chunks stored on a volume are identical. Rather than storing multiple copies of the same chunk, deduplication stores a single copy, with placeholders in other locations pointing at that copy. Windows Server supports deduplication on both NTFS- and ReFS-formatted volumes. Before you can enable deduplication, you need to install the Data Deduplication role service.
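As a sketch, installing the role service and enabling deduplication on a volume (the drive letter is illustrative) can be done with two commands:

```powershell
# Install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on a volume with the general-purpose file server usage type
Enable-DedupVolume -Volume "E:" -UsageType Default
```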

When you configure deduplication, you choose from one of the following usage types:

  • General-Purpose File Server Appropriate for general-purpose file servers, optimization is performed in the background on any file that is older than three days. Files that are in use and partial files are not optimized.

  • Virtual Desktop Infrastructure (VDI) Server Appropriate for VDI servers, optimization is performed in the background on any file that is older than three days, but files that are in use and partial files will also be optimized.

  • Virtualized Backup Server This usage type is suitable for backup applications such as System Center Data Protection Manager or Azure Backup Server. It performs priority optimization on files of any age and will optimize in-use files, but it will not optimize partial files.

When configuring deduplication settings, you can configure files to be excluded on the basis of file extension, or you can exclude entire folders from data deduplication. Deduplication involves running a series of jobs outlined in Table 5-2.

Table 5-2 Deduplication jobs

  • Optimization Deduplicates and optimizes the volume. Runs once per hour.

  • Garbage collection Reclaims disk space by removing unnecessary chunks. You may want to run this job manually after deleting a substantial amount of data to reclaim space, by using the Start-DedupJob cmdlet with the Type parameter set to GarbageCollection. Runs every Saturday at 2:35 a.m.

  • Integrity scrubbing Identifies corruption in the chunk store and uses volume features where possible to repair and reconstruct corrupted data. Runs every Saturday at 3:35 a.m.

  • Unoptimization A special job that you run manually when you want to disable deduplication for a volume.
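For example, a manual garbage collection run after a large cleanup (the drive letter is illustrative) would look like this:

```powershell
# Manually reclaim space after deleting a large amount of data
Start-DedupJob -Volume "E:" -Type GarbageCollection
```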

Need More Review? Deduplication

You can learn more about deduplication at https://docs.microsoft.com/en-us/windows-server/storage/data-deduplication/overview.

Configure SMB Direct

SMB Direct allows you to use network adapters that have RDMA to improve the performance of file servers. RDMA allows the network adapter to mediate the transfer of large amounts of data rather than having that process managed by the computer's CPU. Windows Server 2022 and Windows 11 support SMB encryption with SMB Direct; in previous versions of Windows Server and Windows client, enabling SMB encryption disabled the direct data placement that RDMA provides, reducing throughput. SMB Direct is enabled by default on Windows Server 2012 and later operating systems with compatible RDMA network adapters. Disabling SMB Multichannel functionality will also disable SMB Direct.

Need More Review? SMB Direct

You can learn more about SMB Direct at https://docs.microsoft.com/en-us/windows-server/storage/file-server/smb-direct.

Configure Storage QoS

Storage Quality of Service (QoS) allows you to centrally manage and monitor the performance of storage used for virtual machines that leverage the Scale-Out File Server and Hyper-V roles. The Storage QoS feature will automatically ensure that access to storage resources is distributed equitably between virtual machines that use the same file server cluster. It allows you to configure minimum and maximum performance settings as policies in units of IOPS.

Storage QoS allows you to accomplish the following goals:

  • Reduce the impact of noisy neighbor VMs A noisy neighbor VM is a virtual machine that is consuming a disproportionate amount of storage resources. Storage QoS allows you to limit the extent to which such a VM can consume storage bandwidth.

  • Monitor storage performance When you deploy a virtual machine to a scale-out file server, you can review storage performance from a central location.

  • Allocate minimum and maximum available resources Through Storage QoS policies, you can specify minimum and maximum resources available. This allows you to ensure that each VM has the minimum storage performance it requires to run reserved for it.

There are two types of Storage QoS policy:

  • Aggregated An aggregated policy applies maximum and minimum values for a combined set of virtual hard disk files and virtual machines. For example, by creating an aggregated policy with a minimum of 150 IOPS and a maximum of 250 IOPS and applying it to three virtual hard disk files, you can ensure that the three virtual hard disk files will have a minimum of 150 IOPS between them when the system is under load and will consume a maximum of 250 IOPS between them when the virtual machines associated with those hard disks are heavily using storage.

  • Dedicated A dedicated policy applies a minimum and a maximum value to each individual virtual hard disk. For example, if you apply a dedicated policy to each of three virtual hard disks that specify a minimum of 150 IOPS and a maximum of 250 IOPS, each virtual hard disk will individually be able to use up to 250 IOPS, while having a minimum of 150 IOPS reserved for use if the system is under pressure.

You create policies with the New-StorageQosPolicy cmdlet, using the PolicyType parameter to specify whether the policy is Dedicated or Aggregated; you must also specify the minimum and maximum IOPS. For example, run this command to create a new policy called Alpha of the Dedicated type that has a minimum of 150 IOPS and a maximum of 250 IOPS:

New-StorageQosPolicy -Name Alpha -PolicyType Dedicated -MinimumIops 150 -MaximumIops 250

Once you’ve created a policy, you can apply it to a virtual hard disk by using the Set-VMHardDiskDrive cmdlet.

Need More Review? Storage Quality of Service

You can learn more about Storage QoS at https://docs.microsoft.com/en-us/windows-server/storage/storage-qos/storage-qos-overview.

Configure filesystems

Windows Server supports the following filesystems:

  • NTFS

  • ReFS

  • FAT and FAT32

NTFS

NTFS is a filesystem that has been present in Windows Server environments since the 1990s. The Windows Server Hybrid Administrator Associate certification doesn’t concentrate on the specifics of NTFS other than as a point of comparison between it and ReFS. In most cases you’ll use NTFS on a Windows Server system as it is the general-purpose filesystem and you’ll only have to think about ReFS for circumstances such as when you have to deal with the sorts of large files used by virtual machine hard disks or databases. Boot volumes on Windows Server computers always use NTFS and cannot use ReFS.

NTFS supports volumes up to 16 terabytes using the default cluster size of 4 KB and up to 256 terabytes using the maximum cluster size of 64 KB. Most scenarios that require such large volumes will be better served by using ReFS. ReFS is engineered to address shortcomings that NTFS had with file and disk sizes that would have been incomprehensible when the filesystem was first made available.

NTFS has the following permissions that can be applied to files and folders:

  • Full Control When applied to folders, allows the reading, writing, changing, and deletion of files and subfolders. When applied to a file, permits reading, writing, changing, and deletion of the file. Allows modification of permissions on files and folders.

  • Modify When applied to folders, allows the reading, writing, changing, and deletion of files and subfolders. When applied to a file, permits reading, writing, changing, and deleting the file. Does not allow the modification of permissions on files and folders.

  • Read & Execute When applied to folders, allows the content of the folders to be accessed and executed. When applied to a file, allows the file to be accessed and executed.

  • List Folder Contents Can only be applied to folders. Allows the contents of the folder to be viewed.

  • Read When applied to folders, allows content to be accessed. When applied to a file, allows the contents to be accessed. Differs from Read & Execute in that it does not allow files to be executed.

  • Write When applied to folders, allows adding of files and subfolders. When applied to a file, allows a user to modify, but not delete, a file.
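As an illustration of applying these permissions from the command line (the path and group name are hypothetical), the built-in icacls tool can grant the Modify permission so that it is inherited by files and subfolders:

```powershell
# (OI)(CI) = inherit to files and subfolders; M = Modify
icacls "D:\Shares\Projects" /grant "CONTOSO\Engineering:(OI)(CI)M"
```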

ReFS

ReFS (Resilient File System) is a filesystem that is appropriate for very large workloads where you need to maximize data availability and integrity and ensure that the filesystem is resilient to corruption. The ReFS filesystem is suitable for hosting specific types of workloads such as virtual machines and SQL Server data, because it includes the following features that improve upon NTFS:

  • Integrity ReFS uses checksums for both metadata and file data. This means that ReFS can detect data corruption.

  • Storage spaces integration When integrated with storage spaces that are configured with Mirror or Parity options, ReFS has the ability to automatically detect and repair corruption using a secondary or tertiary copy of data stored by storage spaces. The repair occurs without downtime.

  • Proactive error correction ReFS includes a data integrity scanner that scans the volume to identify latent corruption and proactively repair corrupted data.

  • Scalability ReFS is specifically designed to support data sets in the petabyte range.

  • Advanced VM operations ReFS includes functionality specifically to support virtual machine operations. Block cloning accelerates copy operations, which accelerate VM checkpoint merges. Sparse VDL allows ReFS to substantially reduce the amount of time required to create very large fixed-size virtual hard disks.

It is important to note that ReFS is suitable only for hosting specific types of workloads. It isn’t suitable for many workloads used in small and medium enterprises that aren’t hosting large VMs or huge SQL Server databases. On computers running Windows Server 2022, ReFS supports the following features available in NTFS:

  • BitLocker encryption

  • Data deduplication

  • Cluster Shared Volume support

  • Junctions/soft links

  • Hard links

  • Failover cluster support

  • Access control lists

  • USN journal (Update Sequence Number Journal)

  • Change notifications

  • Junction points

  • Mount points

  • Reparse points

  • Volume snapshots

  • File IDs

  • OpLock (opportunistic lock)

  • Sparse files

  • Named streams

  • Thin provisioning

  • Trim/Unmap

ReFS does not support File Server Resource Manager, file compression, file encryption, extended attributes, and quotas. ReFS does support block clone and file-level snapshots. ReFS volumes are also not bootable and cannot host the page file.

When used with Storage Spaces Direct, ReFS allows mirror-accelerated parity. Mirror-accelerated parity provides fault tolerance without impacting performance. To create a mirror-accelerated parity volume for use with Storage Spaces Direct, use the following PowerShell command:

New-Volume -FriendlyName "ExampleVolume" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "ExamplePool" -StorageTierFriendlyNames Performance,Capacity -StorageTierSizes 200GB,800GB

You can use ReFSUtil, located in the %SystemRoot%\System32 folder, to recover data from ReFS volumes that have failed and are displayed as RAW in Disk Management. ReFSUtil identifies files that can be recovered from a damaged ReFS volume and copies those files to another volume.
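One possible invocation is sketched below; the volume letters and directories are illustrative, and you should check refsutil salvage /? for the exact options available on your system:

```powershell
# Quick automatic salvage from damaged volume D: to E:\Recovered,
# using C:\SalvageWork as the working directory for salvage metadata
refsutil salvage -QA D: C:\SalvageWork E:\Recovered
```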

Need More Review? Resilient File System

To learn more about ReFS, visit https://docs.microsoft.com/en-us/windows-server/storage/refs/refs-overview.

FAT and FAT32

Windows Server does support creating volumes that use the FAT and FAT32 filesystems. There are few circumstances where you would need to create a volume that uses these filesystems, but many removable storage devices use FAT32 and sometimes the simplest way to get files on and off a recalcitrant server is to copy them onto a convenient USB stick.

Exam Tip

Remember the minimum number of drives necessary for the various storage space resiliency options.

Chapter summary

  • Azure File Sync provides a method of replicating files between on-premises endpoints and an Azure File Share. Azure File Sync also provides storage tiering of files that have not been recently accessed and can ensure that a certain amount of space on a volume remains free.

  • You can use DFS namespaces with Azure File Sync, with DFS namespaces pointing a client at the closest endpoint and Azure File Sync replacing DFS replication.

  • File Server Resource Manager allows you to implement file screens, which can be used to block specific file types from being written to file shares.

  • Using DFS, you can push a single shared folder structure out across an organization that has multiple branch offices.

  • A storage pool is a collection of storage devices that you can use to aggregate storage. You expand the capacity of a storage pool by adding storage devices to the pool.

  • A storage space is a virtual disk that you create from the free space that is available in a storage pool.

  • Storage Replica allows you to replicate volumes between servers, including clusters, for the purposes of disaster recovery.

  • Deduplication can reduce the amount of storage consumed on a volume because a substantial number of the data chunks stored on a volume are identical.

  • SMB Direct allows you to use network adapters that have RDMA to improve the performance of file servers.

  • Storage Quality of Service (QoS) allows you to centrally manage and monitor the performance of storage used for virtual machines that leverage the Scale-Out File Server and Hyper-V roles.

Thought experiment

In this thought experiment, demonstrate your skills and knowledge of the topics covered in this chapter. You can find answers to this thought experiment in the next section.

You are responsible for managing several file servers for a university that has multiple branch locations spread across Australia. Files are presently replicated using Distributed File System (DFS) between each branch office. You want to replace DFS as a replication engine with Azure File Sync. Three separate sets of folders are replicated using DFS, with some DFS replicas present in some locations and not others. One of the DFS replicas is present in all locations. Another problem the university has is students storing unauthorized files in their personal directories. With this information in mind, answer the following questions:

  1. How can you block students writing MP3 files to the file share?

  2. How many cloud endpoints will you need to create when migrating from DFS to Azure File Sync?

  3. What is the minimum number of storage sync service instances you’ll need to deploy in Azure to support the migration from DFS to Azure File Sync?

Thought experiment answers

This section contains the solution to the thought experiment. Each answer explains why the answer choice is correct.

  1. You can use File Server Resource Manager to implement file screens to block students from writing MP3 files to the file shares.

  2. You will need to create three cloud endpoints, one for each separate folder structure.

  3. You will need to create only one storage sync service. This is because each server must replicate with the others and a server can only be associated with one storage sync service.
