Windows Server 2016 Storage Spaces Direct

Azure Stack does not work with existing storage systems (based on iSCSI, Fibre Channel, or anything else). Today, storage is cheap, and there is no longer a real need for expensive Storage Area Networks (SANs).

This was the main idea behind Windows Server 2012 Storage Spaces and the Scale-Out File Server technology. As you may know, the concept was great, but the first releases were not as reliable and performant as promised. With Windows Server 2016, this changed completely, as Microsoft went with a different design: each Storage Spaces Direct (S2D) node has its own direct-attached storage, and S2D makes sure that the data is kept highly available across the multi-node environment. This design fulfills the following goals:

  • It is simple to configure
  • It provides great performance, with over 150,000 mixed 4K random IOPS per server
  • It provides fault tolerance with built-in resiliency
  • It is resource-efficient, built on the Resilient File System (ReFS), and requires minimal CPU overhead
  • It is easy to manage, with built-in monitoring and APIs (see the sketch after this list)
  • It is scalable, supporting (today) up to 16 servers and over 400 drives
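
Since built-in monitoring and APIs are listed as a design goal, the following is a minimal PowerShell sketch of querying the Health Service; it is purely illustrative and assumes S2D is already enabled on a cluster:

    # Minimal sketch, assuming S2D is already enabled; run from any cluster node.

    # Overall health summary of the S2D storage subsystem.
    Get-StorageSubSystem Cluster* | Get-StorageHealthReport

    # Current faults, such as failed drives or volumes with lost redundancy.
    Get-StorageSubSystem Cluster* | Debug-StorageSubSystem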

In general, S2D has two deployment designs:

  • Converged: Storage and compute run in separate clusters
  • Hyper-converged: Storage and compute run in one single cluster (see the sketch after this list)
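
To illustrate the difference, here is a hedged PowerShell sketch: in a hyper-converged deployment, the CSV volumes are consumed directly by the Hyper-V hosts in the same cluster, whereas a converged deployment additionally publishes the storage over SMB3. The cluster, role, share, and group names are examples only:

    # Hyper-converged: compute and storage share one cluster, so the CSV
    # volumes under C:\ClusterStorage are used by Hyper-V directly and no
    # file server role is required.

    # Converged: the S2D cluster only serves storage, so it is published to
    # the separate compute cluster via the Scale-Out File Server role.
    Add-ClusterScaleOutFileServerRole -Name "S2D-SOFS" -Cluster "S2D-Cluster"
    New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1" `
        -FullAccess "CONTOSO\Hyper-V-Hosts"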

Azure Stack is built on the hyper-converged design, which can be illustrated as follows:

Source: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/media/storage-spaces-direct-in-windows-server-2016/converged-full-stack.png

This design is based on the following aspects:

  • The networking is based on SMB3, including SMB Direct and SMB Multichannel, and runs over Ethernet with RDMA (iWARP or RoCE).
  • Each server has locally attached storage based on NVMe, SAS, or SATA drives, with at least two SSDs and at least four additional capacity drives. SAS and SATA devices should sit behind a host bus adapter (HBA) and SAS expander.
  • All servers are connected to one failover cluster.
  • The Software Storage Bus spans the cluster and creates a software-defined storage fabric in which every server can see all of the local drives.
  • The Storage Bus Layer Cache binds the fastest drives (for example, NVMe or SSDs) to slower drives to provide server-side read/write caching.
  • The storage pool is created automatically by discovering the eligible drives and adding them to the pool (the sketch after this list shows the whole flow).
  • Storage Spaces provides fault tolerance using mirroring and/or erasure coding. The default is a three-way mirror, which keeps three copies of the data, so two hardware failures (drive or server) can be tolerated while the data remains available.
  • ReFS (the Resilient File System) is designed for virtualization and provides automatic detection and correction of filesystem errors.
  • Cluster Shared Volumes (CSVs) unify all volumes into a single namespace that is accessible from every server and looks like local storage.
  • Finally, the Scale-Out File Server provides remote file access over the network using SMB3 (this layer is only needed in converged deployments).
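
The following PowerShell sketch shows how these building blocks come together when standing up a hyper-converged S2D cluster; the node, cluster, and volume names are examples, and the steps are a simplified outline rather than a complete deployment guide:

    # Validate the intended nodes, including the S2D-specific tests.
    Test-Cluster -Node "Node01","Node02","Node03","Node04" `
        -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

    # Create the failover cluster without any shared storage.
    New-Cluster -Name "S2D-Cluster" -Node "Node01","Node02","Node03","Node04" -NoStorage

    # Enable S2D: this discovers the eligible drives, builds the storage pool,
    # and configures the storage bus cache automatically.
    Enable-ClusterStorageSpacesDirect

    # Carve a CSV-backed, ReFS-formatted volume out of the pool.
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
        -FileSystem CSVFS_ReFS -Size 2TB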

In the following overview, you can see how these components interact with each other and what a hyper-converged storage stack looks like:

Source: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/media/hyper-converged-solution-using-storage-spaces-direct-in-windows-server-2016/storagespacesdirecthyperconverged.png
If you want to try Windows Server 2016 Storage Spaces Direct independently of Azure Stack, you will need to meet the hardware requirements described here: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-direct-hardware-requirements.
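
Before enabling S2D on your own hardware, it helps to check what each node actually has to offer. The following is a small, purely illustrative PowerShell sketch for inventorying the local drives, not an official validation tool:

    # Drives that are eligible for pooling, grouped by media and bus type.
    Get-PhysicalDisk -CanPool $true |
        Group-Object MediaType, BusType |
        Select-Object Count, Name

    # Size and health of every physical disk on the node.
    Get-PhysicalDisk |
        Select-Object FriendlyName, MediaType, BusType, HealthStatus,
            @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB) } }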

A common design for Storage Spaces is the three-way mirror, which means that all data is written to three different servers, providing a high level of resiliency, as shown in the following figure and the sketch after it:

Source: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/media/understand-the-cache/cache-server-side-architecture.png
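
When you create volumes, you can request the three-way mirror explicitly. The following is a minimal sketch; the pool wildcard, volume name, and size are examples only:

    # PhysicalDiskRedundancy 2 means three data copies, i.e. a three-way mirror.
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Mirror-Volume" `
        -FileSystem CSVFS_ReFS -Size 1TB `
        -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2

With three or more servers, a mirror volume in S2D defaults to a three-way mirror anyway, so the two resiliency parameters are usually optional.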