Performance considerations
This chapter focuses on performance, specifically the tools that the IBM N series provides to help you improve performance by configuring certain features.
The following topics are covered:
- FlexCache
- Virtual Storage Tiering
- Storage Quality of Service (QoS)
16.1 FlexCache
A FlexCache volume is a sparsely populated volume on a cluster node that is backed by a volume, usually on a different node within the same cluster. A sparsely populated volume, or sparse volume, provides access to data in the backing volume (also called the origin volume) without requiring that all of the data reside in the sparse volume.
You can use only FlexVol volumes to create FlexCache volumes. However, many regular FlexVol volume features are not supported on FlexCache volumes, such as Snapshot copy creation, deduplication, compression, FlexClone volume creation, volume move, and volume copy.
You can use FlexCache volumes to speed up access to data, or to offload traffic from heavily accessed volumes. FlexCache volumes help improve performance, especially when clients need to access the same data repeatedly, because the data can be served directly without access to the origin volume. Therefore, you can use FlexCache volumes to handle system workloads that are read-intensive.
Cache consistency techniques help in ensuring that the data served by the FlexCache volumes remains consistent with the data in the origin volumes.
16.1.1 Contents of a cached file
When a client requests a data block of a specific file from a FlexCache volume, the attributes of that file and the requested data block are cached. The file is then considered to be cached, even if not all of its data blocks are present in the FlexCache volume. If the requested data is cached and valid, a read request for that data is fulfilled without access to the origin volume.
16.1.2 Serving read requests
A FlexCache volume directly serves read requests if it contains the data requested by the client. Otherwise, the FlexCache volume requests the data from the origin volume and stores the data before serving the client request. Subsequent read requests for the data are then served directly from the FlexCache volume.
FlexCache volumes serve client read requests as follows:
1. The cluster node that hosts the logical interface (LIF) on which the client sends its read request accepts the request.
2. The node responds to the read request based on the types of volumes it contains. See Table 16-1 for how the node responds in each case.
Table 16-1 FlexCache volume behaviors
If the node contains... | Then...
A FlexCache volume that contains the requested data and the origin volume | The data is served from the origin volume.
A FlexCache volume that contains the requested data but not the origin volume | The data is served from the FlexCache volume.
A FlexCache volume that does not contain the requested data | The FlexCache volume retrieves the requested data from a volume that contains the data, stores the data, and serves the client request.
The volume that is the primary source of the requested data, but no FlexCache volume | The data is served directly from the volume that contains the requested data.
 
Note: If the node contains neither the primary source of the data nor a FlexCache volume, the client request is passed directly to a node that contains the primary source of the data.
16.1.3 Why use FlexCache volumes
FlexCache volumes are used to improve performance and balance resources during data read operations.
Performance scaling:
When a data volume is created, it is stored on a specific node of the cluster. The volume can be moved within the cluster, but at any point in time only one node contains the source data. If access to the data on that volume is intensive, that node can become overloaded and develop a performance bottleneck.
FlexCache volumes scale performance by enabling multiple nodes of a cluster to respond to read requests efficiently without having to overload the node containing the source data and without having to send data over the cluster interconnect (for cache hits).
Resource balancing:
Certain nodes of a cluster can encounter spikes in load when a specific data set is heavily accessed during certain tasks or activities. By caching copies of data throughout the cluster, FlexCache volumes enable each node in the cluster to help handle the workload. This approach spreads the workload across the cluster, smoothing out the performance impact created by heavy read or metadata access.
16.1.4 Considerations for working with FlexCache volumes
Take into account the following considerations when creating and working with FlexCache volumes:
- You do not need to install a license to create FlexCache volumes.
- Using Clustered Data ONTAP, you can cache a FlexVol volume within the storage virtual machine (SVM) that contains the origin volume.
- You must use a caching system running Data ONTAP operating in 7-Mode if you want to cache a FlexVol volume outside the cluster.
- You cannot use Infinite Volumes as the caching or origin volume.
- You can use only FlexVol volumes to cache data in other FlexVol volumes.
- To cache a FlexVol volume within a cluster, you must ensure that the FlexCache volumes and the origin volumes are created on storage systems supported by Clustered Data ONTAP 8.2 or later.
 
Note: For information about the requirements that the storage system running Data ONTAP operating in 7-Mode must meet for caching a Clustered Data ONTAP FlexVol volume, see the Data ONTAP Storage Management Guide for 7-Mode.
- You can create FlexCache volumes on a specific cluster node or on all the cluster nodes spanned by the SVM that contains the origin volume (see the example that follows this list).
- FlexCache volumes are created with a space guarantee type of partial. The partial guarantee type is a special guarantee type that cannot be changed, and you cannot view it from the command-line interface: when you use commands such as volume show, the value of the space guarantee field shows a dash.
- There is no specific limit on the size of the origin volume that a FlexCache volume can cache.
- A FlexCache volume is created with the same language setting as its corresponding origin volume.
- Flash Cache is supported on nodes with FlexCache volumes, and optimizes performance and efficiency for all volumes on the node, including FlexCache volumes.
- Storage Accelerator (SA) systems do not support Clustered Data ONTAP. SA systems support only Data ONTAP operating in 7G or 7-Mode.
- FlexCache volumes support client access over the following protocols: NFSv3, NFSv4.0, and CIFS (SMB 1.0, 2.x, and 3.0). Depending on the protocol, FlexCache volumes can also retrieve access control lists (ACLs) and stream information for the cached data from the origin volumes.
- For better performance of FlexCache volumes in a cluster, ensure that data LIFs are properly configured on the cluster nodes that contain the FlexCache volumes.
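As a minimal sketch of how these steps might look from the cluster shell, the following example creates FlexCache volumes for a hypothetical origin volume vol_data in an SVM named vs1 and then displays the resulting volumes. The cluster, SVM, and volume names are placeholders, and the available options can vary by release, so verify the syntax with the volume flexcache man pages on your system.

Create FlexCache volumes for the origin volume on the nodes spanned by the SVM:
   cluster1::> volume flexcache create -vserver vs1 -origin-volume vol_data

List the FlexCache volumes that were created for the SVM:
   cluster1::> volume flexcache show -vserver vs1

Display the space guarantee setting of the volumes; FlexCache volumes show a dash in this field:
   cluster1::> volume show -vserver vs1 -fields space-guarantee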
16.1.5 Limitations of FlexCache volumes
You can have a maximum of 100 FlexCache volumes on a cluster node. In addition, certain features of Data ONTAP are not available on FlexCache volumes, and other features are not available on origin volumes that are backing the FlexCache volumes.
You cannot use the following Data ONTAP capabilities on FlexCache volumes (these limitations do not apply to origin volumes):
- Compression (compressed origin volumes are supported)
- Snapshot copy creation
- SA systems
- SnapManager
- SnapRestore
- SnapMirror
- FlexClone volume creation
- The ndmp command
- Quotas
- Volume move
- Volume copy
- Cache load balancing
- Qtree creation on the cache (qtree management must be done on the origin)
- Deduplication
- Mounting the FlexCache volume as a read-only volume
- I2P (inode-to-pathname) information: the cache volume does not synchronize I2P information from the origin; any requests for this information are always forwarded to the origin
- Storage QoS policy groups: an origin volume can be assigned to a policy group, which then controls both the origin volume and its corresponding FlexCache volumes
 
Note: The following N series systems do not support FlexCache volumes in a Clustered Data ONTAP environment: N6040, N6060, N6210, and N6240.
The following limitations apply to origin volumes:
- You must map FlexCache volumes to an origin volume that is inside the same SVM. FlexCache volumes in a Clustered Data ONTAP environment cannot point to an origin volume that is outside the SVM, such as a 7-Mode origin volume.
- You cannot use a FlexCache volume to cache data from Infinite Volumes. The origin volume must be a FlexVol volume.
- A load-sharing mirror volume, or a volume that has load-sharing mirrors attached to it, cannot serve as an origin volume.
- A volume in a SnapVault relationship cannot serve as an origin volume.
- You cannot use an origin volume as the destination of a SnapMirror migrate command.
- A FlexCache volume cannot be used as an origin volume.
16.1.6 Comparison of FlexCache volumes and load-sharing mirrors
Both FlexCache volumes and load-sharing mirror volumes can serve hosts from a local node in the cluster, instead of using the cluster interconnect to access the node that stores the primary source of the data. However, you need to understand the essential differences between them to decide how to use each in your storage system.
Table 16-2 explains the differences between load-sharing mirror volumes and FlexCache volumes.
Table 16-2 Comparison of load-sharing mirror volumes and FlexCache volumes

Load-sharing mirror volumes | FlexCache volumes
The data that load-sharing mirror volumes use to serve client requests is a complete copy of the source data. | The data that FlexCache volumes use to serve client requests is a cached copy of the source data, containing only the data blocks that are accessed by clients.
Can be used as a disaster-recovery solution by promoting a load-sharing mirror to a source volume. | Cannot be used for disaster recovery; a FlexCache volume does not contain a complete copy of the source data.
Are read-only volumes, with the exception of admin privileges for write access or bypass of the load-sharing mirror. | Are read and write-through cache volumes.
A user creates one load-sharing mirror volume at a time. | A user can create one FlexCache volume at a time, or can simultaneously create FlexCache volumes on all the nodes spanned by the SVM that contains the origin volume.
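To make the comparison concrete, the following sketch shows how each type of volume might be set up for a hypothetical source volume vol_data in an SVM named vs1. The cluster, SVM, volume, aggregate names, and size value are placeholders, and the exact options can vary by release, so verify the syntax on your system.

Create a data protection volume (sized at least as large as the source), define a load-sharing mirror relationship to it, and perform the initial baseline transfer:
   cluster1::> volume create -vserver vs1 -volume vol_data_ls1 -aggregate aggr1 -size 100g -type DP
   cluster1::> snapmirror create -source-path vs1:vol_data -destination-path vs1:vol_data_ls1 -type LS
   cluster1::> snapmirror initialize-ls-set -source-path vs1:vol_data

By contrast, a single command creates FlexCache volumes for the same origin on the nodes spanned by the SVM, without a full baseline copy of the data:
   cluster1::> volume flexcache create -vserver vs1 -origin-volume vol_data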
16.2 Virtual Storage Tiering
The Virtual Storage Tier is the IBM N series approach to automated storage tiering. We had several important goals in mind when we set out to design Virtual Storage Tier components:
- Use storage system resources as efficiently as possible, especially by minimizing I/O to disk drives
- Provide a dynamic, real-time response to the changing I/O demands of applications
- Fully integrate storage efficiency capabilities so that efficiency is not lost when data is promoted to the Virtual Storage Tier
- Use fine data granularity so that cold data never gets promoted with hot data, thus making efficient use of expensive Flash media
- Simplify deployment and management
The Virtual Storage Tier is a self-managing, data-driven service layer for storage infrastructure. It provides real-time assessment of workload priorities and optimizes I/O requests for cost and performance without the need for complex data classification and movement.
The Virtual Storage Tier leverages key storage efficiency technologies, intelligent caching, and simplified management. You simply choose the default media tier you want for a volume or LUN (SATA, FC, or SAS). Hot data from the volume or LUN is automatically promoted on demand to Flash-based media.
The Virtual Storage Tier promotes hot data without the data movement overhead associated with other approaches to automated storage tiering. Any time a read request is received for a block on a volume or LUN where the Virtual Storage Tier is enabled, that block is automatically subject to promotion. Note that promotion of a data block to the Virtual Storage Tier is not data migration because the block remains on hard disk media when a copy is made to the Virtual Storage Tier.
With the Virtual Storage Tier, data is promoted to Flash media after the first read from hard disk drives. This approach to data promotion means that additional disk I/O operations are not needed to promote hot data. By comparison, other implementations may not promote hot data until it has been read from disk many times, and then additional disk I/O is still required to accomplish the promotion process.
Our algorithms distinguish high-value data from low-value data and retain the high-value data in the Virtual Storage Tier. Metadata, for example, is always promoted when read for the first time. In contrast, sequential reads are normally not cached in the Virtual Storage Tier unless their caching is specifically enabled, because they tend to crowd out more valuable data. You can change the behavior of the intelligent cache to meet the requirements of applications with unique data access requirements. For example, you can configure the Virtual Storage Tier to cache incoming random writes as they are committed to disk, and to enable the caching of sequential reads.
You can optionally create different classes of service by enabling or disabling the placement of data into the Virtual Storage Tier on a volume-by-volume basis.
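The caching behavior described above is adjusted through the Flash Cache (flexscale) options on each node. The following sketch, using a hypothetical node name, shows how these options might be viewed and how caching of low-priority blocks (recently written data and long sequential reads) might be enabled from the clustered Data ONTAP nodeshell. Option names, defaults, and availability can differ by platform and release, so treat this as an illustration rather than a prescribed setting.

Display the current Flash Cache options on a node:
   cluster1::> system node run -node node1 options flexscale

Enable caching of low-priority blocks, such as recently written data and sequential reads:
   cluster1::> system node run -node node1 options flexscale.lopri_blocks on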
16.3 Storage Quality of Service (QoS)
Storage QoS is a new feature in Data ONTAP that provides the ability to group storage objects and set throughput limits on the group. With this ability, a storage administrator can separate workloads by organization, application, business unit, or production versus development environments.
In enterprise environments, storage QoS offers these benefits:
- Helps to prevent user workloads from affecting each other
- Helps to protect critical applications that have specific response times that must be met
In IT as a service (ITaaS) environments, storage QoS offers these benefits:
- Helps to prevent tenants from affecting each other
- Helps to avoid performance degradation with each new tenant
Storage QoS differs from the FlexShare quality-of-service tool. FlexShare is only in 7-Mode, and storage QoS is only in Clustered Data ONTAP. FlexShare works by setting relative priorities on workloads or resources, and attaining the desired results can be complicated. Storage QoS sets hard limits on collections of one or more objects and replaces complex relative priorities with very specific limits to throughput.
Clustered Data ONTAP 8.2 provides Storage QoS policies on cluster objects. An entire SVM, or a group of volumes or LUNs within an SVM, can be dynamically assigned to a policy group, which specifies a throughput limit defined in terms of IOPS or MB/sec. This capability can be used to reactively or proactively throttle rogue workloads and prevent them from affecting the rest of the workloads. QoS policy groups can also be used by service providers to prevent tenants from affecting each other, and to avoid performance degradation of existing tenants when a new tenant is deployed on the shared infrastructure.
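As a minimal sketch of these capabilities, the following example creates a policy group with a throughput limit, assigns a volume to it, and then monitors the result. The cluster, SVM, volume, and policy group names and the limit value are placeholders; adjust them to your environment.

Create a policy group in the SVM with a limit of 5,000 IOPS:
   cluster1::> qos policy-group create -policy-group pg_tenant1 -vserver svm1 -max-throughput 5000iops

Assign a volume to the policy group:
   cluster1::> volume modify -vserver svm1 -volume vol_db -qos-policy-group pg_tenant1

Monitor the performance of the workloads that are subject to QoS policies:
   cluster1::> qos statistics performance show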
 