Clustered Data ONTAP compared to 7-Mode
Clustered Data ONTAP offers significant innovations and enhancements over 7-Mode. The latest version of Clustered Data ONTAP offers innovative features such as QoS and in-place controller upgrades, and it also adds new CIFS features in addition to those supported in 7-Mode.
SnapVault in Clustered Data ONTAP is storage efficient. Only a small set of features supported in 7-Mode is not yet supported in Clustered Data ONTAP, and most of these features are expected to be supported in future releases.
The following topics are covered:
Storage virtual machine versus vFiler unit
Failover and giveback comparison
Data protection and load sharing
Cluster management
4.1 Storage virtual machine versus vFiler unit
A storage virtual machine (SVM) is also referred to as a Vserver. It can be thought of as a secure virtual storage system that manages resources, including volumes and logical interfaces (LIFs). This separation of software from hardware gives an SVM independent mobility of its LIFs and flexible volumes. An SVM is virtualized because its storage and network connections are not permanently bound to any node or group of nodes:
The physical resources of the cluster are not bound to any particular SVM. An SVM’s volumes can be moved to different physical aggregates without disrupting client access. Similarly, an SVM’s LIFs can be moved to different physical network ports without disrupting client access.
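As a minimal sketch of this nondisruptive mobility, a volume and a LIF might be relocated with the following commands; the SVM, volume, aggregate, node, and port names are placeholders, and exact options can vary by release.

  # Move a volume to an aggregate on another node without disrupting client access
  volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr2_node2

  # Migrate a data LIF to a port on another node without disrupting client access
  network interface migrate -vserver vs1 -lif lif1 -destination-node node2 -destination-port e0c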
SVMs can be isolated in their own separate VLANs using separate physical ports or using VLAN tagging. Users of one SVM might or might not be granted access to another SVM.
An SVM can serve both SAN and NAS concurrently.
An SVM manages volumes. However, a cluster administrator can limit which aggregates an SVM can use for volume creation.
SVMs can be provisioned on-the-fly for individual departments, companies, or applications. The same physical hardware can be used by many tenants. SVMs provide a secure logical boundary between tenants.
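A simple sketch of such provisioning is shown below; the SVM, volume, and aggregate names are examples, and required options can vary by Data ONTAP release.

  # Create an SVM with its own root volume for a new tenant
  vserver create -vserver vs_tenant1 -rootvolume vs_tenant1_root -aggregate aggr1 -rootvolume-security-style unix

  # Limit the aggregates this SVM can use for volume creation
  vserver modify -vserver vs_tenant1 -aggr-list aggr1,aggr2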
There are several similarities between vFiler units in 7-Mode and SVMs in Clustered Data ONTAP. Both provide a layer of storage virtualization, and both allow administrative control of just that virtual storage unit. However, there are also differences between the two, as compared in Table 4-1.
Table 4-1 SVM and vFiler unit differences

| Clustered Data ONTAP SVM | 7-Mode MultiStore vFiler unit |
| Required. Clustered Data ONTAP needs at least one SVM defined in order to serve data. | Optional. A controller running Data ONTAP in 7-Mode can serve data without a vFiler unit defined. |
| Serves a single namespace. Multiple flexible volumes can be accessed using a single CIFS share or NFS export. | Volumes must be exported individually. |
| Uses resources from one or many nodes within the cluster. | Bound to the resources of a single controller. |
| Supports any Clustered Data ONTAP data protocol, including FCP and FCoE. | Limited to NFS, CIFS, and iSCSI. |
| Provides fully delegated role-based access control (RBAC). | Provides limited administration delegation. |
4.2 Failover and giveback comparison
The HA pair controller architecture is very similar in 7-Mode and Clustered Data ONTAP; however, there are a few differences.
For customers running within optimal limits, failover is expected to take from 15 to 45 seconds. Most customers can expect unplanned failover times of 30 seconds or less. Approximately 90% of customer environments are within optimal limits. However, it is still a preferred practice to configure client-side timeouts to withstand a 120-second failover.
For customers that push the system limits by running with the maximum number of spindles, using the maximum system capacity, using a large number of FlexVol volumes on a single node, or consistently running at CPU utilization greater than 50%, unplanned failover times might be longer than 45 seconds. In these environments, 120 seconds is the preferred client-side timeout setting. This advice is consistent with preferred practices for 7-Mode installations.
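For NFS clients, one hedged way to meet this guidance is to use hard mounts, which retry indefinitely and therefore ride through a 120-second failover; the LIF host name, export path, and mount point below are examples.

  # Hard-mount an NFS export so that I/O retries rather than failing during a takeover
  mount -t nfs -o hard,proto=tcp,vers=3 svm1-data-lif:/vol1 /mnt/vol1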
Giveback is handled differently in Clustered Data ONTAP than in 7-Mode. During giveback in Clustered Data ONTAP, the root aggregate is sent home first so the node can be assimilated back into the cluster. During this time, all data aggregates continue to serve data using the partner node.
After the home node is ready to serve data, each data aggregate will be given back to the home node serially. Each data aggregate might take up to 30 seconds to complete giveback.
The total giveback time is the time it takes for the root aggregate to do a giveback, the time it takes to assimilate the node back into the cluster, and the time it takes to give back each aggregate. The entire giveback operation might be lengthy, but any single aggregate will only be unavailable for up to 30 seconds.
Planned takeover in Clustered Data ONTAP 8.2 is similar to giveback in previous versions of Clustered Data ONTAP. In these planned takeovers, aggregates are taken over one by one, reducing the total amount of time each aggregate is unavailable. Unplanned takeovers in Clustered Data ONTAP 8.2 behave the same as in previous versions of Clustered Data ONTAP.
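The following sketch illustrates how a planned takeover and giveback might be initiated and monitored from the cluster CLI; node names are examples and command behavior can vary by release.

  # Planned takeover of node1 by its HA partner
  storage failover takeover -ofnode node1

  # Return the aggregates to node1 after maintenance is complete
  storage failover giveback -ofnode node1

  # Monitor HA state and per-aggregate giveback progress
  storage failover show
  storage failover show-giveback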
 
Note: Recent versions of Data ONTAP (7-Mode and Clustered Data ONTAP) are configured by default to perform an automatic giveback after a panic-induced takeover.
4.3 Data protection and load sharing
Data protection means backing up data and being able to recover it. You protect the data by making copies of it so that it is available for restoration even if the original is no longer available.
Businesses need data backup and protection for the following reasons:
To protect data from accidental deletions, application crashes, data corruption, and so on
To archive data for future use
To recover from a disaster
4.3.1 SnapMirror
Only asynchronous SnapMirror mirroring is supported. It can be configured both within a cluster (intracluster) and between clusters (intercluster). The replication is at the volume level of granularity and is also known as a data protection (DP) mirror. Qtree SnapMirror is not available in Clustered Data ONTAP.
SnapMirror relationships can be throttled to a specific transfer rate using the snapmirror modify -throttle command.
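For example, an existing relationship might be limited to roughly 10 MBps as follows; the destination path is a placeholder, and the throttle value is specified in kilobytes per second.

  # Limit replication traffic for this relationship to about 10 MBps (10,240 KB/s)
  snapmirror modify -destination-path vs2:vol1_dp -throttle 10240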
4.3.2 SnapVault
SnapVault in Clustered Data ONTAP 8.2 delivers much of the same functionality that is familiar from 7-Mode: the ability to store Snapshot copies on a secondary system for a long period of time without consuming space on your primary system.
However, SnapVault in Clustered Data ONTAP is based on a new engine that uses volume-based logical replication, as opposed to SnapVault in 7-Mode, which used qtree-based replication. Because deduplication and compression operate at the flexible volume level, this represents a significant advantage over 7-Mode. Storage efficiency is maintained while data is transferred to the backup system and is also maintained on the backup system, which translates into reduced backup times and increased storage efficiency in the backup copy.
SnapVault is available in Clustered Data ONTAP 8.2 and above. Intercluster SnapVault is supported. SnapVault relationships between Clustered Data ONTAP and 7-Mode Data ONTAP are not supported.
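A minimal sketch of creating a SnapVault relationship in Clustered Data ONTAP 8.2 might look like the following; it assumes cluster and SVM peering are already in place, and the SVM, volume, aggregate, schedule, and policy names are examples.

  # Create a destination volume of type DP to receive the vaulted data
  volume create -vserver vs2 -volume vol1_vault -aggregate aggr3 -size 100g -type DP

  # Create the SnapVault (XDP) relationship and start the baseline transfer
  snapmirror create -source-path vs1:vol1 -destination-path vs2:vol1_vault -type XDP -schedule daily -policy XDPDefault
  snapmirror initialize -destination-path vs2:vol1_vault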
4.3.3 NDMP
For FlexVol volumes, Data ONTAP supports tape backup and restore through the Network Data Management Protocol (NDMP). For Infinite Volumes, Data ONTAP supports tape backup and restore through a mounted volume. Infinite Volumes do not support NDMP. The type of volume determines what method to use for backup and recovery.
NDMP allows you to back up storage systems directly to tape, resulting in efficient use of network bandwidth. Clustered Data ONTAP supports the dump engine for tape backup. Dump is a Snapshot copy-based backup in which file system data is backed up to tape. The Data ONTAP dump engine backs up files, directories, and the applicable access control list (ACL) information to tape. You can back up an entire volume, an entire qtree, or a subtree that is neither an entire volume nor an entire qtree. Dump supports level-0, differential, and incremental backups. You can perform a dump backup or restore by using NDMP-compliant backup applications. Starting with Data ONTAP 8.2, only NDMP version 4 is supported.
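The following hedged sketch shows how the NDMP service might be enabled and verified from the cluster CLI before an NDMP-compliant backup application connects; the node name is an example, and the exact commands can vary by release.

  # Enable the node-scoped NDMP service on a node
  system services ndmp on -node node1

  # Verify that NDMP is enabled
  system services ndmp show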
4.3.4 Data protection mirror
This feature provides asynchronous disaster recovery. Data protection mirror relationships enable you to periodically create Snapshot copies of data on one volume; copy those Snapshot copies to a partner volume (the destination volume), usually on another cluster; and retain those Snapshot copies. The mirror copy on the destination volume ensures quick availability and restoration of data from the time of the latest Snapshot copy, if the data on the source volume is corrupted or lost.
If you conduct tape backup and archival operations, you can perform them on the data that is already backed up on the destination volume.
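A minimal sketch of setting up a data protection mirror, assuming intercluster networking and cluster/SVM peering are already configured and that a DP-type destination volume exists (all names are examples):

  # Create the mirror relationship, run the baseline transfer, and schedule updates
  snapmirror create -source-path vs1:vol1 -destination-path vs2:vol1_dp -type DP -schedule hourly
  snapmirror initialize -destination-path vs2:vol1_dp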
4.3.5 Load-sharing mirror
A load-sharing mirror of a source flexible volume is a full, read-only copy of that flexible volume. Load-sharing mirrors are used to transparently off-load client read requests. Client write requests will fail unless directed to a specific writable path.
Load-sharing mirrors can also be used to improve the availability of the data in the source flexible volume. Load-sharing mirrors provide read-only access to the contents of the source flexible volume even if the source becomes unavailable. A load-sharing mirror can also be transparently promoted to become the read-write volume.
A cluster might have many load-sharing mirrors of a single source flexible volume. When load-sharing mirrors are used, every node in the cluster should have a load-sharing mirror of the source flexible volume. The node that currently hosts the source flexible volume should also have a load-sharing mirror. Identical load-sharing mirrors on the same node will yield no performance benefit.
Load-sharing mirrors are updated on demand or on a schedule that is defined by the cluster administrator. Writes made to the mirrored flexible volume will not be visible to readers of that flexible volume until the load-sharing mirrors are updated. Similarly, junctions added in the source flexible volume will not be visible to readers until the load-sharing mirrors are updated. Therefore, it is advised to use load-sharing mirrors for flexible volumes that are frequently read but infrequently written to.
SVM root volumes are typically small, contain only junctions to other volumes, do not contain user data, are frequently read, and are infrequently updated. SVM root volumes must be available for clients to traverse other volumes in the namespace. This makes SVM root volumes good candidates for mirroring across different nodes in the cluster.
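As a hedged sketch, load-sharing mirrors of an SVM root volume might be created on each node and then updated as a set; the SVM, volume, and aggregate names are examples.

  # Create DP-type volumes to hold the load-sharing copies (one per node is recommended)
  volume create -vserver vs1 -volume vs1_root_ls1 -aggregate aggr_node1 -type DP -size 1g
  volume create -vserver vs1 -volume vs1_root_ls2 -aggregate aggr_node2 -type DP -size 1g

  # Create the load-sharing mirror relationships
  snapmirror create -source-path vs1:vs1_root -destination-path vs1:vs1_root_ls1 -type LS
  snapmirror create -source-path vs1:vs1_root -destination-path vs1:vs1_root_ls2 -type LS

  # Baseline, and later update, all load-sharing mirrors of the source as a set
  snapmirror initialize-ls-set -source-path vs1:vs1_root
  snapmirror update-ls-set -source-path vs1:vs1_root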
In versions of Clustered Data ONTAP prior to 8.2, load-sharing mirrors were used to distribute access to read-only datasets. Clustered Data ONTAP 8.2 introduces FlexCache technology, which can also be used to distribute read access but, unlike load-sharing mirrors, also provides write access and is space efficient.
Load-sharing mirrors support NAS only (CIFS and NFSv3). They do not support NFSv4 clients or SAN client protocol connections (FC, FCoE, or iSCSI).
4.4 Cluster management
One of the major benefits of Clustered Data ONTAP is the ability to manage the cluster as a single entity through the cluster management interface. After a node is joined to the cluster, its components can be managed within the cluster context.
Clustered Data ONTAP systems can be managed in a variety of ways:
CLI: SSH to the cluster, a node, or an SVM (a brief example follows this list)
GUI: OnCommand System Manager
GUI: OnCommand Unified Manager
GUI: OnCommand Insight
GUI: OnCommand Balance
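For example, a cluster administrator might connect to the cluster management LIF over SSH and manage all nodes and SVMs from a single session; the host name and the commands shown are illustrative.

  # Connect to the cluster management LIF
  ssh admin@cluster1-mgmt

  # Manage the whole cluster from one session
  cluster show
  vserver show
  volume show -vserver vs1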
In Clustered Data ONTAP 8.2, latency metrics are available for SVMs, protocols, volumes, and nodes, among others. To find these metrics, use the commands in the statistics command directory in the CLI, such as statistics show-periodic.
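A hedged example of viewing rolling performance data for a single volume follows; the object, instance, and interval values are examples, and the available counters vary by release.

  # Show periodic performance statistics for one volume, sampled every 5 seconds
  statistics show-periodic -object volume -instance vol1 -interval 5 -iterations 12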